1
Chen L, Wang X, Li Y, Bao Y, Wang S, Zhao X, Yuan M, Kang J, Sun S. Development of a deep-learning algorithm for etiological classification of subarachnoid hemorrhage using non-contrast CT scans. Eur Radiol 2025. [PMID: 40382487; DOI: 10.1007/s00330-025-11666-2]
Abstract
OBJECTIVES This study aims to develop a deep learning algorithm for differentiating aneurysmal subarachnoid hemorrhage (aSAH) from non-aneurysmal subarachnoid hemorrhage (naSAH) using non-contrast computed tomography (NCCT) scans.

METHODS This retrospective study included 618 patients diagnosed with SAH. The dataset was divided into a training and internal validation cohort (533 cases: aSAH = 305, naSAH = 228) and an external test cohort (85 cases: aSAH = 55, naSAH = 30). Hemorrhage regions were automatically segmented using a U-Net++ architecture, and a ResNet-based deep learning model was trained to classify the etiology of SAH.

RESULTS The model achieved robust performance in distinguishing aSAH from naSAH. In the internal validation cohort, it yielded an average sensitivity of 0.898, specificity of 0.877, accuracy of 0.889, Matthews correlation coefficient (MCC) of 0.777, and area under the curve (AUC) of 0.948 (95% CI: 0.929-0.967). In the external test cohort, it demonstrated an average sensitivity of 0.891, specificity of 0.880, accuracy of 0.887, MCC of 0.761, and AUC of 0.914 (95% CI: 0.889-0.940), outperforming junior radiologists (average accuracy: 0.836; MCC: 0.660).

CONCLUSION The study presents a deep learning architecture capable of accurately identifying SAH etiology from NCCT scans. The model's high diagnostic performance highlights its potential to support rapid and precise clinical decision-making in emergency settings.

KEY POINTS
Question: Differentiating aSAH from naSAH is crucial for timely treatment, yet existing imaging modalities are not universally accessible or convenient for rapid diagnosis.
Findings: A ResNet-variant-based deep learning model using non-contrast CT scans classified SAH etiology with high accuracy and enhanced junior radiologists' diagnostic performance.
Clinical relevance: AI-driven analysis of non-contrast CT scans provides a fast, cost-effective, and non-invasive solution for preoperative SAH diagnosis. It facilitates early identification of patients needing aneurysm surgery while minimizing unnecessary angiography in non-aneurysmal cases, improving clinical workflow efficiency.
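The sensitivity, specificity, accuracy, and MCC figures reported above all derive from the four confusion-matrix counts. A minimal sketch of those definitions; the counts below are hypothetical, chosen only to match the external cohort's class sizes, and are not the study's actual confusion matrix:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, and Matthews correlation
    coefficient (MCC) from the four confusion-matrix counts."""
    sens = tp / (tp + fn)                   # true-positive rate
    spec = tn / (tn + fp)                   # true-negative rate
    acc = (tp + tn) / (tp + fp + tn + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"sensitivity": sens, "specificity": spec,
            "accuracy": acc, "mcc": mcc}

# Hypothetical counts for an 85-case test set (55 aSAH, 30 naSAH)
m = binary_metrics(tp=49, fp=4, tn=26, fn=6)
```

MCC is often preferred over plain accuracy here because the two etiology classes are imbalanced.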
Affiliation(s)
- Lingxu Chen
  - Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
  - Department of Radiology, Beijing Neurosurgical Institute, Beijing, China
- Xiaochen Wang
  - Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
  - Department of Radiology, Beijing Neurosurgical Institute, Beijing, China
- Yuanjun Li
  - Department of Radiology, Zhongshan Hospital Affiliated to Xiamen University, Hubinnan Road, Xiamen, China
- Yang Bao
  - Neusoft Medical Systems, Shenyang, China
- Sihui Wang
  - Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xuening Zhao
  - Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Mengyuan Yuan
  - Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jianghe Kang
  - Department of Radiology, Zhongshan Hospital Affiliated to Xiamen University, Hubinnan Road, Xiamen, China
- Shengjun Sun
  - Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
  - Department of Radiology, Beijing Neurosurgical Institute, Beijing, China

2
Reka S, Praba TS, Prasanna M, Reddy VNN, Amirtharajan R. Automated high precision PCOS detection through a segment anything model on super resolution ultrasound ovary images. Sci Rep 2025; 15:16832. [PMID: 40369044; PMCID: PMC12078606; DOI: 10.1038/s41598-025-01744-2]
Abstract
Polycystic ovary syndrome (PCOS) is a multifaceted disorder that often affects the ovarian morphology of women of reproductive age, resulting in the development of numerous cysts on the ovaries. PCOS is typically diagnosed with ultrasound imaging, which helps clinicians assess the size, shape, and presence of cysts in the ovaries. Nevertheless, manual ultrasound image analysis is often challenging and time-consuming, resulting in inter-observer variability. To treat PCOS effectively and prevent its long-term effects, prompt and accurate diagnosis is crucial; a deep learning prediction model can help physicians by streamlining the diagnostic procedure, reducing time and potential errors. This article proposes a novel integrated approach, QEI-SAM (Quality Enhanced Image - Segment Anything Model), for enhancing image quality and segmenting ovarian cysts for accurate prediction, building on generative adversarial networks (GANs) and convolutional neural networks (CNNs). The proposed QEI-SAM model uses Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) to increase resolution, sharpen edges, and restore the finer structure of the ultrasound ovary images, achieving an SSIM of 0.938, a PSNR of 38.60, and an LPIPS of 0.0859. It then incorporates the Segment Anything Model (SAM) to segment ovarian cysts, achieving a Dice coefficient of 0.9501 and an IoU score of 0.9050. Finally, CNN classifiers (ResNet-50, ResNet-101, VGG-16, VGG-19, AlexNet, and Inception v3) were implemented to diagnose PCOS promptly, with VGG-19 achieving the highest accuracy of 99.31%.
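The SSIM and PSNR figures quoted above have closed-form definitions. As an illustration, here is a pure-Python sketch of PSNR and a single-window (global) SSIM applied to flat intensity lists; practical SSIM implementations such as scikit-image's average the formula over sliding local windows instead:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length intensity lists."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """The standard SSIM formula applied once to the whole image
    (real implementations average it over local windows)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [10.0, 50.0, 90.0, 130.0]    # toy "image" as a flat intensity list
noisy = [p + 1.0 for p in img]     # uniform +1 intensity shift
```

A uniform brightness shift leaves the structure term untouched but lowers the luminance term, so SSIM dips just below 1 while PSNR stays high.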
Affiliation(s)
- S Reka
  - School of Computing, SASTRA Deemed University, Thirumalaisamudram, Thanjavur, 613401, India
- T Suriya Praba
  - School of Computing, SASTRA Deemed University, Thirumalaisamudram, Thanjavur, 613401, India
- Mukesh Prasanna
  - School of Computing, SASTRA Deemed University, Thirumalaisamudram, Thanjavur, 613401, India
- Rengarajan Amirtharajan
  - School of Electrical and Electronics Engineering, SASTRA Deemed University, Thirumalaisamudram, Thanjavur, 613401, India

3
Xing Y, Lin X. Challenges and advances in the management of inflammation in atherosclerosis. J Adv Res 2025; 71:317-335. [PMID: 38909884; DOI: 10.1016/j.jare.2024.06.016]
Abstract
INTRODUCTION Atherosclerosis, traditionally considered a lipid-related disease, is now understood as a chronic inflammatory condition with significant global health implications.

OBJECTIVES This review aims to examine the complex interactions among immune cells, cytokines, and the inflammatory cascade in atherosclerosis, shedding light on how these elements influence both the initiation and progression of the disease.

METHODS This review draws on recent clinical research to elucidate the roles of key immune cells (macrophages, T cells, and endothelial cells) and of clonal hematopoiesis in atherosclerosis development. It focuses on how these cells and processes contribute to disease initiation and progression, particularly through inflammation-driven processes that lead to plaque formation and stabilization. Macrophages ingest oxidized low-density lipoprotein (oxLDL), which is partially converted to high-density lipoprotein (HDL) or accumulates as lipid droplets, forming foam cells crucial for plaque stability. Additionally, macrophages exhibit diverse phenotypes within plaques, with pro-inflammatory types predominating and others specializing in debris clearance at rupture sites. The involvement of CD4+ and CD8+ T cells in these processes promotes inflammatory macrophage states, suppresses vascular smooth muscle cell proliferation, and enhances plaque instability.

RESULTS The nuanced roles of macrophages, T cells, and related immune cells within the atherosclerotic microenvironment are explored, revealing insights into the cellular and molecular pathways that fuel inflammation. The review also addresses recent advances in imaging and biomarker technology that enhance our understanding of disease progression. Moreover, it points out the limitations of current treatments and highlights the potential of emerging anti-inflammatory strategies, including clinical trials of agents targeting p38 MAPK, tumor necrosis factor α (TNF-α), and IL-1β, their preliminary outcomes, and the promising effects of canakinumab, colchicine, and IL-6R antagonists.

CONCLUSION This review explores cutting-edge anti-inflammatory interventions, their potential efficacy in preventing and alleviating atherosclerosis, and the role of nanotechnology in delivering drugs more effectively and safely.
Affiliation(s)
- Yiming Xing
  - Cardiology Department, The First Affiliated Hospital of Anhui Medical University, Hefei City, Anhui Province, 230022, China
- Xianhe Lin
  - Cardiology Department, The First Affiliated Hospital of Anhui Medical University, Hefei City, Anhui Province, 230022, China

4
Li Y, Li T, He K, Cui XX, Zhang LL, Wei XL, Liu Z, Wu M. A predictive nomogram of thyroid nodules based on deep learning ultrasound image analysis. Front Endocrinol (Lausanne) 2025; 16:1504412. [PMID: 40365227; PMCID: PMC12069047; DOI: 10.3389/fendo.2025.1504412]
Abstract
Objectives To compare the ultrasound characteristics of benign and malignant thyroid nodules, develop a deep learning model, and establish a nomogram based on deep learning ultrasound image analysis to improve predictive performance for thyroid nodules.

Materials and methods This retrospective study analyzed the clinical and ultrasound characteristics of 2247 thyroid nodules from March 2016 to October 2023. Of these, 1573 nodules were used for training and testing the deep learning models, and 674 nodules were used for validation, from which the deep learning predicted values were obtained. The 674 validation nodules were then randomly divided into a training set and a validation set in a 7:3 ratio to construct the nomogram model.

Results The accuracy of the deep learning model on the 674 thyroid nodules was 0.886, with a precision of 0.900, a recall of 0.889, and an F1-score of 0.895. Binary logistic analysis of the training set revealed that age, echogenic foci, and the deep learning predicted values were statistically significant (P < 0.05). These three indicators were used to construct the nomogram model, which showed higher accuracy than the Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) classification and the deep learning models alone. Moreover, the nomogram model exhibited good calibration and clinical benefit.

Conclusion Age, deep learning predicted values, and echogenic foci can serve as independent predictive factors to distinguish benign from malignant thyroid nodules. The nomogram integrates deep learning outputs with patients' clinical and ultrasound characteristics, yielding higher accuracy than C-TIRADS or the deep learning models alone.
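A nomogram of this kind is a graphical encoding of a fitted logistic regression: each predictor contributes a score, and the total maps to a malignancy probability. A sketch of that mapping with entirely hypothetical coefficients (the paper's fitted values are not reproduced here):

```python
import math

def malignancy_probability(age, echogenic_foci, dl_value,
                           b0=-4.0, b_age=-0.02, b_foci=1.5, b_dl=5.0):
    """Logistic model over the three predictors the nomogram uses:
    age, echogenic foci (0/1), and the deep learning predicted value
    (0..1). All coefficients here are hypothetical placeholders."""
    z = b0 + b_age * age + b_foci * echogenic_foci + b_dl * dl_value
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the score to (0, 1)
```

On the printed nomogram, each coefficient becomes a points axis and the sigmoid becomes the final probability scale.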
Affiliation(s)
- Yuan Li
  - Department of Ultrasound, the Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Ting Li
  - Department of Ultrasound, the Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Kai He
  - School of Information Science and Engineering, Shandong University, Qingdao, China
- Xiao-xiao Cui
  - School of Information Science and Engineering, Shandong University, Qingdao, China
- Lu-lu Zhang
  - Department of Pathology, the Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Xiu-liang Wei
  - Department of Ultrasound, the Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Zhi Liu
  - School of Information Science and Engineering, Shandong University, Qingdao, China
- Mei Wu
  - Department of Ultrasound, the Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China

5
Li C, Sultan RI, Bagher-Ebadian H, Qiang Y, Thind K, Zhu D, Chetty IJ. Enhancing CT image segmentation accuracy through ensemble loss function optimization. Med Phys 2025. [PMID: 40275531; DOI: 10.1002/mp.17848]
Abstract
BACKGROUND In CT-based medical image segmentation, the choice of loss function profoundly impacts the training efficacy of deep neural networks. Traditional loss functions such as cross entropy (CE), Dice, Boundary, and TopK each have unique strengths and limitations, often introducing biases when used individually.

PURPOSE This study aims to enhance segmentation accuracy by optimizing ensemble loss functions, thereby addressing the biases and limitations of single loss functions and their linear combinations.

METHODS We implemented a comprehensive evaluation of loss function combinations by integrating CE, Dice, Boundary, and TopK loss functions through both loss-level linear combination and model-level ensemble methods. Our approach utilized two state-of-the-art 3D segmentation architectures, Attention U-Net (AttUNet) and SwinUNETR, to test the impact of these methods. The study was conducted on two large CT cohorts: an institutional dataset containing pelvic organ segmentations, and a public dataset comprising multiple organ segmentations. All models were trained from scratch with different loss settings, and performance was evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and average surface distance (ASD). In the ensemble approach, both static averaging and learnable dynamic weighting strategies were employed to combine the outputs of models trained with different loss functions.

RESULTS Extensive experiments revealed the following: (1) the linear combination of loss functions achieved results comparable to those of single loss-driven methods; (2) compared to the best non-ensemble methods, ensemble-based approaches yielded a 2%-7% increase in DSC scores, along with notable reductions in HD (e.g., a 19.1% reduction for rectum segmentation using SwinUNETR) and ASD (e.g., a 49.0% reduction for prostate segmentation using AttUNet); (3) the learnable ensemble approach with optimized weights produced finer details in the predicted masks, as confirmed by qualitative analyses; and (4) the learnable ensemble consistently outperformed the static ensemble across most metrics (DSC, HD, ASD) for both the AttUNet and SwinUNETR architectures.

CONCLUSIONS Our findings support the efficacy of ensemble models with optimized weights for improving segmentation accuracy, highlighting the potential for broader applications in automated medical image analysis.
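The two strategies compared above can be sketched in a few lines: a loss-level linear combination (here CE plus soft Dice) and a model-level ensemble that averages per-model probability maps. This is an illustrative pure-Python version on flat probability lists, not the authors' 3D training code; the 0.5/0.5 weights are arbitrary:

```python
import math

def bce(p, y, eps=1e-7):
    """Mean binary cross entropy over per-voxel probabilities."""
    return -sum(t * math.log(max(q, eps)) +
                (1 - t) * math.log(max(1 - q, eps))
                for q, t in zip(p, y)) / len(p)

def soft_dice_loss(p, y, eps=1e-7):
    """1 - soft Dice coefficient between probabilities and binary labels."""
    inter = sum(q * t for q, t in zip(p, y))
    return 1 - (2 * inter + eps) / (sum(p) + sum(y) + eps)

def combined_loss(p, y, w_ce=0.5, w_dice=0.5):
    """Loss-level linear combination."""
    return w_ce * bce(p, y) + w_dice * soft_dice_loss(p, y)

def ensemble_predict(prob_maps, weights=None):
    """Model-level ensemble: weighted average of per-model probability
    maps; uniform weights give the static-averaging variant, learned
    weights give the dynamic one."""
    n = len(prob_maps)
    weights = weights or [1.0 / n] * n
    return [sum(w * pm[i] for w, pm in zip(weights, prob_maps))
            for i in range(len(prob_maps[0]))]

probs = [0.9, 0.2, 0.8, 0.1]   # one model's per-voxel foreground probabilities
truth = [1, 0, 1, 0]           # ground-truth mask
```

In the learnable variant, the per-model weights would themselves be parameters optimized on validation data rather than fixed constants.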
Affiliation(s)
- Chengyin Li
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
  - Department of Radiation Oncology, Henry Ford Health, Detroit, Michigan, USA
- Rafi Ibn Sultan
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Hassan Bagher-Ebadian
  - Department of Radiation Oncology, Henry Ford Health, Detroit, Michigan, USA
  - Department of Radiology, Michigan State University, E. Lansing, Michigan, USA
  - Department of Osteopathic, Michigan State University, E. Lansing, Michigan, USA
  - Department of Physics, Oakland University, Rochester, Michigan, USA
- Yao Qiang
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Kundan Thind
  - Department of Radiation Oncology, Henry Ford Health, Detroit, Michigan, USA
- Dongxiao Zhu
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Indrin J Chetty
  - Department of Radiation Oncology, Cedars Sinai Medical Center, Los Angeles, California, USA

6
Jonske F, Kim M, Nasca E, Evers J, Haubold J, Hosch R, Nensa F, Kamp M, Seibold C, Egger J, Kleesiek J. Why does my medical AI look at pictures of birds? Exploring the efficacy of transfer learning across domain boundaries. Comput Methods Programs Biomed 2025; 261:108634. [PMID: 39913993; DOI: 10.1016/j.cmpb.2025.108634]
Abstract
PURPOSE In medical deep learning, models not trained from scratch are typically fine-tuned from ImageNet-pretrained models. We posit that pretraining on data from the domain of the downstream task should almost always be preferable.

MATERIALS AND METHODS We leverage RadNet-12M and RadNet-1.28M, datasets containing over 12 million and 1.28 million acquired CT image slices, respectively, from 90,663 individual scans, and explore the efficacy of self-supervised, contrastive pretraining on the medical and natural image domains. We compare the respective performance gains for five downstream tasks. For each experiment, we report accuracy, AUC, or Dice score, with uncertainty estimates based on four separate runs. We quantify significance using Welch's t-test. Finally, we perform feature-space analysis to characterize the nature of the observed performance gains.

RESULTS We observe that intra-domain transfer (RadNet pretraining and CT-based tasks) compares favorably to cross-domain transfer (ImageNet pretraining and CT-based tasks), generally achieving comparable or improved performance: Δ = +0.44% (p = 0.541) when fine-tuning on RadNet-1.28M, Δ = +2.07% (p = 0.025) when linearly evaluating on RadNet-1.28M, and Δ = +1.63% (p = 0.114) when fine-tuning on 1% of RadNet-1.28M data. This intra-domain advantage extends to LiTS 2017, another CT-based dataset, but not to other medical imaging modalities. A corresponding intra-domain advantage was also observed for natural images. Outside the CT image domain, ImageNet-pretrained models generalized better than RadNet-pretrained models. We further demonstrate that pretraining on medical images yields domain-specific features that are preserved during fine-tuning and that correspond to macroscopic image properties and structures.

CONCLUSION We conclude that intra-domain pretraining generally outperforms cross-domain pretraining, but that very narrow domain definitions apply. Put simply, pretraining on CT images instead of natural images yields an advantage when fine-tuning on CT images, and only on CT images. We further conclude that ImageNet pretraining remains a strong baseline, as well as the best choice for pretraining when insufficient data from the target domain is available. Finally, we publish our pretrained models and pretraining guidelines as a baseline for future research.
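Welch's t-test, used above to quantify significance across the four runs per experiment, does not assume equal variances between the two samples. A self-contained sketch of the statistic and the Welch-Satterthwaite degrees of freedom; the p-value would then come from a Student-t distribution lookup (e.g. via scipy.stats):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

With only four runs per condition, the fractional degrees of freedom matter: they shrink the effective sample size when the two variances differ.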
Affiliation(s)
- Frederic Jonske
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
- Moon Kim
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
- Enrico Nasca
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
- Janis Evers
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
- Johannes Haubold
  - Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen (AöR), Germany
- René Hosch
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
  - Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen (AöR), Germany
- Felix Nensa
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
  - Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen (AöR), Germany
- Michael Kamp
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
  - Institute for Neuroinformatics, Ruhr University Bochum, Germany
  - Department of Data Science & AI, Monash University, Australia
- Constantin Seibold
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
- Jan Egger
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
  - Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Germany
- Jens Kleesiek
  - Institute of AI in Medicine (IKIM), University Medicine Essen (AöR), University Duisburg-Essen, Girardetstr. 2, 45131 Essen, Germany
  - Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Germany
  - Department of Physics, TU Dortmund University, Dortmund, Germany

7
Kim J, Kim MH, Lim DJ, Lee H, Lee JJ, Kwon HS, Kim MK, Song KH, Kim TJ, Jung SL, Lee YO, Baek KH. Deep Learning Technology for Classification of Thyroid Nodules Using Multi-View Ultrasound Images: Potential Benefits and Challenges in Clinical Application. Endocrinol Metab (Seoul) 2025; 40:216-224. [PMID: 39805576; PMCID: PMC12061742; DOI: 10.3803/enm.2024.2058]
Abstract
BACKGROUND This study aimed to evaluate the applicability of deep learning technology to thyroid ultrasound images for the classification of thyroid nodules.

METHODS This retrospective analysis included ultrasound images of patients with thyroid nodules investigated by fine-needle aspiration at the thyroid clinic of a single center from April 2010 to September 2012. Thyroid nodules with cytopathologic results of Bethesda category V (suspicious for malignancy) or VI (malignant) were defined as thyroid cancer. Multiple deep learning algorithms based on convolutional neural networks (CNNs), namely ResNet, DenseNet, and EfficientNet, were utilized, and Siamese neural networks facilitated multi-view analysis of paired transverse and longitudinal ultrasound images.

RESULTS Among 1,048 analyzed thyroid nodules from 943 patients, 306 (29%) were identified as thyroid cancer. In a subgroup analysis of transverse and longitudinal images, longitudinal images showed superior predictive ability. Multi-view modeling based on paired transverse and longitudinal images significantly improved model performance, with an accuracy of 0.82 (95% confidence interval [CI], 0.80 to 0.86) with ResNet50, 0.83 (95% CI, 0.83 to 0.88) with DenseNet201, and 0.81 (95% CI, 0.79 to 0.84) with EfficientNetV2-S. Training with high-resolution images obtained using the latest equipment tended to improve model performance in association with increased sensitivity.

CONCLUSION CNN algorithms applied to ultrasound images demonstrated substantial accuracy in thyroid nodule classification, indicating their potential as valuable tools for diagnosing thyroid cancer. In real-world clinical settings, however, it is important to be aware that model performance may vary with the quality of images acquired by different physicians and imaging devices.
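The multi-view design pairs the transverse and longitudinal images through one shared encoder and fuses both embeddings before classification. A schematic sketch in which a single linear map stands in for the shared CNN backbone; every weight here is a made-up placeholder, not a trained parameter:

```python
def encode(view, weights):
    """Shared 'encoder': one linear layer standing in for a CNN backbone.
    Applying the same weights to both views is what makes it Siamese."""
    return [sum(w * x for w, x in zip(row, view)) for row in weights]

def multi_view_logit(transverse, longitudinal, enc_w, head_w, bias):
    """Fuse the two view embeddings by concatenation, then score the
    fused vector with a linear classification head."""
    fused = encode(transverse, enc_w) + encode(longitudinal, enc_w)
    return bias + sum(w * f for w, f in zip(head_w, fused))
```

Weight sharing halves the encoder parameters and forces both views into the same feature space, which is what lets the head compare them.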
Affiliation(s)
- Jinyoung Kim
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Min-Hee Kim
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Dong-Jun Lim
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Hankyeol Lee
  - Department of Computer Engineering, Hongik University, Seoul, Korea
- Jae Jun Lee
  - Department of Industrial and Data Engineering, Hongik University, Seoul, Korea
- Hyuk-Sang Kwon
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Mee Kyoung Kim
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Ki-Ho Song
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Tae-Jung Kim
  - Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- So Lyung Jung
  - Department of Radiology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Yong Oh Lee
  - Department of Industrial and Data Engineering, Hongik University, Seoul, Korea
- Ki-Hyun Baek
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, College of Medicine, The Catholic University of Korea, Seoul, Korea

8
König C, Copado P, Lamarca M, Guendouz W, Fischer R, Schlechte M, Acuña V, Berna F, Gawęda Ł, Vellido A, Nebot À, Angulo C, Ochoa S. Data harmonization for the analysis of personalized treatment of psychosis with metacognitive training. Sci Rep 2025; 15:10159. [PMID: 40128308; PMCID: PMC11933379; DOI: 10.1038/s41598-025-94815-3]
Abstract
Personalized medicine is a data-driven approach that aims to adapt patients' diagnostics and therapies to their characteristics and needs. The availability of patient data is therefore paramount for personalizing treatments on the basis of predictive models, and even more so in machine learning-based analyses. Data harmonization is an essential part of data curation. This study presents research on data harmonization for the development of a harmonized retrospective database of patients who received Metacognitive Training (MCT) for psychotic disorders. The work is part of the European ERAPERMED 2022-292 research project 'Towards a Personalized Medicine Approach to Psychological Treatment of Psychosis' (PERMEPSY), which focuses on the development of a personalized medicine platform for the treatment of psychosis. The study integrates information from 22 studies into a common format to enable a data-analytical approach to personalized treatment. The harmonized database comprises information on 698 patients who underwent MCT and includes a wide range of sociodemographic variables and psychological indicators used to assess a patient's mental health state. The characteristics of the participating patients are analyzed using descriptive statistics and exploratory data analysis.
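The mechanical core of harmonizing 22 studies is mapping each study's field names (and codings) onto one shared schema. A minimal sketch with invented study IDs and field names, not the actual PERMEPSY schema:

```python
# Hypothetical per-study field names mapped onto a common schema.
SCHEMA_MAP = {
    "study_a": {"sex": "gender", "age": "age_years", "panss": "panss_total"},
    "study_b": {"sex": "sex", "age": "age", "panss": "PANSS"},
}

def harmonize(record, study_id, schema_map=SCHEMA_MAP):
    """Rename one patient record's study-specific keys to the shared
    ones; fields a study never collected come back as None instead of
    silently disappearing."""
    mapping = schema_map[study_id]
    return {common: record.get(local) for common, local in mapping.items()}
```

Keeping missing fields explicit as None (rather than dropping them) is what lets downstream analyses distinguish "not collected" from "not recorded".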
Affiliation(s)
- Caroline König
  - Soft Computing Research Group (SOCO) at Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Centre, Universitat Politècnica de Catalunya (UPC Barcelona Tech), Jordi Girona 1-3, 08034, Barcelona, Spain
- Pedro Copado
  - Soft Computing Research Group (SOCO) at Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Centre, Universitat Politècnica de Catalunya (UPC Barcelona Tech), Jordi Girona 1-3, 08034, Barcelona, Spain
- Maria Lamarca
  - MERITT Group, Institut de Recerca Sant Joan de Déu, Parc Sanitari Sant Joan de Déu, 08830, Sant Boi de Llobregat, Barcelona, Spain
  - Consorcio de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
  - Clinical and Health Psychology Department, School of Psychology, Universitat Autònoma de Barcelona, Bellaterra, 08193, Barcelona, Spain
- Wafaa Guendouz
  - Soft Computing Research Group (SOCO) at Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Centre, Universitat Politècnica de Catalunya (UPC Barcelona Tech), Jordi Girona 1-3, 08034, Barcelona, Spain
- Rabea Fischer
  - Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Merle Schlechte
  - Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Vanessa Acuña
  - Departamento de Psiquiatría, Escuela de Medicina, Facultad de Medicina, Universidad de Valparaíso, Valparaíso, Chile
- Fabrice Berna
  - Inserm, University Hospital of Strasbourg, University of Strasbourg, 67091, Strasbourg, France
- Łukasz Gawęda
  - Experimental Psychopathology Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Alfredo Vellido
  - Soft Computing Research Group (SOCO) at Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Centre, Universitat Politècnica de Catalunya (UPC Barcelona Tech), Jordi Girona 1-3, 08034, Barcelona, Spain
- Àngela Nebot
  - Soft Computing Research Group (SOCO) at Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Centre, Universitat Politècnica de Catalunya (UPC Barcelona Tech), Jordi Girona 1-3, 08034, Barcelona, Spain
- Cecilio Angulo
  - Knowledge Engineering Research Group (GREC) at Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Centre, Universitat Politècnica de Catalunya (UPC Barcelona Tech), Jordi Girona 1-3, 08034, Barcelona, Spain
- Susana Ochoa
  - MERITT Group, Institut de Recerca Sant Joan de Déu, Parc Sanitari Sant Joan de Déu, 08830, Sant Boi de Llobregat, Barcelona, Spain
  - Consorcio de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain

9
Oguzhan A, Peskersoy C, Devrimci EE, Kemaloglu H, Onder TK. Implementation of machine learning models as a quantitative evaluation tool for preclinical studies in dental education. J Dent Educ 2025; 89:383-397. [PMID: 39327675 DOI: 10.1002/jdd.13722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 08/02/2024] [Accepted: 09/04/2024] [Indexed: 09/28/2024]
Abstract
PURPOSE AND OBJECTIVE Objective, valid, and reliable evaluations are needed to develop haptic skills in dental education. The aim of this study is to investigate the validity and reliability of a machine learning method for evaluating the haptic skills of dentistry students. MATERIALS AND METHODS One hundred fifty sixth-semester dental students performed Class II amalgam (C2A) and composite resin restorations (C2CR), in which all stages were evaluated with Direct Observation Practical Skills forms. The final phase was graded separately by three trainers and supervisors. Standard photographs of the restorations in the final stage were taken from different angles in a dedicated setup and transferred to a Python program, which used the Structural Similarity algorithm to calculate both the quantitative (numerical) and qualitative (visual) differences of each restoration. The validity and reliability of inter-examiner evaluation were tested with Cronbach's alpha and kappa statistics (significance level: 0.05). RESULTS The reliability between the Structural Similarity Index (SSIM) and the examiners was high in both C2A (α = 0.961) and C2CR (α = 0.856). The difference between final grades given by SSIM (53.07) and the examiners (56.85) was statistically insignificant (p > 0.05). A significant difference was found between the examiners and SSIM when grading the occlusal surfaces in C2A and the palatal surfaces in C2CR (p < 0.05). The concordance of observer assessments was almost perfect in C2A (κ = 0.806) and acceptable in C2CR (κ = 0.769). CONCLUSION Although machine learning is a promising tool for evaluating haptic skills, further improvement and alignment are required for fully objective and reliable validation across all cases of dental training in restorative dentistry.
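The grading in this study hinges on the Structural Similarity (SSIM) measure. As a hedged sketch (not the authors' pipeline; their preprocessing, windowing, and grade mapping are not given in the abstract), the global form of SSIM can be computed directly from its definition with NumPy; library implementations such as scikit-image additionally average the statistic over a sliding local window:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Global SSIM between two same-sized grayscale images in [0, data_range].

    Uses the standard stabilizing constants C1 = (0.01 L)^2, C2 = (0.03 L)^2.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Toy stand-ins for the standardized restoration photographs (illustrative only).
rng = np.random.default_rng(0)
reference = rng.random((64, 64))   # e.g., an ideal restoration image
identical = reference.copy()       # a perfect reproduction
degraded = np.clip(reference + rng.normal(0.0, 0.2, reference.shape), 0.0, 1.0)

print(global_ssim(reference, identical))  # ~1.0 for identical images
print(global_ssim(reference, degraded))   # drops as differences grow
```

A score near 1.0 indicates near-identical images, which is what makes SSIM usable as a quantitative stand-in for examiner grades.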
Affiliation(s)
- Aybeniz Oguzhan
- Department of Restorative Dentistry, Faculty of Dentistry, Ege University, Izmir, Turkey
- Cem Peskersoy
- Department of Restorative Dentistry, Faculty of Dentistry, Ege University, Izmir, Turkey
- Elif Ercan Devrimci
- Department of Restorative Dentistry, Faculty of Dentistry, Ege University, Izmir, Turkey
- Hande Kemaloglu
- Department of Restorative Dentistry, Faculty of Dentistry, Ege University, Izmir, Turkey
- Tolga Kagan Onder
- Department of Mechanical Engineering, ARQUQ Project Partnership, Izmir, Turkey
10
He R, Jie P, Hou W, Long Y, Zhou G, Wu S, Liu W, Lei W, Wen W, Wen Y. Real-time artificial intelligence-assisted detection and segmentation of nasopharyngeal carcinoma using multimodal endoscopic data: a multi-center, prospective study. EClinicalMedicine 2025; 81:103120. [PMID: 40026832 PMCID: PMC11871492 DOI: 10.1016/j.eclinm.2025.103120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/22/2024] [Revised: 01/16/2025] [Accepted: 01/31/2025] [Indexed: 03/05/2025] Open
Abstract
Background Nasopharyngeal carcinoma (NPC) is a common malignancy in southern China and is often underdiagnosed due to reliance on physician expertise. Artificial intelligence (AI) can enhance diagnostic accuracy and efficiency using large datasets and advanced algorithms. Methods Nasal endoscopy videos with white light imaging (WLI) and narrow-band imaging (NBI) modes from 707 patients treated at one center in China from June 2020 to December 2022 were prospectively collected. A total of 8816 frames were obtained through standardized data procedures. The Nasopharyngeal Carcinoma Diagnosis Segmentation Network Framework (NPC-SDNet) was developed and internally tested on these frames. Two hundred frames were randomly selected to compare the diagnostic performance of NPC-SDNet and rhinologists. Two external testing sets with 2818 images from other hospitals validated the robustness and generalizability of the model. This study was registered at clinicaltrials.gov (NCT04547673). Findings The diagnostic accuracy, precision, recall, and specificity of NPC-SDNet using WLI were 95.0% (95% CI: 94.1%-96.2%), 93.5% (95% CI: 90.2%-95.2%), 97.2% (95% CI: 96.2%-98.3%), and 93.5% (95% CI: 91.7%-94.0%), respectively, and using NBI were 95.8% (95% CI: 94.0%-96.8%), 93.1% (95% CI: 91.0%-95.6%), 96.0% (95% CI: 95.7%-96.8%), and 97.2% (95% CI: 97.1%-97.4%), respectively. Segmentation performance was also robust, with mean Intersection over Union scores of 83.4% (95% CI: 81.8%-85.6%; NBI) and 83.7% (95% CI: 85.1%-90.1%; WLI). In head-to-head comparisons with rhinologists, NPC-SDNet achieved a diagnostic accuracy of 94.0% (95% CI: 91.5%-95.8%) and processed 1000 frames per minute, outperforming clinicians (68.9%-88.2%) across expertise levels. External validation further supported the reliability of NPC-SDNet, with area under the receiver operating characteristic curve (AUC) values of 0.998 and 0.977 in NBI images and 0.977 and 0.970 in WLI images.
Interpretation NPC-SDNet demonstrates excellent real-time diagnostic and segmentation accuracy, offering a promising tool for enhancing the precision of NPC diagnosis. Funding This work was supported by National Key R&D Program of China (2020YFC1316903), the National Natural Science Foundation of China (NSFC) grants (81900918, 82020108009), Natural Science Foundation of Guangdong Province (2022A1515010002), Key-Area Research and Development of Guangdong Province (2023B1111040004, 2020B1111190001), and Key Clinical Technique of Guangzhou (2023P-ZD06).
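The segmentation results above are reported as mean Intersection over Union (IoU). A minimal sketch of that metric for binary masks (the tiny masks below are toy data, not study images):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:          # both masks empty: conventionally perfect agreement
        return 1.0
    return float(np.logical_and(pred, target).sum() / union)

# Toy 4x4 lesion masks: the prediction overlaps the target on 2 pixels and
# adds 1 spurious pixel, so the union has 3 foreground pixels in total.
pred = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])

print(iou(pred, target))  # 2 intersecting / 3 in union -> 0.666...
```

Mean IoU, as reported in the abstract, is this quantity averaged over all evaluated frames.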
Affiliation(s)
- Rui He
- Department of Otolaryngology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Otorhinolaryngology Institute, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Pengyu Jie
- The School of Intelligent Engineering, Sun Yat-Sen University-Shenzhen Campus, Shenzhen, 518107, PR China
- Weijian Hou
- Department of Otolaryngology Head and Neck Surgery, Kiang Wu Hospital, 999078, Macau, PR China
- Yudong Long
- Department of Otolaryngology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Otorhinolaryngology Institute, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Guanqun Zhou
- Department of Radiation Oncology, Sun Yat-sen University Cancer Centre, Guangzhou, PR China
- Shumei Wu
- Department of Otolaryngology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Otorhinolaryngology Institute, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Wanquan Liu
- The School of Intelligent Engineering, Sun Yat-Sen University-Shenzhen Campus, Shenzhen, 518107, PR China
- Wenbin Lei
- Department of Otolaryngology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Otorhinolaryngology Institute, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Weiping Wen
- Department of Otolaryngology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Otorhinolaryngology Institute, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Yihui Wen
- Department of Otolaryngology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, PR China
- Otorhinolaryngology Institute, Sun Yat-sen University, Guangzhou, Guangdong, PR China
11
Li J, Zhou Z, Yang J, Pepe A, Gsaxner C, Luijten G, Qu C, Zhang T, Chen X, Li W, Wodzinski M, Friedrich P, Xie K, Jin Y, Ambigapathy N, Nasca E, Solak N, Melito GM, Vu VD, Memon AR, Schlachta C, De Ribaupierre S, Patel R, Eagleson R, Chen X, Mächler H, Kirschke JS, de la Rosa E, Christ PF, Li HB, Ellis DG, Aizenberg MR, Gatidis S, Küstner T, Shusharina N, Heller N, Andrearczyk V, Depeursinge A, Hatt M, Sekuboyina A, Löffler MT, Liebl H, Dorent R, Vercauteren T, Shapey J, Kujawa A, Cornelissen S, Langenhuizen P, Ben-Hamadou A, Rekik A, Pujades S, Boyer E, Bolelli F, Grana C, Lumetti L, Salehi H, Ma J, Zhang Y, Gharleghi R, Beier S, Sowmya A, Garza-Villarreal EA, Balducci T, Angeles-Valdez D, Souza R, Rittner L, Frayne R, Ji Y, Ferrari V, Chatterjee S, Dubost F, Schreiber S, Mattern H, Speck O, Haehn D, John C, Nürnberger A, Pedrosa J, Ferreira C, Aresta G, Cunha A, Campilho A, Suter Y, Garcia J, Lalande A, Vandenbossche V, Van Oevelen A, Duquesne K, Mekhzoum H, Vandemeulebroucke J, Audenaert E, Krebs C, van Leeuwen T, Vereecke E, Heidemeyer H, Röhrig R, Hölzle F, Badeli V, Krieger K, Gunzer M, Chen J, van Meegdenburg T, Dada A, Balzer M, Fragemann J, Jonske F, Rempe M, Malorodov S, Bahnsen FH, Seibold C, Jaus A, Marinov Z, Jaeger PF, Stiefelhagen R, Santos AS, Lindo M, Ferreira A, Alves V, Kamp M, Abourayya A, Nensa F, Hörst F, Brehmer A, Heine L, Hanusrichter Y, Weßling M, Dudda M, Podleska LE, Fink MA, Keyl J, Tserpes K, Kim MS, Elhabian S, Lamecker H, Zukić D, Paniagua B, Wachinger C, Urschler M, Duong L, Wasserthal J, Hoyer PF, Basu O, Maal T, Witjes MJH, Schiele G, Chang TC, Ahmadi SA, Luo P, Menze B, Reyes M, Deserno TM, Davatzikos C, Puladi B, Fua P, Yuille AL, Kleesiek J, Egger J. MedShapeNet - a large-scale dataset of 3D medical shapes for computer vision. BIOMED ENG-BIOMED TE 2025; 70:71-90. [PMID: 39733351 DOI: 10.1515/bmt-2024-0396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2024] [Accepted: 09/21/2024] [Indexed: 12/31/2024]
Abstract
OBJECTIVES Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. METHODS We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, the majority of shapes are modeled directly on the imaging data of real patients. We present use cases in brain tumor classification, skull reconstruction, multi-class anatomy completion, education, and 3D printing. RESULTS To date, MedShapeNet includes 23 datasets with more than 100,000 shapes paired with annotations (ground truth). The data are freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality and 3D printing. CONCLUSIONS MedShapeNet contains medical shapes of anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/.
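The abstract contrasts voxel grids with the mesh and point-cloud representations common in computer vision. As a hedged illustration of why these views are interconvertible (generic NumPy code, not the MedShapeNet Python API):

```python
import numpy as np

def voxels_to_point_cloud(grid: np.ndarray) -> np.ndarray:
    """Return the (N, 3) integer coordinates of occupied voxels."""
    return np.argwhere(grid)

def point_cloud_to_voxels(points: np.ndarray, shape: tuple) -> np.ndarray:
    """Rasterize an (N, 3) integer point cloud back into a binary voxel grid."""
    grid = np.zeros(shape, dtype=bool)
    grid[tuple(points.T)] = True  # advanced indexing with one array per axis
    return grid

# A tiny 4x4x4 binary "anatomical" volume with three occupied voxels.
volume = np.zeros((4, 4, 4), dtype=bool)
volume[0, 0, 0] = volume[1, 2, 3] = volume[3, 3, 3] = True

cloud = voxels_to_point_cloud(volume)                 # (3, 3) array of coordinates
restored = point_cloud_to_voxels(cloud, volume.shape)

print(cloud.shape)                       # (3, 3)
print(np.array_equal(volume, restored))  # True: the round trip is lossless
```

For integer-aligned voxel data the round trip is exact; mesh or implicit-surface conversions (e.g., marching cubes) additionally involve interpolation and are not shown here.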
Affiliation(s)
- Jianning Li
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
| | - Zongwei Zhou
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Jiancheng Yang
- Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
| | - Antonio Pepe
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
| | - Christina Gsaxner
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | - Gijs Luijten
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
| | - Chongyu Qu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Tiezheng Zhang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Xiaoxi Chen
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Wenxuan Li
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
| | - Paul Friedrich
- Center for Medical Image Analysis & Navigation (CIAN), Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
| | - Kangxian Xie
- Department of Computer Science and Engineering, University at Buffalo, SUNY, NY, 14260, USA
| | - Yuan Jin
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, Hangzhou, Zhejiang, China
| | - Narmada Ambigapathy
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Enrico Nasca
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Naida Solak
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
| | - Gian Marco Melito
- Institute of Mechanics, Graz University of Technology, Graz, Austria
| | - Viet Duc Vu
- Department of Diagnostic and Interventional Radiology, University Hospital Giessen, Justus-Liebig-University Giessen, Giessen, Germany
| | - Afaque R Memon
- Department of Mechanical Engineering, Mehran University of Engineering and Technology, Jamshoro, Sindh, Pakistan
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Christopher Schlachta
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Sandrine De Ribaupierre
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Rajnikant Patel
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Roy Eagleson
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Xiaojun Chen
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Heinrich Mächler
- Department of Cardiac Surgery, Medical University Graz, Graz, Austria
| | - Jan Stefan Kirschke
- Geschäftsführender Oberarzt Abteilung für Interventionelle und Diagnostische Neuroradiologie, Universitätsklinikum der Technischen Universität München, München, Germany
| | - Ezequiel de la Rosa
- icometrix, Leuven, Belgium
- Department of Informatics, Technical University of Munich, Garching bei München, Germany
| | | | - Hongwei Bran Li
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - David G Ellis
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, USA
| | - Michele R Aizenberg
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, USA
| | - Sergios Gatidis
- University Hospital of Tuebingen Diagnostic and Interventional Radiology Medical Image and Data Analysis (MIDAS.lab), Tuebingen, Germany
| | - Thomas Küstner
- University Hospital of Tuebingen Diagnostic and Interventional Radiology Medical Image and Data Analysis (MIDAS.lab), Tuebingen, Germany
| | - Nadya Shusharina
- Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | | | - Vincent Andrearczyk
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
| | - Adrien Depeursinge
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Lausanne, Switzerland
| | - Mathieu Hatt
- LaTIM, INSERM UMR 1101, Univ Brest, Brest, France
| | - Anjany Sekuboyina
- Department of Informatics, Technical University of Munich, Garching bei München, Germany
| | | | - Hans Liebl
- Department of Neuroradiology, Klinikum Rechts der Isar, Munich, Germany
| | - Reuben Dorent
- King's College London, Strand, London, UK
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | | | | | | | - Stefan Cornelissen
- Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
- Video Coding & Architectures Research Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Patrick Langenhuizen
- Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
- Video Coding & Architectures Research Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Achraf Ben-Hamadou
- Centre de Recherche en Numérique de Sfax, Laboratory of Signals, Systems, Artificial Intelligence and Networks, Sfax, Tunisia
- Udini, Aix-en-Provence, France
| | - Ahmed Rekik
- Centre de Recherche en Numérique de Sfax, Laboratory of Signals, Systems, Artificial Intelligence and Networks, Sfax, Tunisia
- Udini, Aix-en-Provence, France
| | - Sergi Pujades
- Inria, Université Grenoble Alpes, CNRS, Grenoble, France
| | - Edmond Boyer
- Inria, Université Grenoble Alpes, CNRS, Grenoble, France
| | - Federico Bolelli
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
| | - Costantino Grana
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
| | - Luca Lumetti
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
| | - Hamidreza Salehi
- Department of Artificial Intelligence in Medical Sciences, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran, Iran
| | - Jun Ma
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada
- Peter Munk Cardiac Centre, University Health Network, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
| | - Yao Zhang
- Shanghai AI Laboratory, Shanghai, People's Republic of China
| | - Ramtin Gharleghi
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney, NSW, Australia
| | - Susann Beier
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney, NSW, Australia
| | - Arcot Sowmya
- School of Computer Science and Engineering, UNSW, Sydney, NSW, Australia
| | | | - Thania Balducci
- Institute of Neurobiology, Universidad Nacional Autónoma de México Campus Juriquilla, Querétaro, Mexico
| | - Diego Angeles-Valdez
- Institute of Neurobiology, Universidad Nacional Autónoma de México Campus Juriquilla, Querétaro, Mexico
- Department of Biomedical Sciences of Cells and Systems, Cognitive Neuroscience Center, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Roberto Souza
- Advanced Imaging and Artificial Intelligence Lab, Electrical and Software Engineering Department, The Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
| | - Leticia Rittner
- Medical Image Computing Lab, School of Electrical and Computer Engineering (FEEC), University of Campinas, Campinas, Brazil
| | - Richard Frayne
- Radiology and Clinical Neurosciences Departments, The Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, Canada
| | - Yuanfeng Ji
- University of Hongkong, Pok Fu Lam, Hong Kong, People's Republic of China
| | - Vincenzo Ferrari
- Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- EndoCAS Center, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, Pisa, Italy
| | - Soumick Chatterjee
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Genomics Research Centre, Human Technopole, Milan, Italy
| | | | - Stefanie Schreiber
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Neurology, Medical Faculty, University Hospital of Magdeburg, Magdeburg, Germany
| | - Hendrik Mattern
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Oliver Speck
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Daniel Haehn
- University of Massachusetts Boston, Boston, MA, USA
| | | | - Andreas Nürnberger
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
| | - Carlos Ferreira
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
| | - Guilherme Aresta
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - António Cunha
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Universidade of Trás-os-Montes and Alto Douro (UTAD), Vila Real, Portugal
| | - Aurélio Campilho
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
| | - Yannick Suter
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
| | - Jose Garcia
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
| | - Alain Lalande
- ICMUB Laboratory, Faculty of Medicine, CNRS UMR 6302, University of Burgundy, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
| | | | - Aline Van Oevelen
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
| | - Kate Duquesne
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
| | - Hamza Mekhzoum
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium
| | - Jef Vandemeulebroucke
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium
| | - Emmanuel Audenaert
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
| | - Claudia Krebs
- Department of Cellular and Physiological Sciences, Life Sciences Centre, University of British Columbia, Vancouver, BC, Canada
| | - Timo van Leeuwen
- Department of Development & Regeneration, KU Leuven Campus Kulak, Kortrijk, Belgium
| | - Evie Vereecke
- Department of Development & Regeneration, KU Leuven Campus Kulak, Kortrijk, Belgium
| | - Hauke Heidemeyer
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
| | - Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
| | - Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | - Vahid Badeli
- Institute of Fundamentals and Theory in Electrical Engineering, Graz University of Technology, Graz, Austria
| | - Kathrin Krieger
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
| | - Matthias Gunzer
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
- Institute for Experimental Immunology and Imaging, University Hospital, University Duisburg-Essen, Essen, Germany
| | - Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
| | - Timo van Meegdenburg
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Faculty of Statistics, Technical University Dortmund, Dortmund, Germany
| | - Amin Dada
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Miriam Balzer
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Jana Fragemann
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Frederic Jonske
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Moritz Rempe
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Stanislav Malorodov
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Fin H Bahnsen
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Constantin Seibold
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Alexander Jaus
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Zdravko Marinov
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Paul F Jaeger
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany
- Helmholtz Imaging, DKFZ Heidelberg, Heidelberg, Germany
| | - Rainer Stiefelhagen
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Ana Sofia Santos
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - Mariana Lindo
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - André Ferreira
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - Victor Alves
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - Michael Kamp
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Institute for Neuroinformatics, Ruhr University Bochum, Bochum, Germany
- Department of Data Science & AI, Monash University, Clayton, VIC, Australia
| | - Amr Abourayya
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute for Neuroinformatics, Ruhr University Bochum, Bochum, Germany
| | - Felix Nensa
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
| | - Fabian Hörst
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
| | - Alexander Brehmer
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Lukas Heine
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
| | - Yannik Hanusrichter
- Department of Tumour Orthopaedics and Revision Arthroplasty, Orthopaedic Hospital Volmarstein, Wetter, Germany
- Center for Musculoskeletal Surgery, University Hospital of Essen, Essen, Germany
| | - Martin Weßling
- Department of Tumour Orthopaedics and Revision Arthroplasty, Orthopaedic Hospital Volmarstein, Wetter, Germany
- Center for Musculoskeletal Surgery, University Hospital of Essen, Essen, Germany
- Marcel Dudda
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Essen, Essen, Germany
- Department of Orthopaedics and Trauma Surgery, BG-Klinikum Duisburg, University of Duisburg-Essen, Essen, Germany
- Lars E Podleska
- Department of Tumor Orthopedics and Sarcoma Surgery, University Hospital Essen (AöR), Essen, Germany
- Matthias A Fink
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
- Julius Keyl
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Konstantinos Tserpes
- Department of Informatics and Telematics, Harokopio University of Athens, Tavros, Greece
- Moon-Sung Kim
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
- Dženan Zukić
- Medical Computing, Kitware Inc., Carrboro, NC, USA
- Christian Wachinger
- Lab for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University Munich, Munich, Germany
- Martin Urschler
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
- Luc Duong
- Department of Software and IT Engineering, Ecole de Technologie Superieure, Montreal, Quebec, Canada
- Jakob Wasserthal
- Clinic of Radiology & Nuclear Medicine, University Hospital Basel, Basel, Switzerland
- Peter F Hoyer
- Pediatric Clinic II, University Children's Hospital Essen, University Duisburg-Essen, Essen, Germany
- Oliver Basu
- Pediatric Clinic III, University Children's Hospital Essen, University Duisburg-Essen, Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
- Thomas Maal
- Radboudumc 3D-Lab, Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Max J H Witjes
- 3D Lab, Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, the Netherlands
- Gregor Schiele
- Intelligent Embedded Systems Lab, University of Duisburg-Essen, Bismarckstraße 90, 47057 Duisburg, Germany
- Ping Luo
- University of Hong Kong, Pok Fu Lam, Hong Kong, People's Republic of China
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Thomas M Deserno
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Penn Neurodegeneration Genomics Center, University of Pennsylvania, Philadelphia, PA, USA; and Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Pascal Fua
- Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Alan L Yuille
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- Department of Physics, TU Dortmund University, Dortmund, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
12.
Caldarelli P, Deininger L, Zhao S, Panda P, Yang C, Mikut R, Zernicka-Goetz M. AI-based approach to dissect the variability of mouse stem cell-derived embryo models. Nat Commun 2025; 16:1772. [PMID: 39971935 PMCID: PMC11839995 DOI: 10.1038/s41467-025-56908-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2024] [Accepted: 02/05/2025] [Indexed: 02/21/2025] Open
Abstract
Recent advances in stem cell-derived embryo models have transformed developmental biology, offering insights into embryogenesis without the constraints of natural embryos. However, variability in their development challenges research standardization. To address this, we use deep learning to enhance the reproducibility of selecting stem cell-derived embryo models. Through live imaging and AI-based models, we classify 900 mouse post-implantation stem cell-derived embryo-like structures (ETiX-embryos) into normal and abnormal categories. Our best-performing model achieves 88% accuracy at 90 h post-cell seeding and 65% accuracy at the initial cell-seeding stage, forecasting developmental trajectories. Our analysis reveals that normally developed ETiX-embryos have higher cell counts and distinct morphological features such as larger size and more compact shape. Perturbation experiments increasing initial cell numbers further supported this finding by improving normal development outcomes. This study demonstrates deep learning's utility in improving embryo model selection and reveals critical features of ETiX-embryo self-organization, advancing consistency in this evolving field.
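The morphological findings above (larger size, more compact shape) can be quantified directly from segmentation masks; one standard compactness measure is the isoperimetric quotient 4πA/P². A minimal sketch — the paper does not state which shape descriptor it uses, so this is illustrative only:

```python
import numpy as np

def compactness(area, perimeter):
    """Isoperimetric quotient: close to 1.0 for a circle, approaching 0
    for elongated or irregular shapes."""
    return 4 * np.pi * area / perimeter ** 2

# a circle is maximally compact (quotient close to 1.0) ...
r = 10.0
print(compactness(np.pi * r ** 2, 2 * np.pi * r))
# ... while an elongated 20 x 2 rectangle scores much lower
print(compactness(20 * 2, 2 * (20 + 2)))
```

In practice `area` and `perimeter` would come from the segmented structure outline rather than analytic formulas.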
Affiliation(s)
- Paolo Caldarelli
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Luca Deininger
- Group for Automated Image and Data Analysis, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Shi Zhao
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Pallavi Panda
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Changhuei Yang
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Ralf Mikut
- Group for Automated Image and Data Analysis, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany.
- Magdalena Zernicka-Goetz
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- Mammalian Embryo and Stem Cell Group, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK.
13.
Keyl J, Keyl P, Montavon G, Hosch R, Brehmer A, Mochmann L, Jurmeister P, Dernbach G, Kim M, Koitka S, Bauer S, Bechrakis N, Forsting M, Führer-Sakel D, Glas M, Grünwald V, Hadaschik B, Haubold J, Herrmann K, Kasper S, Kimmig R, Lang S, Rassaf T, Roesch A, Schadendorf D, Siveke JT, Stuschke M, Sure U, Totzeck M, Welt A, Wiesweg M, Baba HA, Nensa F, Egger J, Müller KR, Schuler M, Klauschen F, Kleesiek J. Decoding pan-cancer treatment outcomes using multimodal real-world data and explainable artificial intelligence. NATURE CANCER 2025; 6:307-322. [PMID: 39885364 PMCID: PMC11864985 DOI: 10.1038/s43018-024-00891-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2024] [Accepted: 12/06/2024] [Indexed: 02/01/2025]
Abstract
Despite advances in precision oncology, clinical decision-making still relies on limited variables and expert knowledge. To address this limitation, we combined multimodal real-world data and explainable artificial intelligence (xAI) to introduce AI-derived (AID) markers for clinical decision support. We used xAI to decode the outcome of 15,726 patients across 38 solid cancer entities based on 350 markers, including clinical records, image-derived body compositions, and mutational tumor profiles. xAI determined the prognostic contribution of each clinical marker at the patient level and identified 114 key markers that accounted for 90% of the neural network's decision process. Moreover, xAI enabled us to uncover 1,373 prognostic interactions between markers. Our approach was validated in an independent cohort of 3,288 patients with lung cancer from a US nationwide electronic health record-derived database. These results show the potential of xAI to transform the assessment of clinical variables and enable personalized, data-driven cancer care.
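A selection like the "114 key markers accounting for 90% of the decision process" can be reproduced in spirit by ranking markers by aggregate attribution and keeping the smallest prefix that reaches the coverage threshold. A minimal numpy sketch — the aggregation by absolute sum and the function name are assumptions, not the authors' exact procedure:

```python
import numpy as np

def key_markers(attributions, coverage=0.90):
    """Smallest set of markers whose summed |attribution| reaches
    `coverage` of the total, ranked by global importance.

    attributions: array of shape (n_patients, n_markers) holding
    per-patient xAI relevance scores."""
    total = np.abs(attributions).sum(axis=0)        # global importance per marker
    order = np.argsort(total)[::-1]                 # most important first
    cum = np.cumsum(total[order]) / total.sum()
    k = int(np.searchsorted(cum, coverage)) + 1     # first prefix reaching coverage
    return order[:k]

# one dominant marker carries almost all the relevance
attr = np.array([[10.0, 0.1, 0.1],
                 [ 9.0, 0.2, 0.1]])
print(key_markers(attr))
```

With 350 markers, the same call would return the index set analogous to the paper's 114 key markers.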
Affiliation(s)
- Julius Keyl
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- Institute of Pathology, University Hospital Essen (AöR), Essen, Germany
- Philipp Keyl
- Institute of Pathology, Ludwig-Maximilians-University Munich, Munich, Germany
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Grégoire Montavon
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Machine Learning Group, Technical University of Berlin, Berlin, Germany
- Department of Mathematics and Computer Science, Freie Universität Berlin, Berlin, Germany
- René Hosch
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- Alexander Brehmer
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- Liliana Mochmann
- Institute of Pathology, Ludwig-Maximilians-University Munich, Munich, Germany
- Philipp Jurmeister
- Institute of Pathology, Ludwig-Maximilians-University Munich, Munich, Germany
- Gabriel Dernbach
- Machine Learning Group, Technical University of Berlin, Berlin, Germany
- Moon Kim
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- Sven Koitka
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Sebastian Bauer
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Nikolaos Bechrakis
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Ophthalmology, University Hospital Essen (AöR), Essen, Germany
- Michael Forsting
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Dagmar Führer-Sakel
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- Department of Endocrinology, Diabetes and Metabolism, University Hospital Essen (AöR), Essen, Germany
- Martin Glas
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Division of Clinical Neurooncology, Department of Neurology and Center for Translational Neuro- and Behavioral Sciences (C-TNBS), University Medicine Essen, University Duisburg-Essen, Essen, Germany
- Viktor Grünwald
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Urology, University Hospital Essen (AöR), Essen, Germany
- Boris Hadaschik
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Urology, University Hospital Essen (AöR), Essen, Germany
- Johannes Haubold
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- Ken Herrmann
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Nuclear Medicine, University Hospital Essen (AöR), Essen, Germany
- Stefan Kasper
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Rainer Kimmig
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- Department of Gynecology and Obstetrics, University Hospital Essen (AöR), Essen, Germany
- Stephan Lang
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- Department of Otorhinolaryngology, University Hospital Essen (AöR), Essen, Germany
- Tienush Rassaf
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- Department of Cardiology and Vascular Medicine, West German Heart and Vascular Center Essen, University Hospital Essen (AöR), Essen, Germany
- Alexander Roesch
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Dermatology, University Hospital Essen (AöR), Essen, Germany
- Dirk Schadendorf
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Dermatology, University Hospital Essen (AöR), Essen, Germany
- Research Alliance Ruhr, Research Center One Health, University of Duisburg-Essen, Essen, Germany
- Jens T Siveke
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Bridge Institute of Experimental Tumor Therapy, West German Cancer Center, University Hospital Essen (AöR), University of Duisburg-Essen, Essen, Germany
- Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK Partner Site Essen) and German Cancer Research Center, DKFZ, Heidelberg, Germany
- Martin Stuschke
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Radiotherapy, University Hospital Essen (AöR), Essen, Germany
- Ulrich Sure
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Department of Neurosurgery and Spine Surgery, University Hospital Essen (AöR), Essen, Germany
- Matthias Totzeck
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- Department of Cardiology and Vascular Medicine, West German Heart and Vascular Center Essen, University Hospital Essen (AöR), Essen, Germany
- Anja Welt
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- Marcel Wiesweg
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Hideo A Baba
- Institute of Pathology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- Felix Nensa
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University of Duisburg-Essen, Essen, Germany
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- Klaus-Robert Müller
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany.
- Machine Learning Group, Technical University of Berlin, Berlin, Germany.
- Department of Artificial Intelligence, Korea University, Seoul, South Korea.
- MPI for Informatics, Saarbrücken, Germany.
- Martin Schuler
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany.
- Medical Faculty, University of Duisburg-Essen, Essen, Germany.
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany.
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany.
- Frederick Klauschen
- Institute of Pathology, Ludwig-Maximilians-University Munich, Munich, Germany.
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany.
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Berlin partner site, Berlin, Germany.
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Munich partner site, Munich, Germany.
- Bavarian Cancer Research Center (BZKF), Erlangen, Germany.
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany.
- Medical Faculty, University of Duisburg-Essen, Essen, Germany.
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany.
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany.
14.
Small SL. Precision neurology. Ageing Res Rev 2025; 104:102632. [PMID: 39657848 DOI: 10.1016/j.arr.2024.102632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2024] [Revised: 11/23/2024] [Accepted: 12/05/2024] [Indexed: 12/12/2024]
Abstract
Over the past several decades, high-resolution brain imaging, blood and cerebrospinal fluid analyses, and other advanced technologies have changed diagnosis from an exercise depending primarily on the history and physical examination to a computer- and online resource-aided process that relies on ever larger quantities of data. In addition, randomized controlled trials (RCTs) at a population level have led to many new drugs and devices to treat neurological disease, including disease-modifying therapies. We are now at a crossroads. Combinatorially profound increases in data about individuals have led to an alternative to population-based RCTs. Genotyping and comprehensive "deep" phenotyping can sort individuals into smaller groups, enabling precise medical decisions at a personal level. In neurology, precision medicine that includes prediction, prevention and personalization requires that genomic and phenomic information further incorporate imaging and behavioral data. In this article, we review the genomic, phenomic, and computational aspects of precision medicine for neurology. After defining biological markers, we discuss some applications of these "-omic" and neuroimaging measures, and then outline the role of computation and ultimately brain simulation. We conclude the article with a discussion of the relation between precision medicine and value-based care.
Affiliation(s)
- Steven L Small
- Department of Neuroscience, University of Texas at Dallas, Dallas, TX, USA; Department of Neurology, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Neurology, The University of Chicago, Chicago, IL, USA; Department of Neurology, University of California, Irvine, Orange, CA, USA.
15.
Peng M, Fan X, Hu Q, Mei X, Wang B, Wu Z, Hu H, Tang L, Hu X, Yang Y, Qin C, Zhang H, Liu Q, Chen X, Yu F. Deep-learning based electromagnetic navigation system for transthoracic percutaneous puncture of small pulmonary nodules. Sci Rep 2025; 15:2547. [PMID: 39833245 PMCID: PMC11747146 DOI: 10.1038/s41598-025-85209-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2024] [Accepted: 01/01/2025] [Indexed: 01/22/2025] Open
Abstract
Percutaneous transthoracic puncture of small pulmonary nodules is technically challenging. We developed a novel deep-learning based electromagnetic navigation system (DL-EMNS) for the puncture of sub-centimeter lung nodules by combining multiple deep learning models with electromagnetic and spatial localization technologies. We compared the performance of DL-EMNS and conventional CT-guided methods in percutaneous lung punctures using phantom and animal models. In the phantom study, the DL-EMNS group showed a higher technical success rate (95.6% vs. 77.8%, p = 0.027), smaller error (1.47 ± 1.62 mm vs. 3.98 ± 2.58 mm, p < 0.001), shorter procedure duration (291.56 ± 150.30 vs. 676.44 ± 246.12 s, p < 0.001), and fewer CT acquisitions (1.2 ± 0.66 vs. 2.93 ± 0.98, p < 0.001) than the conventional CT-guided group. In the animal study, DL-EMNS significantly improved the technical success rate (100% vs. 84.0%, p = 0.015) and reduced the operation time (121.36 ± 38.87 s vs. 321.60 ± 129.12 s, p < 0.001), the number of CT acquisitions (1.09 ± 0.29 vs. 2.96 ± 0.73, p < 0.001), and the complication rate (0% vs. 20%, p = 0.002). In conclusion, with the assistance of DL-EMNS, operators achieved better performance in the percutaneous puncture of small pulmonary nodules.
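Group comparisons like the success rates above (95.6% vs. 77.8%, p = 0.027) are typically tested with a two-sided Fisher exact test on a 2x2 table. A stdlib-only sketch; the 45-punctures-per-arm counts below are an assumption consistent with the reported percentages, not figures taken from the paper:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables (at fixed margins) that are no
    more probable than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def pmf(x):  # hypergeometric probability of x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = pmf(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p for p in (pmf(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# assumed counts: 43/45 successes (DL-EMNS) vs. 35/45 (CT-guided)
print(fisher_exact_two_sided(43, 2, 35, 10))
```

The small tolerance on `p_obs` guards against floating-point ties when comparing table probabilities.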
Affiliation(s)
- Muyun Peng
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China
- Xinyi Fan
- Infervision Medical Technology Co., Ltd., Beijing, China
- Qikang Hu
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China
- Xilong Mei
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
- Bin Wang
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China
- Zeyu Wu
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China
- Huali Hu
- Department of Thoracic Surgery, Hunan Rehabilitation Hospital, Changsha, China
- Lei Tang
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China
- Xinhang Hu
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China
- Yanyi Yang
- Health Management Center, The Second Xiangya Hospital, Central South University, Changsha, China
- Chunxia Qin
- Infervision Medical Technology Co., Ltd., Beijing, China
- Huajie Zhang
- Infervision Medical Technology Co., Ltd., Beijing, China
- Qun Liu
- Infervision Medical Technology Co., Ltd., Beijing, China
- Xiaofeng Chen
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China.
- Department of Anesthesiology, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China.
- Fenglei Yu
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, 139 Renmin Road, Changsha, 410011, Hunan, China.
- Hunan Key Laboratory of Early Diagnosis and Precise Treatment of Lung Cancer, The Second Xiangya Hospital of Central South University, Changsha, China.
16.
Albuquerque C, Henriques R, Castelli M. Deep learning-based object detection algorithms in medical imaging: Systematic review. Heliyon 2025; 11:e41137. [PMID: 39758372 PMCID: PMC11699422 DOI: 10.1016/j.heliyon.2024.e41137] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2024] [Revised: 12/04/2024] [Accepted: 12/10/2024] [Indexed: 01/06/2025] Open
Abstract
Over the past decade, Deep Learning (DL) techniques have demonstrated remarkable advancements across various domains, driving their widespread adoption. In medical image analysis in particular, DL has received growing attention for tasks such as image segmentation, object detection, and classification. This paper provides an overview of DL-based object detection in medical images, surveying recent methods and emphasizing different imaging techniques and anatomical applications. Through a quantitative and qualitative analysis following PRISMA guidelines, we examined publications selected by citation rate to explore the use of DL-based object detectors across imaging modalities and anatomical domains. Our findings reveal a consistent rise in the adoption of DL-based object detection models, indicating untapped potential in medical image analysis. Research in this area, concentrated in the Medicine and Computer Science domains, is most active in the US, China, and Japan. Notably, DL-based object detection methods have attracted significant interest across diverse medical imaging modalities and anatomical domains, and have been applied to a range of techniques including CR scans, pathology images, and endoscopic imaging, showcasing their adaptability. Moreover, diverse anatomical applications, particularly in digital pathology and microscopy, have been explored. The analysis underscores the presence of varied datasets, often with significant discrepancies in size, a notable percentage of which are labeled as private or internal, while prospective studies in this field remain scarce. Our review of existing trends in DL-based object detection in medical images offers insights for future research directions. The continuous evolution of DL algorithms highlighted in the literature underscores the dynamic nature of this field, emphasizing the need for ongoing research and optimization tailored to specific applications.
17.
Si F, Liu Q, Yu J. A prediction study on the occurrence risk of heart disease in older hypertensive patients based on machine learning. BMC Geriatr 2025; 25:27. [PMID: 39799333 PMCID: PMC11724603 DOI: 10.1186/s12877-025-05679-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Accepted: 01/02/2025] [Indexed: 01/15/2025] Open
Abstract
OBJECTIVE To construct a predictive model for the occurrence of heart disease in elderly hypertensive individuals and to enable early risk identification. METHODS A total of 934 participants aged 60 and above from the China Health and Retirement Longitudinal Study (CHARLS) with a 7-year follow-up (2011-2018) were included. Machine learning methods (logistic regression, XGBoost, and a deep neural network, DNN) were employed to build a model predicting heart disease risk in hypertensive patients. Model performance was comprehensively assessed using discrimination, calibration, and clinical decision curves. RESULTS Over the 7-year follow-up, 243 of the 934 older hypertensive patients (26.03%) developed heart disease. Older hypertensive patients with baseline comorbid dyslipidemia, chronic pulmonary diseases, or arthritis or rheumatic diseases faced a higher risk of future heart disease. Feature selection significantly improved predictive performance compared with the original variable set. The ROC-AUCs for logistic regression, XGBoost, and the DNN were 0.60 (95% CI: 0.53-0.68), 0.64 (95% CI: 0.57-0.71), and 0.67 (95% CI: 0.60-0.73), respectively, with logistic regression achieving the best calibration. XGBoost demonstrated the most noticeable clinical benefit as the decision threshold increased. CONCLUSION Machine learning can effectively identify the risk of heart disease in older hypertensive patients using data from the CHARLS cohort. The results suggest that older hypertensive patients with comorbid dyslipidemia, chronic pulmonary diseases, or arthritis or rheumatic diseases have a higher risk of developing heart disease; this information could facilitate early risk identification for future heart disease in older hypertensive patients.
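Confidence intervals like the ones quoted for the ROC-AUCs are commonly obtained with a nonparametric bootstrap over patients. A generic numpy sketch using the rank (Mann-Whitney) formulation of the AUC — illustrative only, not the authors' exact procedure:

```python
import numpy as np

def auc(y, score):
    """ROC-AUC via the Mann-Whitney statistic: the probability that a
    random positive is scored above a random negative (ties count half)."""
    pos, neg = score[y == 1], score[y == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

def bootstrap_auc_ci(y, score, n_boot=2000, seed=0):
    """Percentile 95% CI for the AUC, resampling patients with replacement."""
    rng = np.random.default_rng(seed)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() != y[idx].max():   # resample must contain both classes
            stats.append(auc(y[idx], score[idx]))
    return np.percentile(stats, [2.5, 97.5])

# toy cohort: 50 controls, 50 cases, partially overlapping risk scores
y = np.array([0] * 50 + [1] * 50)
score = np.concatenate([np.linspace(0, 1, 50) ** 2, np.linspace(0.2, 1.2, 50)])
lo, hi = bootstrap_auc_ci(y, score)
```

The percentile bootstrap is the simplest variant; published CIs may instead come from DeLong-type analytic formulas.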
Affiliation(s)
- Fei Si
- Department of Cardiology, The Second Hospital & Clinical Medical School, Lanzhou University, No. 82 Cuiyingmen, Lanzhou, 730000, China
- Qian Liu
- Department of Cardiology, The Second Hospital & Clinical Medical School, Lanzhou University, No. 82 Cuiyingmen, Lanzhou, 730000, China
- Jing Yu
- Department of Cardiology, The Second Hospital & Clinical Medical School, Lanzhou University, No. 82 Cuiyingmen, Lanzhou, 730000, China.
18
Lee JW, Woo D, Kim KO, Kim ES, Kim SK, Lee HS, Kang B, Lee YJ, Kim J, Jang BI, Kim EY, Jo HH, Chung YJ, Ryu H, Park SK, Park DI, Yu H, Jeong S. Deep Learning Model Using Stool Pictures for Predicting Endoscopic Mucosal Inflammation in Patients With Ulcerative Colitis. Am J Gastroenterol 2025; 120:213-224. [PMID: 39051648 PMCID: PMC11676591 DOI: 10.14309/ajg.0000000000002978] [Received: 03/03/2024] [Accepted: 07/16/2024] [Indexed: 07/27/2024]
Abstract
INTRODUCTION Stool characteristics may change depending on the endoscopic activity of ulcerative colitis (UC). We developed a deep learning model using stool photographs of patients with UC (DLSUC) to predict endoscopic mucosal inflammation. METHODS This was a prospective multicenter study conducted in 6 tertiary referral hospitals. Patients scheduled to undergo endoscopy for mucosal inflammation monitoring were asked to take photographs of their stool using smartphones within 1 week before the day of endoscopy. DLSUC was developed using 2,161 stool pictures from 306 patients and tested on 1,047 stool images from 126 patients. The UC endoscopic index of severity was used to define endoscopic activity. The performance of DLSUC in endoscopic activity prediction was compared with that of fecal calprotectin (Fcal). RESULTS The area under the receiver operating characteristic curve (AUC) of DLSUC for predicting endoscopic activity was 0.801 (95% confidence interval [CI] 0.717-0.873), which was not statistically different from the AUC of Fcal (0.837 [95% CI, 0.767-0.899, DeLong P = 0.458]). When rectal-sparing cases (23/126, 18.2%) were excluded, the AUC of DLSUC increased to 0.849 (95% CI, 0.760-0.919). The accuracy, sensitivity, and specificity of DLSUC in predicting endoscopic activity were 0.746, 0.662, and 0.877 in all patients and 0.845, 0.745, and 0.958 in patients without rectal sparing, respectively. Active patients classified by DLSUC were more likely to experience disease relapse during a median 8-month follow-up (log-rank test, P = 0.002). DISCUSSION DLSUC demonstrated a good discriminating power similar to that of Fcal in predicting endoscopic activity with improved accuracy in patients without rectal sparing. This study implies that stool photographs are a useful monitoring tool for typical UC.
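The study compares DLSUC's AUC against fecal calprotectin with a DeLong test. As a hedged stand-in for that comparison, the sketch below bootstraps a confidence interval for the difference between two AUCs; the rank-based AUC formula is standard, but the simulated score distributions are illustrative assumptions:

```python
# Sketch: bootstrap CI for the difference between two models' AUCs.
# Scores are simulated; a DeLong implementation would be used in practice.
import numpy as np

def auc(y, s):
    # Mann-Whitney (rank-based) formulation of ROC-AUC.
    order = np.argsort(s)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)          # simulated endoscopic activity labels
s1 = y + rng.normal(0, 0.8, 200)     # model A scores (less noisy)
s2 = y + rng.normal(0, 1.2, 200)     # model B scores (noisier)

diffs = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    if 0 < y[idx].sum() < len(idx):  # resample must contain both classes
        diffs.append(auc(y[idx], s1[idx]) - auc(y[idx], s2[idx]))
ci = (np.percentile(diffs, 2.5), np.percentile(diffs, 97.5))
```

If the interval `ci` covers zero, the two AUCs are not distinguishable at the 5% level, which mirrors the paper's finding that DLSUC and Fcal performed similarly.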
Affiliation(s)
- Jung Won Lee
- Division of Gastroenterology, Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Korea;
- Dongwon Woo
- Research Center for Artificial Intelligence in Medicine, Kyungpook National University Hospital, Daegu, Korea;
- Kyeong Ok Kim
- Division of Gastroenterology, Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, Korea;
- Eun Soo Kim
- Division of Gastroenterology, Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Korea;
- Sung Kook Kim
- Division of Gastroenterology, Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Korea;
- Hyun Seok Lee
- Division of Gastroenterology, Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Korea;
- Ben Kang
- Department of Pediatrics, School of Medicine, Kyungpook National University, Daegu, Korea;
- Yoo Jin Lee
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keimyung University School of Medicine, Daegu, Korea;
- Jeongseok Kim
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keimyung University School of Medicine, Daegu, Korea;
- Zane Cohen Centre for Digestive Diseases, Mount Sinai Hospital, Toronto, Ontario, Canada;
- Byung Ik Jang
- Division of Gastroenterology, Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, Korea;
- Eun Young Kim
- Department of Internal Medicine, Daegu Catholic University School of Medicine, Daegu, Korea;
- Hyeong Ho Jo
- Department of Internal Medicine, Daegu Catholic University School of Medicine, Daegu, Korea;
- Yun Jin Chung
- Division of Gastroenterology, Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Korea;
- Hanjun Ryu
- Department of Internal Medicine, Daegu Fatima Hospital, Daegu, Korea
- Soo-Kyung Park
- Division of Gastroenterology, Department of Internal Medicine and Inflammatory Bowel Disease Center, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul
- Dong-Il Park
- Division of Gastroenterology, Department of Internal Medicine and Inflammatory Bowel Disease Center, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul
- Hosang Yu
- Research Center for Artificial Intelligence in Medicine, Kyungpook National University Hospital, Daegu, Korea;
- Sungmoon Jeong
- Research Center for Artificial Intelligence in Medicine, Kyungpook National University Hospital, Daegu, Korea;
- Department of Medical Informatics, School of Medicine, Kyungpook National University, Daegu, Korea;
- AICU Corp., Daegu, South Korea.
19
Ozturk L, Laclau C, Boulon C, Mangin M, Braz-Ma E, Constans J, Dari L, Le Hello C. Analysis of nailfold capillaroscopy images with artificial intelligence: Data from literature and performance of machine learning and deep learning from images acquired in the SCLEROCAP study. Microvasc Res 2025; 157:104753. [PMID: 39389419 DOI: 10.1016/j.mvr.2024.104753] [Received: 06/02/2024] [Revised: 09/04/2024] [Accepted: 10/06/2024] [Indexed: 10/12/2024]
Abstract
OBJECTIVE To evaluate the performance of machine learning and then deep learning to detect a systemic scleroderma (SSc) landscape from the same set of nailfold capillaroscopy (NC) images from the French prospective multicenter observational study SCLEROCAP. METHODS NC images from the first 100 SCLEROCAP patients were analyzed to assess the performance of machine learning and then deep learning in identifying the SSc landscape, the NC images having previously been independently and consensually labeled by expert clinicians. Images were divided into a training set (70 %) and a validation set (30 %). After features extraction from the NC images, we tested six classifiers (random forests (RF), support vector machine (SVM), logistic regression (LR), light gradient boosting (LGB), extreme gradient boosting (XGB), K-nearest neighbors (KNN)) on the training set with five different combinations of the images. The performance of each classifier was evaluated by the F1 score. In the deep learning section, we tested three pre-trained models from the TIMM library (ResNet-18, DenseNet-121 and VGG-16) on raw NC images after applying image augmentation methods. RESULTS With machine learning, performance ranged from 0.60 to 0.73 for each variable, with Hu and Haralick moments being the most discriminating. Performance was highest with the RF, LGB and XGB models (F1 scores: 0.75-0.79). The highest score was obtained by combining all variables and using the LGB model (F1 score: 0.79 ± 0.05, p < 0.01). With deep learning, performance reached a minimum accuracy of 0.87. The best results were obtained with the DenseNet-121 model (accuracy 0.94 ± 0.02, F1 score 0.94 ± 0.02, AUC 0.95 ± 0.03) as compared to ResNet-18 (accuracy 0.87 ± 0.04, F1 score 0.85 ± 0.03, AUC 0.87 ± 0.04) and VGG-16 (accuracy 0.90 ± 0.03, F1 score 0.91 ± 0.02, AUC 0.91 ± 0.04). 
CONCLUSION By using machine learning and then deep learning on the same set of labeled NC images from the SCLEROCAP study, the highest performances to detect SSc landscape were obtained with deep learning and in particular DenseNet-121. This pre-trained model could therefore be used to automatically interpret NC images in case of suspected SSc. This result nevertheless needs to be confirmed on a larger number of NC images.
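Hu moments, among the most discriminating hand-crafted features in the machine-learning arm above, are computed directly from image pixels. A hedged NumPy sketch of the first two Hu invariants on a toy blob follows (scikit-image's `moments_hu` and OpenCV's `HuMoments` provide production implementations; the toy image is an assumption):

```python
# Sketch: first two Hu invariant moments from raw pixel intensities.
import numpy as np

def hu_first_two(img):
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00
    def mu(p, q):                       # central moment (translation-free)
        return ((xs - xc) ** p * (ys - yc) ** q * img).sum()
    def eta(p, q):                      # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

img = np.zeros((64, 64))
img[20:40, 10:25] = 1.0                 # toy bright blob
shifted = np.roll(img, (5, 8), axis=(0, 1))
```

Because the invariants depend only on central, scale-normalized moments, a shifted copy of the blob yields the same values, which is what makes such features robust descriptors for capillaroscopy images.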
Affiliation(s)
- Lutfi Ozturk
- CHU de Saint-Etienne, Médecine Vasculaire et Thérapeutique, Saint-Etienne, France.
- Charlotte Laclau
- Université Jean Monnet, Laboratoire Hubert Curien, Saint-Etienne, France
- Etheve Braz-Ma
- Université Jean Monnet, Laboratoire Hubert Curien, Saint-Etienne, France
- Loubna Dari
- CHU St-André, Médecine Vasculaire, Bordeaux, France
- Claire Le Hello
- CHU de Saint-Etienne, Médecine Vasculaire et Thérapeutique, Saint-Etienne, France; Université Jean Monnet, CHU Saint-Etienne, Médecine Vasculaire et Thérapeutique, Mines Saint-Etienne, INSERM, SAINBIOSE U1059, Saint-Etienne, France
20
Tüdös Z, Veverková L, Baxa J, Hartmann I, Čtvrtlík F. The current and upcoming era of radiomics in phaeochromocytoma and paraganglioma. Best Pract Res Clin Endocrinol Metab 2025; 39:101923. [PMID: 39227277 DOI: 10.1016/j.beem.2024.101923] [Indexed: 09/05/2024]
Abstract
The topic of the diagnosis of phaeochromocytomas remains highly relevant because of advances in laboratory diagnostics, genetics, and therapeutic options and also the development of imaging methods. Computed tomography still represents an essential tool in clinical practice, especially in incidentally discovered adrenal masses; it allows morphological evaluation, including size, shape, necrosis, and unenhanced attenuation. More advanced post-processing tools to analyse digital images, such as texture analysis and radiomics, are currently being studied. Radiomic features utilise digital image pixels to calculate parameters and relations undetectable by the human eye. On the other hand, the amount of radiomic data requires massive computer capacity. Radiomics, together with machine learning and artificial intelligence in general, has the potential to improve not only the differential diagnosis but also the prediction of complications and therapy outcomes of phaeochromocytomas in the future. Currently, the potential of radiomics and machine learning does not match expectations and awaits its fulfilment.
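The radiomic texture features discussed above can be illustrated with a gray-level co-occurrence matrix (GLCM). The sketch below computes the Haralick contrast of a horizontal-neighbor GLCM in plain NumPy; the quantization level and the toy test images are illustrative assumptions (pyradiomics is the standard tool for full radiomics pipelines):

```python
# Sketch: GLCM-based Haralick contrast, a basic radiomic texture feature.
import numpy as np

def glcm_contrast(img, levels=8):
    # Quantize intensities in [0, 1] to `levels` gray levels, then count
    # horizontal neighbor pairs into a normalized co-occurrence matrix.
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()   # Haralick contrast

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))   # gentle gradient
noisy = rng.random((32, 32))                        # pixel-level noise
```

A smooth gradient yields a low contrast value while random noise yields a high one, which is exactly the kind of pixel-level relationship a radiologist's eye cannot quantify directly.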
Affiliation(s)
- Zbyněk Tüdös
- Department of Radiology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic
- Lucia Veverková
- Department of Radiology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic
- Jan Baxa
- Department of Imaging Methods, Faculty Hospital Pilsen and Faculty of Medicine in Pilsen, Charles University, Czech Republic
- Igor Hartmann
- Department of Urology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic
- Filip Čtvrtlík
- Department of Radiology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic.
21
Khan K, Katarya R. MCBERT: A multi-modal framework for the diagnosis of autism spectrum disorder. Biol Psychol 2025; 194:108976. [PMID: 39722324 DOI: 10.1016/j.biopsycho.2024.108976] [Received: 06/12/2024] [Revised: 11/28/2024] [Accepted: 12/16/2024] [Indexed: 12/28/2024]
Abstract
Within the domain of neurodevelopmental disorders, autism spectrum disorder (ASD) emerges as a distinctive neurological condition characterized by multifaceted challenges. The delayed identification of ASD poses a considerable hurdle in effectively managing its impact and mitigating its severity. Addressing these complexities requires a nuanced understanding of data modalities and the underlying patterns. Existing studies have focused on a single data modality for ASD diagnosis. Recently, there has been a significant shift towards multimodal architectures with deep learning strategies due to their ability to handle and incorporate complex data modalities. In this paper, we developed a novel multimodal ASD diagnosis architecture, referred to as Multi-Head CNN with BERT (MCBERT), which integrates bidirectional encoder representations from transformers (BERT) for meta-features and a multi-head convolutional neural network (MCNN) for the brain image modality. The MCNN incorporates two attention mechanisms to capture spatial (SAC) and channel (CAC) features. The outputs of BERT and MCNN are then fused and processed through a classification module to generate the final diagnosis. We employed the ABIDE-I dataset, a multimodal dataset, and conducted a leave-one-site-out classification to assess the model's effectiveness comprehensively. Experimental simulations demonstrate that the proposed architecture achieves a high accuracy of 93.4 %. Furthermore, the exploration of functional MRI data may provide a deeper understanding of the underlying characteristics of ASD.
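The channel-attention (CAC) mechanism described above can be sketched in squeeze-and-excitation style. This is a hedged illustration: random weights stand in for learned parameters, and the reduction ratio of 4 is an assumption, not a detail from the paper:

```python
# Sketch: squeeze-and-excitation-style channel attention over a (C, H, W)
# feature map, as one plausible form of the CAC mechanism.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # Squeeze: global average pool each channel to a single statistic.
    z = feat.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP (ReLU) followed by a sigmoid gate.
    gate = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # shape (C,)
    # Rescale each channel by its gate value in (0, 1).
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
C, H, W = 16, 8, 8
feat = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // 4, C))   # reduction ratio 4 (assumed)
w2 = rng.normal(size=(C, C // 4))
out = channel_attention(feat, w1, w2)
```

Each channel is attenuated by a learned gate in (0, 1), so informative channels can be emphasized relative to noisy ones before fusion with the BERT branch.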
Affiliation(s)
- Kainat Khan
- Big Data Analytics and Web Intelligence Laboratory, Department of Computer Science & Engineering, Delhi Technological University, New Delhi, India.
- Rahul Katarya
- Big Data Analytics and Web Intelligence Laboratory, Department of Computer Science & Engineering, Delhi Technological University, New Delhi, India.
22
Zhao J, Liu J, Wang S, Zhang P, Yu W, Yang C, Zhang Y, Chen Y. PIAA: Pre-imaging all-round assistant for digital radiography. Technol Health Care 2025; 33:127-142. [PMID: 39240596 DOI: 10.3233/thc-240639] [Indexed: 09/07/2024]
Abstract
BACKGROUND In radiography procedures, radiographers' suboptimal positioning and exposure parameter settings may necessitate image retakes, subjecting patients to unnecessary ionizing radiation exposure. Reducing retakes is crucial to minimize patient X-ray exposure and conserve medical resources. OBJECTIVE We propose a Digital Radiography (DR) Pre-imaging All-round Assistant (PIAA) that leverages Artificial Intelligence (AI) technology to enhance traditional DR. METHODS PIAA consists of an RGB-Depth (RGB-D) multi-camera array, an embedded computing platform, and multiple software components. First, it features an Adaptive RGB-D Image Acquisition (ARDIA) module that automatically selects the appropriate RGB camera based on the distance between the cameras and patients. Second, it includes a 2.5D Selective Skeletal Keypoints Estimation (2.5D-SSKE) module that fuses depth information with 2D keypoints to estimate the pose of target body parts. Third, it uses a Domain Expertise (DE) embedded Full-body Exposure Parameter Estimation (DFEPE) module that combines 2.5D-SSKE and DE to accurately estimate parameters for full-body DR views. RESULTS PIAA optimizes the DR workflow, significantly enhancing operational efficiency: the average time required for positioning patients and preparing exposure parameters was reduced from 73 seconds to 8 seconds. CONCLUSIONS PIAA shows significant promise for extension to full-body examinations.
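Fusing a 2D keypoint with its depth reading, as the 2.5D-SSKE module does, amounts to pinhole back-projection. A hedged sketch follows; the focal lengths and principal point are typical RGB-D calibration values assumed for illustration, not PIAA's actual calibration:

```python
# Sketch: back-project a 2D keypoint plus its depth into camera-frame 3D.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    # Pinhole model: pixel (u, v) at `depth` meters -> (X, Y, Z) in the
    # camera frame, using focal lengths (fx, fy) and principal point (cx, cy).
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Assumed 640x480 RGB-D camera with a 525 px focal length (typical values).
fx = fy = 525.0
cx, cy = 319.5, 239.5
# A detected shoulder keypoint at pixel (400, 240.5), 1.5 m from the camera.
shoulder_3d = backproject(400.0, 240.5, 1.5, fx, fy, cx, cy)
```

Repeating this for each detected skeletal keypoint yields the 3D pose from which patient-specific exposure parameters can then be estimated.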
Affiliation(s)
- Jie Zhao
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Careray Digital Medical Technology Co., Ltd., Suzhou, China
- Jianqiang Liu
- Careray Digital Medical Technology Co., Ltd., Suzhou, China
- Shijie Wang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Pinzheng Zhang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Wenxue Yu
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Chunfeng Yang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yudong Zhang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yang Chen
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
23
Shams Alden ZNAM, Ata O. A comprehensive analysis and performance evaluation for osteoporosis prediction models. PeerJ Comput Sci 2024; 10:e2338. [PMID: 39896405 PMCID: PMC11784534 DOI: 10.7717/peerj-cs.2338] [Received: 04/12/2024] [Accepted: 08/28/2024] [Indexed: 02/04/2025]
Abstract
Medical data analysis is an expanding area of study that holds the promise of transforming the healthcare landscape. The use of available data gives researchers guidelines to improve health practitioners' decision-making capacity, thus enhancing patients' lives. The study examines deep learning techniques for predicting the onset of osteoporosis from the NHANES 2017-2020 dataset, which was preprocessed and arranged into SpineOsteo and FemurOsteo datasets. Two feature selection methods, namely mutual information (MI) and recursive feature elimination (RFE), were applied to sequential deep neural network models, convolutional neural network models, and recurrent neural network models. Across models, the mutual information method achieved higher accuracy than recursive feature elimination, and the MI feature selection CNN model performed best, reaching 99.15% accuracy on the SpineOsteo dataset and 99.94% classification accuracy on the FemurOsteo dataset. Key predictive features identified in this study include family medical history, prior fractures in patients, parental hip fractures, and regular use of medications such as prednisone or cortisone. The research underscores the potential of deep learning in medical data processing, which opens the way for enhanced diagnostic and prognostic models based on non-image medical data. These findings can help healthcare providers make better-informed decisions about patient outcomes.
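Mutual-information feature ranking, one of the two selection methods used in the study, can be illustrated in plain NumPy for binary features against a binary label. This is a hedged sketch on synthetic data, not NHANES; `sklearn.feature_selection.mutual_info_classif` covers the general continuous case:

```python
# Sketch: mutual information I(X;Y) for binary feature/label pairs,
# used to rank an informative feature above pure noise.
import numpy as np

def mutual_info_binary(x, y):
    # I(X;Y) = sum over outcomes of p(x,y) * log(p(x,y) / (p(x) p(y))).
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                               # synthetic label
informative = (y ^ (rng.random(1000) < 0.1)).astype(int)   # ~90% agreement
noise = rng.integers(0, 2, 1000)                           # unrelated flips
scores = {"informative": mutual_info_binary(informative, y),
          "noise": mutual_info_binary(noise, y)}
```

Ranking features by such scores and keeping the top-k is the essence of MI-based selection before the downstream CNN is trained.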
Affiliation(s)
- Zahraa Noor Aldeen M. Shams Alden
- Faculty of Tourism Science, University of Kerbala, Kerbala, Iraq
- Department of Electrical and Computer Engineering, Altinbas University, Istanbul, Turkey
- Oguz Ata
- Department of Software Engineering, Engineering and Architecture Faculty, Altinbas University, İstanbul, Turkey
24
Tran L, Kandel H, Sari D, Chiu CH, Watson SL. Artificial Intelligence and Ophthalmic Clinical Registries. Am J Ophthalmol 2024; 268:263-274. [PMID: 39111520 DOI: 10.1016/j.ajo.2024.07.039] [Received: 06/08/2024] [Revised: 07/30/2024] [Accepted: 07/31/2024] [Indexed: 09/03/2024]
Abstract
PURPOSE The recent advances in artificial intelligence (AI) represent a promising solution to increasing clinical demand and ever more limited health resources. Whilst powerful, AI models require vast amounts of representative training data to output meaningful predictions in the clinical environment. Clinical registries represent a promising source of large-volume real-world data which could be used to train more accurate and widely applicable AI models. This review aims to provide an overview of the current applications of AI to ophthalmic clinical registry data. DESIGN AND METHODS A systematic search of EMBASE, Medline, PubMed, Scopus and Web of Science for primary research articles that applied AI to ophthalmic clinical registry data was conducted in July 2024. RESULTS Twenty-three primary research articles applying AI to ophthalmic clinical registries (n = 14) were found. Registries were primarily defined by the condition captured, and the most common conditions where AI was applied were glaucoma (n = 3) and neovascular age-related macular degeneration (n = 3). Tabular clinical data was the most common form of input into AI algorithms, and outputs were primarily classifiers (n = 8, 40%) and risk quantifier models (n = 7, 35%). The AI algorithms applied were almost exclusively supervised conventional machine learning models (n = 39, 85%), such as decision tree classifiers and logistic regression, with only 7 applications of deep learning or natural language processing algorithms. Significant heterogeneity was found with regard to model validation methodology and measures of performance. CONCLUSIONS Limited applications of deep learning algorithms to clinical registry data have been reported. The lack of standardized validation methodology and the heterogeneity of performance reporting suggest that the application of AI to clinical registries is still in its infancy, constrained by the poor accessibility of registry data, and point to the need for standardized methodology and greater involvement of domain experts in the future development of clinically deployable AI.
Affiliation(s)
- Luke Tran
- From the Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, (L.T., H.K., D.S., C.H.C., S.L.W.) Sydney, New South Wales, Australia.
- Himal Kandel
- From the Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, (L.T., H.K., D.S., C.H.C., S.L.W.) Sydney, New South Wales, Australia
- Daliya Sari
- From the Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, (L.T., H.K., D.S., C.H.C., S.L.W.) Sydney, New South Wales, Australia
- Christopher Hy Chiu
- From the Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, (L.T., H.K., D.S., C.H.C., S.L.W.) Sydney, New South Wales, Australia
- Stephanie L Watson
- From the Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, (L.T., H.K., D.S., C.H.C., S.L.W.) Sydney, New South Wales, Australia
25
Najafi H, Savoji K, Mirzaeibonehkhater M, Moravvej SV, Alizadehsani R, Pedrammehr S. A Novel Method for 3D Lung Tumor Reconstruction Using Generative Models. Diagnostics (Basel) 2024; 14:2604. [PMID: 39594270 PMCID: PMC11592759 DOI: 10.3390/diagnostics14222604] [Received: 09/11/2024] [Revised: 11/02/2024] [Accepted: 11/12/2024] [Indexed: 11/28/2024] Open
Abstract
BACKGROUND Lung cancer remains a significant health concern, and the effectiveness of early detection significantly enhances patient survival rates. Identifying lung tumors with high precision is a challenge due to the complex nature of tumor structures and the surrounding lung tissues. METHODS To address these hurdles, this paper presents an innovative three-step approach that leverages Generative Adversarial Networks (GAN), Long Short-Term Memory (LSTM), and VGG16 algorithms for the accurate reconstruction of three-dimensional (3D) lung tumor images. The first challenge we address is the accurate segmentation of lung tissues from CT images, a task complicated by the overwhelming presence of non-lung pixels, which can lead to classifier imbalance. Our solution employs a GAN model trained with a reinforcement learning (RL)-based algorithm to mitigate this imbalance and enhance segmentation accuracy. The second challenge involves precisely detecting tumors within the segmented lung regions. We introduce a second GAN model with a novel loss function that significantly improves tumor detection accuracy. Following successful segmentation and tumor detection, the VGG16 algorithm is utilized for feature extraction, preparing the data for the final 3D reconstruction. These features are then processed through an LSTM network and converted into a format suitable for the reconstructive GAN. This GAN, equipped with dilated convolution layers in its discriminator, captures extensive contextual information, enabling the accurate reconstruction of the tumor's 3D structure. RESULTS The effectiveness of our method is demonstrated through rigorous evaluation against established techniques using the LIDC-IDRI dataset and standard performance metrics, showcasing its superior performance and potential for enhancing early lung cancer detection. CONCLUSIONS This study highlights the benefits of combining GANs, LSTM, and VGG16 into a unified framework. 
This approach significantly improves the accuracy of detecting and reconstructing lung tumors, promising to enhance diagnostic methods and patient results in lung cancer treatment.
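The dilated convolutions in the reconstructive GAN's discriminator widen the receptive field without adding parameters. A hedged 1D NumPy sketch comparing three stacked dense 3-tap filters against a stack with dilations 1, 2, and 4 (the 1D setting and the all-ones kernel are simplifications for illustration):

```python
# Sketch: receptive-field growth of dilated vs. dense 1D convolutions,
# measured as the support of the impulse response.
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    # 'Same'-padded 1D correlation with gaps of `dilation` between taps.
    pad = dilation * (len(kernel) - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i, k in enumerate(kernel):
        out += k * xp[i * dilation:i * dilation + len(x)]
    return out

impulse = np.zeros(41)
impulse[20] = 1.0
kernel = np.ones(3)

dense = impulse
dilated = impulse
for d_dense, d_dil in [(1, 1), (1, 2), (1, 4)]:
    dense = dilated_conv1d(dense, kernel, d_dense)      # dilation fixed at 1
    dilated = dilated_conv1d(dilated, kernel, d_dil)    # dilation 1, 2, 4

# Receptive field = number of nonzero outputs produced by a unit impulse.
rf_dense = int(np.count_nonzero(dense))
rf_dilated = int(np.count_nonzero(dilated))
```

Three dense layers see only 7 input positions, while the dilated stack sees 15 with the same parameter count, which is the "extensive contextual information" advantage the discriminator relies on.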
Affiliation(s)
- Hamidreza Najafi
- Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran;
- Kimia Savoji
- Biomedical Data Science and Informatics, School of Computing, Clemson University, Clemson, SC 29634, USA;
- Marzieh Mirzaeibonehkhater
- Department of Electrical and Computer Engineering, Indiana University-Purdue University, Indianapolis, IN 46202, USA;
- Seyed Vahid Moravvej
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran;
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC 3216, Australia;
- Siamak Pedrammehr
- Faculty of Design, Tabriz Islamic Art University, Tabriz 51647-36931, Iran
26
Devault-Tousignant C, Harvie M, Bissada E, Christopoulos A, Tabet P, Guertin L, Bahig H, Ayad T. The use of artificial intelligence in reconstructive surgery for head and neck cancer: a systematic review. Eur Arch Otorhinolaryngol 2024; 281:6057-6068. [PMID: 38662215 DOI: 10.1007/s00405-024-08663-4] [Received: 02/25/2024] [Accepted: 04/05/2024] [Indexed: 04/26/2024]
Abstract
OBJECTIVES The popularity of artificial intelligence (AI) in head and neck cancer (HNC) management is increasing, but postoperative complications remain prevalent and are the main factor that impacts prognosis after surgery. Hence, recent studies aim to assess new AI models to evaluate their ability to predict free flap complications more effectively than traditional algorithms. This systematic review aims to summarize current evidence on the utilization of AI models to predict complications following reconstructive surgery for HNC. METHODS A combination of MeSH terms and keywords was used to cover the following three subjects: "HNC," "artificial intelligence," and "free flap or reconstructive surgery." The electronic literature search was performed in three relevant databases: Medline (Ovid), Embase (Ovid), and Cochrane. Quality appraisal of the included studies was conducted using the TRIPOD Statement. RESULTS The review included a total of 5 manuscripts (n = 5) covering 7524 patients. Across studies, the highest area under the receiver operating characteristic curve (AUROC) value achieved was 0.824, by the Auto-WEKA model; however, only 20% of reported AUROCs exceeded 0.70. One study concluded that most AI models were comparable or inferior in performance to conventional logistic regression. The strongest predictors of complications were flap type, smoking status, tumour location, and age. DISCUSSION Some models showed promising results, and the predictors identified across studies differed from those found in the existing literature, showing the added value of AI models. However, the algorithms showed inconsistent results, underscoring the need for better-powered studies with larger databases before clinical implementation.
Affiliation(s)
- Cyril Devault-Tousignant
- Faculty of Medicine, McGill University, 3605 de la Montagne Street, Montreal, QC, H3G 2M1, Canada.
- Myriam Harvie
- Faculty of Medicine, University of Montreal, Montreal, QC, Canada
- Eric Bissada
- Division of Otolaryngology Head and Neck Surgery, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Apostolos Christopoulos
- Division of Otolaryngology Head and Neck Surgery, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Paul Tabet
- Division of Otolaryngology Head and Neck Surgery, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Louis Guertin
- Division of Otolaryngology Head and Neck Surgery, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Houda Bahig
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Tareck Ayad
- Division of Otolaryngology Head and Neck Surgery, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
27
Kral J, Hradis M, Buzga M, Kunovsky L. Exploring the benefits and challenges of AI-driven large language models in gastroenterology: Think out of the box. Biomed Pap Med Fac Univ Palacky Olomouc Czech Repub 2024; 168:277-283. [PMID: 39234774 DOI: 10.5507/bp.2024.027] [Received: 04/14/2024] [Accepted: 08/16/2024] [Indexed: 09/06/2024] Open
Abstract
Artificial Intelligence (AI) has evolved significantly over the past decades, from its early concepts in the 1950s to the present era of deep learning and natural language processing. Advanced large language models (LLMs), such as the Chat Generative Pre-trained Transformer (ChatGPT), are trained to generate human-like text responses. This technology has the potential to revolutionize various aspects of gastroenterology, including diagnosis, treatment, education, and decision-making support. The benefits of using LLMs in gastroenterology could include accelerating diagnosis and treatment, providing personalized care, enhancing education and training, assisting in decision-making, and improving communication with patients. However, drawbacks and challenges such as limited AI capability, training on possibly biased data, data errors, security and privacy concerns, and implementation costs must be addressed to ensure the responsible and effective use of this technology. The future of LLMs in gastroenterology relies on the ability to process and analyse large amounts of data, identify patterns, and summarize information, and thus assist physicians in creating personalized treatment plans. As AI advances, LLMs will become more accurate and efficient, allowing for faster diagnosis and treatment of gastroenterological conditions. Ensuring effective collaboration between AI developers, healthcare professionals, and regulatory bodies is essential for the responsible and effective use of this technology. By finding the right balance between AI and human expertise and addressing the limitations and risks associated with its use, LLMs can play an increasingly significant role in gastroenterology, contributing to better patient care and supporting doctors in their work.
Affiliation(s)
- Jan Kral
- Department of Internal Medicine, University Hospital Motol and Second Faculty of Medicine, Charles University, Prague, Czech Republic
- Department of Hepatogastroenterology, Institute for Clinical and Experimental Medicine, Prague, Czech Republic
- Michal Hradis
- MAIA LABS s.r.o., Brno, Czech Republic
- Faculty of Information Technology, University of Technology, Brno, Czech Republic
- Marek Buzga
- Department of Physiology and Pathophysiology, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
- Institute of Laboratory Medicine, University Hospital Ostrava, Ostrava, Czech Republic
- Lumir Kunovsky
- 2nd Department of Internal Medicine - Gastroenterology and Geriatrics, University Hospital Olomouc and Faculty of Medicine and Dentistry, Palacky University Olomouc, Olomouc, Czech Republic
- Department of Surgery, University Hospital Brno and Faculty of Medicine, Masaryk University, Brno, Czech Republic
- Department of Gastroenterology and Digestive Endoscopy, Masaryk Memorial Cancer Institute, Brno, Czech Republic
28
Avanzo M, Stancanello J, Pirrone G, Drigo A, Retico A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers (Basel) 2024; 16:3702. PMID: 39518140. PMCID: PMC11545079. DOI: 10.3390/cancers16213702.
Abstract
Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphical processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician's decision and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with a double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have both improved the capabilities of making an automated reading of medical images and moved AI to new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as 'black boxes' that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems. 
Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications.
Affiliation(s)
- Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Giovanni Pirrone
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Annalisa Drigo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
29
Jia H, Tang S, Guo W, Pan P, Qian Y, Hu D, Dai Y, Yang Y, Geng C, Lv H. Differential diagnosis of congenital ventricular septal defect and atrial septal defect in children using deep learning-based analysis of chest radiographs. BMC Pediatr 2024; 24:661. PMID: 39407181. PMCID: PMC11476512. DOI: 10.1186/s12887-024-05141-y.
Abstract
BACKGROUND Children with atrial septal defect (ASD) and ventricular septal defect (VSD) are frequently examined for respiratory symptoms, even when no underlying disease is found. Chest radiographs often serve as the primary imaging modality. It is crucial to differentiate between ASD and VSD due to their distinct treatments. PURPOSE To assess whether deep learning analysis of chest radiographs can effectively differentiate between ASD and VSD in children. METHODS In this retrospective study, chest radiographs and corresponding radiology reports from 1,194 patients were analyzed. The cases were categorized into a training set and a validation set, comprising 480 cases of ASD and 480 cases of VSD, and a test set with 115 cases of ASD and 119 cases of VSD. Four deep learning network models (ResNet-CBAM, InceptionV3, EfficientNet, and ViT) were developed for training, and a fivefold cross-validation method was employed to optimize the models. Receiver operating characteristic (ROC) curve analyses were conducted to assess the performance of each model. The most effective algorithm was compared with the interpretations provided by two radiologists on 234 images from the test group. RESULTS The average accuracy, sensitivity, and specificity of the four deep learning models in the differential diagnosis of VSD and ASD were all higher than 70%. The AUC values of ResNet-CBAM, InceptionV3, EfficientNet, and ViT were 0.87, 0.91, 0.90, and 0.66, respectively. Statistical analysis showed that the differential diagnostic efficiency of InceptionV3 was the highest, reaching 87% classification accuracy. The accuracy of InceptionV3 in the differential diagnosis of VSD and ASD was higher than that of the radiologists.
CONCLUSIONS Deep learning methods such as InceptionV3, applied to chest radiographs in this study, showed good performance for the differential diagnosis of congenital VSD and ASD. They may assist radiologists in diagnosis, education, and training, and reduce missed diagnoses and misdiagnoses.
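The AUC values used above to compare the four models reduce to a rank statistic: the fraction of (positive, negative) score pairs the model orders correctly. A minimal pure-Python sketch (an editorial illustration, not the study's code; function name and toy data are our own):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both classes")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly separated toy example scores 1.0; one mis-ranked pair lowers it.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
print(roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.6]))  # 0.75
```

The same pairwise-ranking view explains why AUC is insensitive to the classification threshold, unlike the accuracy, sensitivity, and specificity figures reported alongside it.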
Affiliation(s)
- Huihui Jia
- Department of Radiology, Children's Hospital of Soochow University, 215025, Suzhou, P. R. China
- Songqiao Tang
- School of Electronic & Information Engineering, Suzhou University of Science and Technology, 215009, Suzhou, China
- Wanliang Guo
- Department of Radiology, Children's Hospital of Soochow University, 215025, Suzhou, P. R. China
- Peng Pan
- Department of Radiology, Children's Hospital of Soochow University, 215025, Suzhou, P. R. China
- Yufeng Qian
- Department of Radiology, Children's Hospital of Soochow University, 215025, Suzhou, P. R. China
- Dongliang Hu
- Department of Radiology, Children's Hospital of Soochow University, 215025, Suzhou, P. R. China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 215163, Suzhou, China
- Yang Yang
- Department of Radiology, Children's Hospital of Soochow University, 215025, Suzhou, P. R. China
- Chen Geng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 215163, Suzhou, China
- Jinan Guoke Medical Technology Development Co., Ltd, 250102, Shandong, China
- Haitao Lv
- Department of Pediatric Cardiology, Children's Hospital of Soochow University, 215025, Suzhou, P. R. China
30
Givnish TJ. Deep learning sharpens vistas on biodiversity mapping. Proc Natl Acad Sci U S A 2024; 121:e2416358121. PMID: 39348547. PMCID: PMC11474093. DOI: 10.1073/pnas.2416358121.
Affiliation(s)
- Thomas J. Givnish
- Department of Botany, University of Wisconsin-Madison, Madison, WI 53706
31
Tanaka K, Kato K, Nonaka N, Seita J. Efficient HLA imputation from sequential SNPs data by transformer. J Hum Genet 2024; 69:533-540. PMID: 39095607. PMCID: PMC11422163. DOI: 10.1038/s10038-024-01278-x.
Abstract
Human leukocyte antigen (HLA) genes are associated with a variety of diseases, yet direct typing of HLA alleles is both time-consuming and costly. Consequently, various imputation methods leveraging sequential single nucleotide polymorphism (SNP) data have been proposed, employing either statistical or deep learning models, such as the convolutional neural network (CNN)-based model DEEP*HLA. However, these methods exhibit limited imputation efficiency for infrequent alleles and necessitate a large reference dataset. In this context, we have developed a Transformer-based model for HLA allele imputation, named "HLA Reliable IMputatioN by Transformer (HLARIMNT)," designed to exploit the sequential nature of SNP data. We evaluated HLARIMNT's performance using two distinct reference panels, the Pan-Asian reference panel (n = 530) and the Type 1 Diabetes Genetics Consortium (T1DGC) reference panel (n = 5225), alongside a combined panel (n = 1060). HLARIMNT demonstrated superior accuracy to DEEP*HLA across several indices, particularly for infrequent alleles. Furthermore, we explored the impact of varying training data sizes on imputation accuracy, finding that HLARIMNT consistently outperformed across all data sizes. These findings suggest that Transformer-based models can efficiently impute not only HLA types but potentially other gene types from sequential SNP data.
Affiliation(s)
- Kaho Tanaka
- Faculty of Engineering, Kyoto University, Kyoto, Japan
- Advanced Data Science Project, RIKEN Information R&D and Strategy Headquarters, RIKEN, Tokyo, Japan
- Kosuke Kato
- Advanced Data Science Project, RIKEN Information R&D and Strategy Headquarters, RIKEN, Tokyo, Japan
- Naoki Nonaka
- Advanced Data Science Project, RIKEN Information R&D and Strategy Headquarters, RIKEN, Tokyo, Japan
- Jun Seita
- Advanced Data Science Project, RIKEN Information R&D and Strategy Headquarters, RIKEN, Tokyo, Japan.
32
Egger J, Gsaxner C, Luijten G, Chen J, Chen X, Bian J, Kleesiek J, Puladi B. Is the Apple Vision Pro the Ultimate Display? A First Perspective and Survey on Entering the Wonderland of Precision Medicine. JMIR Serious Games 2024; 12:e52785. PMID: 39292499. PMCID: PMC11447423. DOI: 10.2196/52785.
Abstract
At the Worldwide Developers Conference in June 2023, Apple introduced the Vision Pro. The Apple Vision Pro (AVP) is a mixed reality headset; more specifically, it is a virtual reality device with an additional video see-through capability. The video see-through capability turns the AVP into an augmented reality (AR) device. The AR feature is enabled by streaming the real world via cameras on the (virtual reality) screens in front of the user's eyes. This is, of course, not unique and is similar to other devices, such as the Varjo XR-3 (Varjo Technologies Oy). Nevertheless, the AVP has some interesting features, such as an inside-out screen that can show the headset wearer's eyes to "outsiders," and a button on the top, called the "digital crown," that allows a seamless blend of digital content with the user's physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile, compared to the Varjo XR-3. This could actually come closer to "The Ultimate Display," which Ivan Sutherland had already sketched in 1965. After a great response from the media and social networks to the release, we were able to test and review the new AVP ourselves in March 2024. Including an expert survey with 13 of our colleagues after testing the AVP in our institute, this Viewpoint explores whether the AVP can overcome clinical challenges that AR especially still faces in the medical domain; we also go beyond this and discuss whether the AVP could support clinicians in essential tasks to allow them to spend more time with their patients.
Affiliation(s)
- Jan Egger
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), Essen University Hospital (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Christina Gsaxner
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Department of Oral and Maxillofacial Surgery & Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Gijs Luijten
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften (ISAS), Dortmund, Germany
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Institute of Medical Robotic, Shanghai Jiao Tong University, Shanghai, China
- Jiang Bian
- Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, United States
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- Department of Physics, TU Dortmund University, Dortmund, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery & Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
33
Wang W, Chen J, Han G, Shi X, Qian G. Application of Object Detection Algorithms in Non-Destructive Testing of Pressure Equipment: A Review. Sensors (Basel) 2024; 24:5944. PMID: 39338689. PMCID: PMC11435956. DOI: 10.3390/s24185944.
Abstract
Non-destructive testing (NDT) techniques play a crucial role in industrial production, aerospace, healthcare, and the inspection of special equipment, serving as an indispensable part of assessing the safety condition of pressure equipment. Among these, the analysis of NDT data stands as a critical link in evaluating equipment safety. In recent years, object detection techniques have gradually been applied to the analysis of NDT data in pressure equipment inspection, yielding significant results. This paper comprehensively reviews the current applications and development trends of object detection algorithms in NDT technology for pressure-bearing equipment, focusing on algorithm selection, data augmentation, and intelligent defect recognition based on object detection algorithms. Additionally, it explores open research challenges of integrating GAN-based data augmentation and unsupervised learning to further enhance the intelligent application and performance of object detection technology in NDT for pressure-bearing equipment while discussing techniques and methods to improve the interpretability of deep learning models. Finally, by summarizing current research and offering insights for future directions, this paper aims to provide researchers and engineers with a comprehensive perspective to advance the application and development of object detection technology in NDT for pressure-bearing equipment.
Affiliation(s)
- Weihua Wang
- State Key Laboratory of Low-Carbon Thermal Power Generation Technology and Equipments, China Special Equipment Inspection and Research Institute, Beijing 100029, China
- China Special Equipment Inspection and Research Institute, Beijing 100029, China
- Jiugong Chen
- State Key Laboratory of Low-Carbon Thermal Power Generation Technology and Equipments, China Special Equipment Inspection and Research Institute, Beijing 100029, China
- China Special Equipment Inspection and Research Institute, Beijing 100029, China
- Gangsheng Han
- State Key Laboratory of Low-Carbon Thermal Power Generation Technology and Equipments, China Special Equipment Inspection and Research Institute, Beijing 100029, China
- China Special Equipment Inspection and Research Institute, Beijing 100029, China
- Xiushan Shi
- State Key Laboratory of Low-Carbon Thermal Power Generation Technology and Equipments, China Special Equipment Inspection and Research Institute, Beijing 100029, China
- China Special Equipment Inspection and Research Institute, Beijing 100029, China
- Gong Qian
- State Key Laboratory of Low-Carbon Thermal Power Generation Technology and Equipments, China Special Equipment Inspection and Research Institute, Beijing 100029, China
- China Special Equipment Inspection and Research Institute, Beijing 100029, China
34
Koyama H. Machine learning application in otology. Auris Nasus Larynx 2024; 51:666-673. PMID: 38704894. DOI: 10.1016/j.anl.2024.04.003.
Abstract
This review presents a comprehensive history of Artificial Intelligence (AI) in the context of the revolutionary application of machine learning (ML) to medical research and clinical practice, particularly for the benefit of researchers interested in the application of ML in otology. To this end, we discuss the key components of ML: input, output, and algorithms. In particular, some representative algorithms commonly used in medical research are discussed. Subsequently, we review ML applications in otology research, including diagnosis, identification of influential factors, and surgical outcome prediction. In the context of surgical outcome prediction, specific surgical treatments, including cochlear implantation, active middle ear implantation, tympanoplasty, and vestibular schwannoma resection, are considered. Finally, we highlight the obstacles and challenges that need to be overcome in future research.
Affiliation(s)
- Hajime Koyama
- Department of Otorhinolaryngology and Head and Neck Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
35
Zhao Y, Li X, Zhou C, Peng H, Zheng Z, Chen J, Ding W. A review of cancer data fusion methods based on deep learning. Information Fusion 2024; 108:102361. DOI: 10.1016/j.inffus.2024.102361.
36
Jia PF, Li YR, Wang LY, Lu XR, Guo X. Radiomics in esophagogastric junction cancer: A scoping review of current status and advances. Eur J Radiol 2024; 177:111577. PMID: 38905802. DOI: 10.1016/j.ejrad.2024.111577.
Abstract
PURPOSE This scoping review aimed to understand the advances in radiomics in esophagogastric junction (EGJ) cancer and assess the current status of radiomics in EGJ cancer. METHODS We conducted systematic searches of PubMed, Embase, and Web of Science databases from January 18, 2012, to January 15, 2023, to identify radiomics articles related to EGJ cancer. Two researchers independently screened the literature, extracted data, and assessed the quality of the studies using the Radiomics Quality Score (RQS) and the METhodological RadiomICs Score (METRICS) tool, respectively. RESULTS A total of 120 articles were retrieved from the three databases, and after screening, only six papers met the inclusion criteria. These studies investigated the role of radiomics in differentiating adenocarcinoma from squamous carcinoma, diagnosing T-stage, evaluating HER2 overexpression, predicting response to neoadjuvant therapy, and prognosis in EGJ cancer. The median score percentage of RQS was 34.7% (range from 22.2% to 38.9%). The median score percentage of METRICS was 71.2% (range from 58.2% to 84.9%). CONCLUSION Although there is a considerable difference between the RQS and METRICS scores of the included literature, we believe that the research value of radiomics in EGJ cancer has been revealed. In the future, while actively exploring more diagnostic, prognostic, and biological correlation studies in EGJ cancer, greater emphasis should be placed on the standardization and clinical application of radiomics.
Affiliation(s)
- Ping-Fan Jia
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Yu-Ru Li
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Lu-Yao Wang
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Xiao-Rui Lu
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Xing Guo
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China.
37
Bellmann L, Wiederhold AJ, Trübe L, Twerenbold R, Ückert F, Gottfried K. Introducing Attribute Association Graphs to Facilitate Medical Data Exploration: Development and Evaluation Using Epidemiological Study Data. JMIR Med Inform 2024; 12:e49865. PMID: 39046780. PMCID: PMC11306949. DOI: 10.2196/49865.
Abstract
BACKGROUND Interpretability and intuitive visualization facilitate medical knowledge generation through big data. In addition, robustness to high-dimensional and missing data is a requirement for statistical approaches in the medical domain. A method tailored to the needs of physicians must meet all of these criteria. OBJECTIVE This study aims to develop an accessible tool for visual data exploration without the need for programming knowledge, adjusting complex parameterizations, or handling missing data. We sought to frame the statistical analysis in the setting of disease and control cohorts familiar to clinical researchers. We aimed to guide the user by identifying and highlighting data patterns associated with disease and to reveal relations between attributes within the data set. METHODS We introduce the attribute association graph, a novel graph structure designed for visual data exploration using robust statistical metrics. The nodes capture frequencies of participant attributes in disease and control cohorts as well as deviations between groups. The edges represent conditional relations between attributes. The graph is visualized using the Neo4j (Neo4j, Inc) data platform and can be interactively explored without technical knowledge. Nodes with high deviations between cohorts and edges with noticeable conditional relationships are highlighted to guide the user during exploration. The graph is accompanied by a dashboard visualizing variable distributions. For evaluation, we applied the graph and dashboard to the Hamburg City Health Study data set, a large cohort study conducted in the city of Hamburg, Germany. All data structures can be accessed freely by researchers, physicians, and patients. In addition, we developed a user test conducted with physicians incorporating the System Usability Scale, individual questions, and user tasks.
RESULTS We evaluated the attribute association graph and dashboard through an exemplary data analysis of participants with general cardiovascular disease in the Hamburg City Health Study data set. All results extracted from the graph structure and dashboard are in accordance with findings from the literature, except for unusually low cholesterol levels in participants with cardiovascular disease, which could be induced by medication. Furthermore, 95% CIs of Pearson correlation coefficients were calculated for all associations identified during the data analysis, confirming the results. Finally, a user test with 10 physicians assessed the usability of the proposed methods, yielding a System Usability Scale score of 70.5% and an average successful task completion rate of 81.4%. CONCLUSIONS The proposed attribute association graph and dashboard enable intuitive visual data exploration. They are robust to high-dimensional as well as missing data and require no parameterization. The usability for clinicians was confirmed via a user test, and the validity of the statistical results was confirmed by associations known from the literature and standard statistical inference.
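The 95% CIs for Pearson correlation coefficients mentioned above are conventionally obtained with the Fisher z-transformation. A minimal sketch (an editorial illustration under that standard assumption, not the authors' implementation):

```python
import math

def pearson_ci(r, n, z_crit=1.959964):
    """95% CI for a Pearson correlation via the Fisher z-transformation:
    z = atanh(r) is approximately normal with SE = 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Example: an observed r of 0.50 from n = 100 participants.
lo, hi = pearson_ci(0.5, 100)
print(f"r = 0.50, n = 100 -> 95% CI ({lo:.3f}, {hi:.3f})")
```

Transforming to the z scale and back keeps the interval inside (-1, 1) and corrects the skew of the sampling distribution of r near ±1.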
Affiliation(s)
- Louis Bellmann
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Leona Trübe
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Raphael Twerenbold
- Department of Cardiology, University Heart & Vascular Center Hamburg, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- German Center for Cardiovascular Research (DZHK) Partner Site Hamburg-Kiel-Lübeck, Hamburg, Germany
- University Center of Cardiovascular Science, University Heart & Vascular Center Hamburg, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Frank Ückert
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Karl Gottfried
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
38
Fedorin I, Smielova A, Nastenko M, Krasnoshchok I. From Sprint to Recovery: LSTM-Powered Heart Rate Recovery Forecasting in HIIT Sessions. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. PMID: 40039643. DOI: 10.1109/embc53108.2024.10781668.
Abstract
In recent years, growing interest in applying artificial intelligence to the healthcare domain, especially in the monitoring and control of health status during fitness activities, has opened new opportunities for understanding and enhancing human performance and health. This interdisciplinary approach, merging cutting-edge AI with exercise physiology, offers promising avenues for personalized healthcare, optimized athletic training, and advanced health monitoring techniques. The current study addresses a critical aspect of exercise physiology: forecasting heart rate (HR) recovery patterns following high-intensity intervals. To this end, a comprehensive deep learning framework is developed that integrates signal processing techniques with advanced deep learning architectures to facilitate real-time HR measurements and predict future HR dynamics during high-intensity interval training. Central to the proposed approach is a long short-term memory (LSTM) based encoder-decoder architecture. To enhance the model's accuracy and robustness, a task-specific loss function is employed that not only calculates standard HR errors but also incorporates HR pattern slopes and angles. This approach has achieved promising results, with the model demonstrating strong performance: the mean absolute error in HR forecasting is 3.5 bpm for the encoder and 3.8 bpm for the decoder parts.
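A loss that penalizes both pointwise HR error and the slope of the recovery curve, as described above, can be sketched in plain Python. The additive form, the MAE choice, and the weighting are our assumptions for illustration; the paper's actual loss also involves angles and is implemented inside an LSTM training loop:

```python
def slope_aware_loss(pred, target, slope_weight=1.0):
    """Mean absolute HR error plus a weighted MAE on consecutive
    differences (the slope of the recovery curve)."""
    n = len(pred)
    mae = sum(abs(p - t) for p, t in zip(pred, target)) / n
    d_pred = [pred[i + 1] - pred[i] for i in range(n - 1)]
    d_true = [target[i + 1] - target[i] for i in range(n - 1)]
    slope_mae = sum(abs(a - b) for a, b in zip(d_pred, d_true)) / (n - 1)
    return mae + slope_weight * slope_mae

# A forecast that recovers too slowly is penalized for both level and slope.
print(slope_aware_loss([100, 98, 96], [100, 97, 94]))  # 2.0
```

The slope term rewards forecasts that match the shape of the recovery, not just its level, which is the stated motivation for the task-specific loss.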
39
Allam AH, Eltewacy NK, Alabdallat YJ, Owais TA, Salman S, Ebada MA. Knowledge, attitude, and perception of Arab medical students towards artificial intelligence in medicine and radiology: A multi-national cross-sectional study. Eur Radiol 2024; 34:1-14. PMID: 38150076. PMCID: PMC11213794. DOI: 10.1007/s00330-023-10509-2.
Abstract
OBJECTIVES We aimed to assess undergraduate medical students' knowledge, attitude, and perception regarding artificial intelligence (AI) in medicine. METHODS A multi-national, multi-center cross-sectional study was conducted from March to April 2022, targeting undergraduate medical students in nine Arab countries. The study utilized a web-based questionnaire, with data collection carried out with the help of national leaders and local collaborators. Logistic regression analysis was performed to identify predictors of knowledge, attitude, and perception among the participants. Additionally, cluster analysis was employed to identify shared patterns within their responses. RESULTS Of the 4492 students surveyed, 92.4% had not received formal AI training. Regarding AI and deep learning (DL), 87.1% exhibited a low level of knowledge. Most students (84.9%) believed AI would revolutionize medicine and radiology, with 48.9% agreeing that it could reduce the need for radiologists. Students with high/moderate AI knowledge and training had higher odds of agreeing to endorse AI replacing radiologists, reducing their numbers, and being less likely to consider radiology as a career compared to those with low knowledge/no AI training. Additionally, the majority agreed that AI would aid in the automated detection and diagnosis of pathologies. CONCLUSIONS Arab medical students exhibit a notable deficit in their knowledge and training pertaining to AI. Despite this, they hold a positive perception of AI implementation in medicine and radiology, demonstrating a clear understanding of its significance for the healthcare system and medical curriculum. CLINICAL RELEVANCE STATEMENT This study highlights the need for widespread education and training in artificial intelligence for Arab medical students, indicating its significance for healthcare systems and medical curricula. 
KEY POINTS • Arab medical students demonstrate a significant knowledge and training gap when it comes to using AI in the fields of medicine and radiology. • Arab medical students recognize the importance of integrating AI into the medical curriculum. Students with a deeper understanding of AI were more likely to agree that all medical students should receive AI education. However, those with previous AI training were less supportive of this idea. • Students with moderate/high AI knowledge and training displayed increased odds of agreeing that AI has the potential to replace radiologists, reduce the demand for their services, and were less inclined to pursue a career in radiology, when compared to students with low knowledge/no AI training.
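The "higher odds" findings above come from logistic regression, where each fitted coefficient is a log-odds term. A minimal sketch of how a coefficient and its standard error translate into an odds ratio with a confidence interval; the coefficient values below are illustrative assumptions, not the study's results:

```python
import math

def odds_ratio(beta: float, se: float, z: float = 1.96):
    """Turn a logistic-regression coefficient (log-odds) and its standard
    error into an odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for "prior AI training" predicting agreement
# that AI could replace radiologists (illustrative numbers only).
or_est, ci_lo, ci_hi = odds_ratio(beta=0.47, se=0.12)
print(f"OR = {or_est:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```

An odds ratio above 1 (with a CI excluding 1) is what supports statements like "students with prior training had higher odds of agreeing."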
Affiliation(s)
- Ahmed Hafez Allam
- Faculty of Medicine, Menoufia University, Shebin El-Kom, Menoufia, Egypt.
- Eltewacy Arab Research Group, Cairo, Egypt.
| | - Nael Kamel Eltewacy
- Eltewacy Arab Research Group, Cairo, Egypt
- Faculty of Pharmacy, Beni-Suef University, Beni-Suef, Egypt
| | - Yasmeen Jamal Alabdallat
- Eltewacy Arab Research Group, Cairo, Egypt
- Faculty of Medicine, Hashemite University, Zarqa, Jordan
| | - Tarek A Owais
- Eltewacy Arab Research Group, Cairo, Egypt
- Faculty of Pharmacy, Beni-Suef University, Beni-Suef, Egypt
| | - Saif Salman
- Eltewacy Arab Research Group, Cairo, Egypt
- Mayo Clinic College of Medicine, Jacksonville, FL, USA
| | - Mahmoud A Ebada
- Eltewacy Arab Research Group, Cairo, Egypt
- Faculty of Medicine, Zagazig University, Zagazig, El-Sharkia, Egypt
- Egyptian Fellowship of Neurology, Nasr City Hospital for Health Insurance, Nasr City, Cairo, Egypt
40
Papachristou P, Söderholm M, Pallon J, Taloyan M, Polesie S, Paoli J, Anderson CD, Falk M. Evaluation of an artificial intelligence-based decision support for the detection of cutaneous melanoma in primary care: a prospective real-life clinical trial. Br J Dermatol 2024; 191:125-133. [PMID: 38234043 DOI: 10.1093/bjd/ljae021] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Revised: 01/12/2024] [Accepted: 01/13/2024] [Indexed: 01/19/2024]
Abstract
BACKGROUND In several retrospective studies, the use of artificial intelligence (AI), or machine learning, to assess dermoscopic images of skin lesions for melanoma detection has shown diagnostic accuracy on par with, and in some cases exceeding, that of experienced dermatologists. However, the enthusiasm around these algorithms has not yet been matched by prospective clinical trials performed in authentic clinical settings. In several European countries, including Sweden, the initial clinical assessment of suspected skin cancer is principally conducted in the primary healthcare setting by primary care physicians, with or without access to teledermoscopic support from dermatology clinics. OBJECTIVES To determine the diagnostic performance of an AI-based clinical decision support tool for cutaneous melanoma detection, operated by a smartphone application (app), when used prospectively by primary care physicians to assess skin lesions of concern due to some degree of melanoma suspicion. METHODS This prospective multicentre clinical trial was conducted at 36 primary care centres in Sweden. Physicians photographed skin lesions of concern dermoscopically with the smartphone app, which returned a dichotomous decision-support text regarding evidence for melanoma. Regardless of the app outcome, all lesions underwent standard diagnostic procedures (surgical excision or referral to a dermatologist). After investigations were complete, lesion diagnoses were collected from the patients' medical records and compared with the app's outcome and other lesion data. RESULTS In total, 253 lesions of concern in 228 patients were included, of which 21 proved to be melanomas: 11 thin invasive melanomas and 10 melanomas in situ. 
The app's accuracy in identifying melanomas was reflected in an area under the receiver operating characteristic (AUROC) curve of 0.960 [95% confidence interval (CI) 0.928-0.980], corresponding to a maximum sensitivity and specificity of 95.2% and 84.5%, respectively. For invasive melanomas alone, the AUROC was 0.988 (95% CI 0.965-0.997), corresponding to a maximum sensitivity and specificity of 100% and 92.6%, respectively. CONCLUSIONS The clinical decision support tool evaluated in this investigation showed high diagnostic accuracy when used prospectively in primary care patients, which could add significant clinical value for primary care physicians assessing skin lesions for melanoma.
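The "maximum sensitivity and specificity" pairs reported above correspond to an optimal operating point on the ROC curve, commonly chosen by maximizing Youden's J (sensitivity + specificity − 1). A small sketch of that threshold scan, on made-up risk scores rather than the trial's data:

```python
def youden_optimal(scores_pos, scores_neg, thresholds):
    """Scan candidate thresholds and return the (threshold, sensitivity,
    specificity) triple that maximizes Youden's J = sens + spec - 1."""
    best = None
    for t in thresholds:
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best[1:]

# Toy melanoma-risk scores (hypothetical): higher means more suspicious.
melanoma = [0.9, 0.8, 0.7, 0.6]
benign = [0.5, 0.4, 0.3, 0.2]
t, sens, spec = youden_optimal(melanoma, benign, [i / 10 for i in range(1, 10)])
```

On these perfectly separable toy scores the scan lands on the threshold between the two groups; on real data the chosen point trades sensitivity against specificity, which is why both are reported together.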
Affiliation(s)
- Panagiotis Papachristou
- Division of Family Medicine and Primary Care, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Atrium Healthcare Centre, Region Stockholm, Sweden
| | - My Söderholm
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Ekholmen Primary Healthcare Centre, Region Östergötland, Linköping, Sweden
| | - Jon Pallon
- Department of Clinical Sciences in Malmö, Family Medicine, Lund University, Malmö, Sweden
- Department of Research and Development, Region Kronoberg, Växjö, Sweden
| | - Marina Taloyan
- Division of Family Medicine and Primary Care, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Atrium Healthcare Centre, Region Stockholm, Sweden
| | - Sam Polesie
- Region Västra Götaland, Sahlgrenska University Hospital, Department of Dermatology and Venereology, Gothenburg, Sweden
- Department of Dermatology and Venereology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - John Paoli
- Region Västra Götaland, Sahlgrenska University Hospital, Department of Dermatology and Venereology, Gothenburg, Sweden
- Department of Dermatology and Venereology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Chris D Anderson
- Department of Biomedical and Clinical Sciences, Division of Dermatology and Venereology, Linköping University, Linköping, Sweden
| | - Magnus Falk
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Region Östergötland, Kärna Primary Healthcare Centre, Linköping, Sweden
41
Chen Y, Lin F, Wang K, Chen F, Wang R, Lai M, Chen C, Wang R. Development of a predictive model for 1-year postoperative recovery in patients with lumbar disk herniation based on deep learning and machine learning. Front Neurol 2024; 15:1255780. [PMID: 38919973 PMCID: PMC11197993 DOI: 10.3389/fneur.2024.1255780] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Accepted: 05/23/2024] [Indexed: 06/27/2024] Open
Abstract
Background The aim of this study is to develop a predictive model utilizing deep learning and machine learning techniques to inform clinical decision-making by predicting the 1-year postoperative recovery of patients with lumbar disk herniation. Methods The clinical data of 470 inpatients who underwent tubular microdiscectomy (TMD) between January 2018 and January 2021 were retrospectively collected as candidate variables. The dataset was randomly divided into a training set (n = 329) and a test set (n = 141) using a 10-fold cross-validation technique. Various deep learning and machine learning algorithms, including Random Forests, Extreme Gradient Boosting, Support Vector Machines, Extra Trees, K-Nearest Neighbors, Logistic Regression, Light Gradient Boosting Machine, and a multilayer perceptron (MLP), were employed to develop predictive models for the recovery of patients with lumbar disk herniation 1 year after surgery. The cure rate derived from the lumbar JOA score 1 year after TMD was used as the outcome indicator. The primary evaluation metric was the area under the receiver operating characteristic curve (AUC), with additional measures including decision curve analysis (DCA), accuracy, sensitivity, and specificity, among others. Results The heat map of the correlation matrix revealed low inter-feature correlation. The predictive models were constructed using 15 variables after feature engineering. Among the eight algorithms utilized, the MLP algorithm demonstrated the best performance. Conclusion Our study findings demonstrate that the MLP algorithm provides superior predictive performance for the recovery of patients with lumbar disk herniation 1 year after surgery.
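Comparing eight learners as above rests on the bookkeeping of k-fold cross-validation: partition the indices, train on k−1 folds, score on the held-out fold, and average. A bare-bones sketch, with a trivial majority-class baseline standing in for the study's actual algorithms (illustrative only):

```python
from collections import Counter

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k interleaved folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

def cv_accuracy(X, y, k, fit, predict):
    """Mean held-out accuracy of a fit/predict pair across k folds."""
    accs = []
    for train, test in k_fold_indices(len(X), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(predict(model, X[i]) == y[i] for i in test)
        accs.append(hits / len(test))
    return sum(accs) / k

# Majority-class baseline: ignores the features entirely.
fit_majority = lambda X, y: Counter(y).most_common(1)[0][0]
predict_majority = lambda model, x: model

X = list(range(10))
y = [0] * 7 + [1] * 3
acc = cv_accuracy(X, y, k=5, fit=fit_majority, predict=predict_majority)
```

Swapping `fit`/`predict` for each candidate algorithm and comparing the averaged scores is the pattern behind "the MLP algorithm demonstrated the best performance."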
Affiliation(s)
- Yan Chen
- Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
- Fujian Medical University Union Hospital, Fuzhou, Fujian, China
| | - Fabin Lin
- Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
- Fujian Medical University Union Hospital, Fuzhou, Fujian, China
| | - Kaifeng Wang
- Fujian Medical University, Fuzhou, Fujian, China
| | - Feng Chen
- Fujian Medical University, Fuzhou, Fujian, China
| | - Ruxian Wang
- Fujian Medical University, Fuzhou, Fujian, China
| | - Minyun Lai
- Fujian Medical University, Fuzhou, Fujian, China
| | - Chunmei Chen
- Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
- Fujian Medical University Union Hospital, Fuzhou, Fujian, China
| | - Rui Wang
- Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
- Fujian Medical University Union Hospital, Fuzhou, Fujian, China
42
Cheng CT, Lin HH, Hsu CP, Chen HW, Huang JF, Hsieh CH, Fu CY, Chung IF, Liao CH. Deep Learning for Automated Detection and Localization of Traumatic Abdominal Solid Organ Injuries on CT Scans. J Imaging Inform Med 2024; 37:1113-1123. [PMID: 38366294 PMCID: PMC11169164 DOI: 10.1007/s10278-024-01038-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2023] [Revised: 01/31/2024] [Accepted: 02/01/2024] [Indexed: 02/18/2024]
Abstract
Computed tomography (CT) is the most commonly used diagnostic modality for blunt abdominal trauma (BAT), significantly influencing management approaches. Deep learning models (DLMs) have shown great promise in enhancing various aspects of clinical practice, yet literature on the use of DLMs specifically for trauma image evaluation remains limited. In this study, we developed a DLM aimed at detecting solid organ injuries to assist medical professionals in rapidly identifying life-threatening injuries. The study enrolled patients from a single trauma center who received abdominal CT scans between 2008 and 2017. Patients with spleen, liver, or kidney injury were categorized as the solid organ injury group, while others were considered negative cases. Only images acquired at the trauma center were included. A subset of images acquired in the last year was designated as the test set, and the remaining images were utilized to train and validate the detection models. The performance of each model was assessed using metrics such as the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value at the best Youden-index operating point. The models were developed using 1302 (87%) scans for training and tested on 194 (13%) scans. The spleen injury model demonstrated an accuracy of 0.938 and a specificity of 0.952. The accuracy and specificity of the liver injury model were 0.820 and 0.847, respectively. The kidney injury model showed an accuracy of 0.959 and a specificity of 0.989. We developed a DLM that can automate the detection of solid organ injuries on abdominal CT scans with acceptable diagnostic accuracy. It cannot replace clinicians, but it has potential as a tool to accelerate therapeutic decision-making in trauma care.
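All of the per-organ metrics quoted above derive from a single 2x2 confusion matrix at the chosen operating point. A minimal sketch; the counts below are hypothetical, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical organ-injury detection counts, for illustration only.
m = diagnostic_metrics(tp=45, fp=10, tn=90, fn=5)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of injuries in the test set, which is why all five are reported.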
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Hou-Hsien Lin
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Chih-Po Hsu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Huan-Wu Chen
- Department of Medical Imaging & Intervention, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Jen-Fu Huang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Chi-Hsun Hsieh
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Chih-Yuan Fu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - I-Fang Chung
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan.
43
Saha A, Ganie SM, Dutta Pramanik PK, Yadav RK, Mallik S, Zhao Z. Correction: VER-Net: a hybrid transfer learning model for lung cancer detection using CT scan images. BMC Med Imaging 2024; 24:128. [PMID: 38822231 PMCID: PMC11140995 DOI: 10.1186/s12880-024-01315-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/02/2024] Open
Affiliation(s)
- Anindita Saha
- Department of Computing Science and Engineering, IFTM University, Moradabad, Uttar Pradesh, India
| | - Shahid Mohammad Ganie
- AI Research Centre, Department of Analytics, School of Business, Woxsen University, Hyderabad, Telangana, 502345, India
| | - Pijush Kanti Dutta Pramanik
- School of Computer Applications and Technology, Galgotias University, Greater Noida, Uttar Pradesh, 203201, India.
| | - Rakesh Kumar Yadav
- Department of Computer Science & Engineering, MSOET, Maharishi University of Information Technology, Lucknow, Uttar Pradesh, India
| | - Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, USA
| | - Zhongming Zhao
- Center for Precision Health, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA.
44
Botnari A, Kadar M, Patrascu JM. A Comprehensive Evaluation of Deep Learning Models on Knee MRIs for the Diagnosis and Classification of Meniscal Tears: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2024; 14:1090. [PMID: 38893617 PMCID: PMC11172202 DOI: 10.3390/diagnostics14111090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2024] [Revised: 05/19/2024] [Accepted: 05/21/2024] [Indexed: 06/21/2024] Open
Abstract
OBJECTIVES This study delves into the cutting-edge field of deep learning techniques, particularly deep convolutional neural networks (DCNNs), which have demonstrated unprecedented potential in assisting radiologists and orthopedic surgeons in precisely identifying meniscal tears. This research aims to evaluate the effectiveness of deep learning models in recognizing, localizing, describing, and categorizing meniscal tears in magnetic resonance images (MRIs). MATERIALS AND METHODS This systematic review was rigorously conducted, strictly following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Extensive searches were conducted on MEDLINE (PubMed), Web of Science, Cochrane Library, and Google Scholar. All identified articles underwent a comprehensive risk of bias analysis. Predictive performance values were either extracted or calculated for quantitative analysis, including sensitivity and specificity. The meta-analysis was performed for all prediction models that identified the presence and location of meniscus tears. RESULTS This study's findings underscore that a range of deep learning models exhibit robust performance in detecting and classifying meniscal tears, in one case surpassing the expertise of musculoskeletal radiologists. Most studies in this review concentrated on identifying tears in the medial or lateral meniscus and even precisely locating tears-whether in the anterior or posterior horn-with exceptional accuracy, as demonstrated by AUC values ranging from 0.83 to 0.94. CONCLUSIONS Based on these findings, deep learning models have showcased significant potential in analyzing knee MR images by learning intricate details within images. They offer precise outcomes across diverse tasks, including segmenting specific anatomical structures and identifying pathological regions. Contributions: This study focused exclusively on DL models for identifying and localizing meniscus tears. 
It presents a meta-analysis that includes eight studies for detecting the presence of a torn meniscus and a meta-analysis of three studies with low heterogeneity that localize and classify the menisci. Another novelty is the analysis of arthroscopic surgery as ground truth. The quality of the studies was assessed against the CLAIM checklist, and the risk of bias was determined using the QUADAS-2 tool.
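Pooling diagnostic accuracy across studies is done with hierarchical models in practice (typically a bivariate random-effects model); a naive sketch that simply sums per-study 2x2 counts still conveys the basic idea. The study counts below are hypothetical:

```python
def pooled_sens_spec(tables):
    """Naively pool sensitivity and specificity by summing per-study
    2x2 counts (tp, fn, tn, fp). Real meta-analyses use bivariate
    random-effects models to respect between-study heterogeneity."""
    tp = sum(t[0] for t in tables)
    fn = sum(t[1] for t in tables)
    tn = sum(t[2] for t in tables)
    fp = sum(t[3] for t in tables)
    return tp / (tp + fn), tn / (tn + fp)

# Three hypothetical meniscal-tear studies as (tp, fn, tn, fp).
studies = [(40, 10, 80, 20), (30, 5, 60, 10), (25, 5, 45, 5)]
sens, spec = pooled_sens_spec(studies)
```

The low heterogeneity noted for the three-study localization meta-analysis is precisely what makes such pooled estimates trustworthy.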
Affiliation(s)
- Alexei Botnari
- Department of Orthopedics, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
| | - Manuella Kadar
- Department of Computer Science, Faculty of Informatics and Engineering, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
| | - Jenel Marian Patrascu
- Department of Orthopedics-Traumatology, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania;
45
Cheng C, Liang X, Guo D, Xie D. Application of Artificial Intelligence in Shoulder Pathology. Diagnostics (Basel) 2024; 14:1091. [PMID: 38893618 PMCID: PMC11171621 DOI: 10.3390/diagnostics14111091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2024] [Revised: 05/16/2024] [Accepted: 05/20/2024] [Indexed: 06/21/2024] Open
Abstract
Artificial intelligence (AI) refers to the science and engineering of creating intelligent machines that imitate and extend human intelligence. Given the ongoing trend toward multidisciplinary integration in modern medicine, numerous studies have investigated the power of AI to address orthopedic-specific problems. One particular area of investigation is shoulder pathology: a range of disorders or abnormalities of the shoulder joint causing pain, inflammation, stiffness, weakness, and reduced range of motion. There has not yet been a comprehensive review of recent advancements in this field. Therefore, the purpose of this review is to evaluate current AI applications in shoulder pathology. The review summarizes several crucial stages of clinical practice, including predictive models and prognosis, diagnosis, treatment, and physical therapy. In addition, the challenges and future development of AI technology are discussed.
Affiliation(s)
- Cong Cheng
- Department of Orthopaedics, People’s Hospital of Longhua, Shenzhen 518000, China;
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China; (X.L.); (D.G.)
| | - Xinzhi Liang
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China; (X.L.); (D.G.)
| | - Dong Guo
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China; (X.L.); (D.G.)
| | - Denghui Xie
- Department of Joint Surgery and Sports Medicine, Center for Orthopedic Surgery, Orthopedic Hospital of Guangdong Province, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China; (X.L.); (D.G.)
- Guangdong Provincial Key Laboratory of Bone and Joint Degeneration Diseases, The Third Affiliated Hospital of Southern Medical University, Guangzhou 510630, China
46
Saha A, Ganie SM, Pramanik PKD, Yadav RK, Mallik S, Zhao Z. VER-Net: a hybrid transfer learning model for lung cancer detection using CT scan images. BMC Med Imaging 2024; 24:120. [PMID: 38789925 PMCID: PMC11127393 DOI: 10.1186/s12880-024-01238-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2023] [Accepted: 03/05/2024] [Indexed: 05/26/2024] Open
Abstract
BACKGROUND Lung cancer is the second most common cancer worldwide, with over two million new cases per year. Early identification would allow healthcare practitioners to handle it more effectively. The advancement of computer-aided detection systems has significantly impacted clinical analysis and decision-making on human disease. Towards this, machine learning and deep learning techniques are being applied successfully. Due to several advantages, transfer learning has become popular for disease detection based on image data. METHODS In this work, we build a novel transfer learning model (VER-Net) by stacking three different transfer learning models to detect lung cancer using lung CT scan images. The model is trained to map the CT scan images to four lung cancer classes. Various measures, such as image preprocessing, data augmentation, and hyperparameter tuning, are taken to improve the efficacy of VER-Net. All the models are trained and evaluated on a multiclass chest CT image dataset. RESULTS The experimental results confirm that VER-Net outperformed the eight other transfer learning models it was compared with. VER-Net scored 91%, 92%, 91%, and 91.3% when tested for accuracy, precision, recall, and F1-score, respectively. Compared to the state of the art, VER-Net has better accuracy. CONCLUSION VER-Net is not only effective for lung cancer detection but may also be useful for other diseases for which CT scan images are available.
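The core idea of a stacked hybrid like VER-Net (run an image through several pretrained backbones, concatenate their features, classify on the fused vector) can be sketched generically. The stub "backbones" below are stand-ins of the author's devising; the actual pretrained networks used by VER-Net are not specified here:

```python
def make_backbone(weights):
    """Stand-in for a frozen pretrained feature extractor: maps an
    image (flat list of floats) to a feature vector via fixed weighted
    sums. A real backbone would be a pretrained CNN."""
    def extract(image):
        return [sum(w * x for w, x in zip(row, image)) for row in weights]
    return extract

def stacked_features(image, backbones):
    """Concatenate every backbone's features: the fused representation
    that a shared classification head would be trained on."""
    feats = []
    for backbone in backbones:
        feats.extend(backbone(image))
    return feats

backbones = [
    make_backbone([[1, 0], [0, 1]]),
    make_backbone([[1, 1], [0, 0]]),
    make_backbone([[2, 0], [0, 2]]),
]
fused = stacked_features([3.0, 4.0], backbones)
```

The rationale for stacking is that differently pretrained backbones capture complementary features, so the fused vector gives the classification head more signal than any single backbone alone.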
Affiliation(s)
- Anindita Saha
- Department of Computing Science and Engineering, IFTM University, Moradabad, Uttar Pradesh, India
| | - Shahid Mohammad Ganie
- AI Research Centre, Department of Analytics, School of Business, Woxsen University, Hyderabad, Telangana, 502345, India
| | - Pijush Kanti Dutta Pramanik
- School of Computer Applications and Technology, Galgotias University, Greater Noida, Uttar Pradesh, 203201, India.
| | - Rakesh Kumar Yadav
- Department of Computer Science & Engineering, MSOET, Maharishi University of Information Technology, Lucknow, Uttar Pradesh, India
| | - Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, USA
| | - Zhongming Zhao
- Center for Precision Health, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA.
47
Xie P, Wang H, Xiao J, Xu F, Liu J, Chen Z, Zhao W, Hou S, Wu D, Ma Y, Xiao J. Development and Validation of an Explainable Deep Learning Model to Predict In-Hospital Mortality for Patients With Acute Myocardial Infarction: Algorithm Development and Validation Study. J Med Internet Res 2024; 26:e49848. [PMID: 38728685 PMCID: PMC11127140 DOI: 10.2196/49848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 10/02/2023] [Accepted: 04/02/2024] [Indexed: 05/12/2024] Open
Abstract
BACKGROUND Acute myocardial infarction (AMI) is one of the most severe cardiovascular diseases and is associated with a high risk of in-hospital mortality. However, the current deep learning models for in-hospital mortality prediction lack interpretability. OBJECTIVE This study aims to establish an explainable deep learning model to provide individualized in-hospital mortality prediction and risk factor assessment for patients with AMI. METHODS In this retrospective multicenter study, we used data for consecutive patients hospitalized with AMI from the Chongqing University Central Hospital between July 2016 and December 2022 and the Electronic Intensive Care Unit Collaborative Research Database. These patients were randomly divided into training (7668/10,955, 70%) and internal test (3287/10,955, 30%) data sets. In addition, data of patients with AMI from the Medical Information Mart for Intensive Care database were used for external validation. Deep learning models were used to predict in-hospital mortality in patients with AMI, and they were compared with linear and tree-based models. The Shapley Additive Explanations method was used to explain the model with the highest area under the receiver operating characteristic curve in both the internal test and external validation data sets to quantify and visualize the features that drive predictions. RESULTS A total of 10,955 patients with AMI who were admitted to Chongqing University Central Hospital or included in the Electronic Intensive Care Unit Collaborative Research Database were randomly divided into a training data set of 7668 (70%) patients and an internal test data set of 3287 (30%) patients. A total of 9355 patients from the Medical Information Mart for Intensive Care database were included for independent external validation. 
In-hospital mortality occurred in 8.74% (670/7668), 8.73% (287/3287), and 9.12% (853/9355) of the patients in the training, internal test, and external validation cohorts, respectively. The Self-Attention and Intersample Attention Transformer model performed best in both the internal test data set and the external validation data set among the 9 prediction models, with the highest area under the receiver operating characteristic curve of 0.86 (95% CI 0.84-0.88) and 0.85 (95% CI 0.84-0.87), respectively. Older age, high heart rate, and low body temperature were the 3 most important predictors of increased mortality, according to the explanations of the Self-Attention and Intersample Attention Transformer model. CONCLUSIONS The explainable deep learning model that we developed could provide estimates of mortality and visual contribution of the features to the prediction for a patient with AMI. The explanations suggested that older age, unstable vital signs, and metabolic disorders may increase the risk of mortality in patients with AMI.
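Shapley Additive Explanations, as used above, require the `shap` library and a fitted model; a model-agnostic cousin, permutation importance, captures the same intuition (how much does performance drop when one feature is scrambled?) and can be sketched with the standard library alone. The toy mortality rule and data below are entirely hypothetical:

```python
import random

def permutation_importance(score, X, y, feature, n_repeats=20, seed=0):
    """Mean drop in a score after shuffling one feature column: a simple
    model-agnostic importance measure (SHAP instead attributes each
    individual prediction to the features)."""
    rng = random.Random(seed)
    base = score(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
        drops.append(base - score(Xp, y))
    return sum(drops) / n_repeats

# Toy rule: predict in-hospital death when age > 70 (hypothetical).
def accuracy(X, y):
    return sum((row[0] > 70) == label for row, label in zip(X, y)) / len(y)

X = [[60, 1.2], [65, 0.4], [80, 0.9], [85, 0.1]]  # columns: [age, noise]
y = [0, 0, 1, 1]
imp_age = permutation_importance(accuracy, X, y, feature=0)
imp_noise = permutation_importance(accuracy, X, y, feature=1)
```

Scrambling the irrelevant noise column leaves accuracy untouched, while scrambling age degrades it, mirroring how the paper's SHAP analysis singled out age, heart rate, and temperature as the dominant predictors.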
Affiliation(s)
- Puguang Xie
- Chongqing Emergency Medical Centre, Chongqing University Central Hospital, School of Medicine, Chongqing University, Chongqing, China
| | - Hao Wang
- Chongqing Emergency Medical Centre, Chongqing University Central Hospital, School of Medicine, Chongqing University, Chongqing, China
| | - Jun Xiao
- Chongqing Emergency Medical Centre, Chongqing University Central Hospital, School of Medicine, Chongqing University, Chongqing, China
| | - Fan Xu
- Chongqing Emergency Medical Centre, Chongqing University Central Hospital, School of Medicine, Chongqing University, Chongqing, China
| | - Jingyang Liu
- Chongqing Emergency Medical Centre, Chongqing University Central Hospital, School of Medicine, Chongqing University, Chongqing, China
| | - Zihang Chen
- Bioengineering College, Chongqing University, Chongqing, China
| | - Weijie Zhao
- Bioengineering College, Chongqing University, Chongqing, China
| | - Siyu Hou
- Bio-Med Informatics Research Centre & Clinical Research Centre, Xinqiao Hospital, Army Medical University, Chongqing, China
| | - Dongdong Wu
- Medical Big Data Research Centre, Chinese People's Liberation Army General Hospital, Beijing, China
| | - Yu Ma
- Chongqing Emergency Medical Centre, Chongqing University Central Hospital, School of Medicine, Chongqing University, Chongqing, China
| | - Jingjing Xiao
- Bio-Med Informatics Research Centre & Clinical Research Centre, Xinqiao Hospital, Army Medical University, Chongqing, China
48
Ferreira A, Li J, Pomykala KL, Kleesiek J, Alves V, Egger J. GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy. Med Image Anal 2024; 93:103100. [PMID: 38340545 DOI: 10.1016/j.media.2024.103100] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 11/20/2023] [Accepted: 01/30/2024] [Indexed: 02/12/2024]
Abstract
With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with uses ranging from disease diagnosis to therapy monitoring. When the dataset is sufficient, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable; for example, rare diseases and privacy issues can lead to restricted data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). Such methods are a valuable asset, especially in healthcare, as the data must be of good quality, realistic, and free of privacy issues. Accordingly, most publications on volumetric GANs are within the medical domain. In this review, we provide a summary of works that generate realistic volumetric synthetic data using GANs. We outline GAN-based methods in these areas with common architectures, loss functions, and evaluation metrics, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.
Affiliation(s)
- André Ferreira
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal; Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, 52074 Aachen, Germany.
- Jianning Li
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany.
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany.
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen, 45147, Germany; TU Dortmund University, Department of Physics, Otto-Hahn-Straße 4, 44227 Dortmund, Germany.
- Victor Alves
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal.
- Jan Egger
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 801, Austria.
49
Yi X, He Y, Gao S, Li M. A review of the application of deep learning in obesity: From early prediction aid to advanced management assistance. Diabetes Metab Syndr 2024; 18:103000. [PMID: 38604060 DOI: 10.1016/j.dsx.2024.103000] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 01/23/2024] [Accepted: 03/29/2024] [Indexed: 04/13/2024]
Abstract
BACKGROUND AND AIMS Obesity is a chronic disease that can cause severe metabolic disorders. Machine learning (ML) techniques, especially deep learning (DL), have proven useful in obesity research. However, there is a dearth of systematic reviews of DL applications in obesity. This article aims to summarize the current trend of DL usage in obesity research. METHODS An extensive literature review was carried out across multiple databases, including PubMed, Embase, Web of Science, Scopus, and Medline, to collate relevant studies published from January 2018 to September 2023. The focus was on research detailing the application of DL in the context of obesity. We distilled key insights about the learning models used, including their development, principal results, and underlying methodologies. RESULTS Forty research articles were ultimately included. These studies fall into three categories: obesity prediction (n = 16); obesity management (n = 13); and body fat estimation (n = 11). CONCLUSIONS This is the first review to examine DL applications in obesity. It reveals DL's superiority in obesity prediction over traditional ML methods, showing promise for multi-omics research. DL also innovates in obesity management through diet, fitness, and environmental analyses. Additionally, DL improves body fat estimation, offering affordable and precise monitoring tools. The study is registered with PROSPERO (ID: CRD42023475159).
Affiliation(s)
- Xinghao Yi
- Department of Endocrinology, NHC Key Laboratory of Endocrinology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Yangzhige He
- Department of Medical Research Center, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing 100730, China
- Shan Gao
- Department of Endocrinology, Xuan Wu Hospital, Capital Medical University, Beijing 10053, China
- Ming Li
- Department of Endocrinology, NHC Key Laboratory of Endocrinology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China.
50
Park D, Kim Y, Kang H, Lee J, Choi J, Kim T, Lee S, Son S, Kim M, Kim I. PECI-Net: Bolus segmentation from video fluoroscopic swallowing study images using preprocessing ensemble and cascaded inference. Comput Biol Med 2024; 172:108241. [PMID: 38489987 DOI: 10.1016/j.compbiomed.2024.108241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2023] [Revised: 01/30/2024] [Accepted: 02/27/2024] [Indexed: 03/17/2024]
Abstract
Bolus segmentation is crucial for the automated detection of swallowing disorders in videofluoroscopic swallowing studies (VFSS). However, accurately segmenting a bolus region in a VFSS image is difficult because VFSS images are translucent, have low contrast and unclear region boundaries, and lack color information. To overcome these challenges, we propose PECI-Net, a network architecture for VFSS image analysis that combines two novel techniques: the preprocessing ensemble network (PEN) and the cascaded inference network (CIN). PEN enhances the sharpness and contrast of the VFSS image by combining multiple preprocessing algorithms in a learnable way. CIN reduces ambiguity in bolus segmentation by using context from other regions through cascaded inference. Moreover, CIN prevents undesirable side effects from unreliably segmented regions by referring to the context in an asymmetric way. In experiments, PECI-Net exhibited higher performance than four recently developed baseline models, outperforming TernausNet, the best among the baseline models, by 4.54% and the widely used UNet by 10.83%. The results of the ablation studies confirm that CIN and PEN are effective in improving bolus segmentation performance.
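The abstract states that PEN combines multiple preprocessing algorithms "in a learnable way". A plausible minimal sketch of that idea is a softmax-weighted blend of several fixed preprocessing outputs; note that the preprocessing functions and the blending scheme below are illustrative assumptions, not the authors' implementation (in the real network the weights would be trained end-to-end with the segmentation loss):

```python
import math

def softmax(ws):
    """Normalize raw (learnable) weights into a convex combination."""
    m = max(ws)
    exps = [math.exp(w - m) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

# Three toy preprocessing variants over a flat list of intensities in [0, 1].
def identity(img):
    return list(img)

def contrast_stretch(img):
    lo, hi = min(img), max(img)
    span = (hi - lo) or 1.0
    return [(p - lo) / span for p in img]

def gamma_correct(img, g=0.5):
    return [p ** g for p in img]

def pen_combine(img, weights):
    """Softmax-weighted blend of the preprocessed variants of one image."""
    variants = [identity(img), contrast_stretch(img), gamma_correct(img)]
    alphas = softmax(weights)
    return [sum(a * v[i] for a, v in zip(alphas, variants))
            for i in range(len(img))]
```

With equal weights the blend is a plain average of the variants; training would shift the weights toward whichever preprocessing best sharpens bolus boundaries in low-contrast VFSS frames.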
Affiliation(s)
- Dougho Park
- Pohang Stroke and Spine Hospital, Pohang, Republic of Korea; School of Convergence Science and Technology, Pohang University of Science and Technology, Pohang, Republic of Korea
- Younghun Kim
- School of CSEE, Handong Global University, Pohang, Republic of Korea
- Harim Kang
- School of CSEE, Handong Global University, Pohang, Republic of Korea
- Junmyeoung Lee
- School of CSEE, Handong Global University, Pohang, Republic of Korea
- Jinyoung Choi
- School of CSEE, Handong Global University, Pohang, Republic of Korea
- Taeyeon Kim
- Pohang Stroke and Spine Hospital, Pohang, Republic of Korea
- Sangeok Lee
- Pohang Stroke and Spine Hospital, Pohang, Republic of Korea
- Seokil Son
- Pohang Stroke and Spine Hospital, Pohang, Republic of Korea
- Minsol Kim
- Pohang Stroke and Spine Hospital, Pohang, Republic of Korea
- Injung Kim
- School of CSEE, Handong Global University, Pohang, Republic of Korea.