1
Jiang J, Li C, Lu J, Sun J, Sun X, Yang J, Wang L, Zuo C, Shi K. Using interpretable deep learning radiomics model to diagnose and predict progression of early AD disease spectrum: a preliminary [18F]FDG PET study. Eur Radiol 2025; 35:2620-2633. [PMID: 39477837 DOI: 10.1007/s00330-024-11158-9]
Abstract
OBJECTIVES In this study, we propose an interpretable deep learning radiomics (IDLR) model based on [18F]FDG PET images to diagnose the clinical spectrum of Alzheimer's disease (AD) and predict the progression from mild cognitive impairment (MCI) to AD. METHODS This multicentre study included 1962 subjects from two ethnically diverse, independent cohorts (a Caucasian cohort from ADNI and an Asian cohort merged from two hospitals in China). The IDLR model involved feature extraction, feature selection, and classification/prediction. We evaluated the IDLR model's ability to distinguish between subjects with different cognitive statuses and MCI trajectories (sMCI and pMCI) and compared results with radiomic and deep learning (DL) models. A Cox model tested the IDLR signature's predictive capability for MCI-to-AD progression. Correlation analyses identified critical IDLR features and verified their clinical diagnostic value. RESULTS The IDLR model achieved the best classification results both for subjects with different cognitive statuses and for those with MCI with distinct trajectories, with an accuracy of 76.51% (95% confidence interval [CI]: 72.88%-79.60%), versus 69.13% (66.28%-73.12%) for the radiomic model and 73.89% (68.99%-77.89%) for the DL model. According to the Cox model, the hazard ratio (HR) of the IDLR signature was 1.465 (95% CI: 1.236-1.737, p < 0.001). Moreover, three crucial IDLR features were significantly different across cognitive stages and were significantly correlated with cognitive scale scores (p < 0.01). CONCLUSIONS Preliminary results demonstrated that the IDLR model based on [18F]FDG PET images enhanced accuracy in diagnosing the clinical spectrum of AD. KEY POINTS Question The study addresses the lack of interpretability in existing DL classification models for diagnosing the AD spectrum. Findings The proposed interpretable DL radiomics model, using radiomics-supervised DL features, improves on the interpretability of traditional DL models and raises classification accuracy. Clinical relevance The IDLR model interprets DL features through radiomics supervision, potentially advancing the application of DL in clinical classification tasks.
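The bracketed ranges in this abstract are 95% confidence intervals around accuracy. The paper does not state how they were computed, but a percentile bootstrap over per-subject outcomes is one common way to obtain such an interval; the sketch below illustrates that approach with hypothetical counts (149 of 198 test subjects correct), not the study's data.

```python
import random

def bootstrap_accuracy_ci(correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for accuracy from per-subject 0/1 outcomes."""
    rng = random.Random(seed)
    n = len(correct)
    accs = sorted(
        sum(correct[rng.randrange(n)] for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = accs[int(n_boot * alpha / 2)]
    hi = accs[int(n_boot * (1 - alpha / 2)) - 1]
    return sum(correct) / n, lo, hi

# Hypothetical test set: 149 of 198 subjects classified correctly (~75.3%).
outcomes = [1] * 149 + [0] * 49
acc, lo, hi = bootstrap_accuracy_ci(outcomes)
```

The interval widens as the test set shrinks, which is why multicentre reports like this one quote the CI alongside the point estimate.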
Affiliation(s)
- Jiehui Jiang
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Chenyang Li
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Jiaying Lu
  - Department of Nuclear Medicine & PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Jie Sun
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Xiaoming Sun
  - School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Jiacheng Yang
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Luyao Wang
  - Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Chuantao Zuo
  - Department of Nuclear Medicine & PET Center, Huashan Hospital, Fudan University, Shanghai, China
  - Human Phenome Institute, Fudan University, Shanghai, China
- Kuangyu Shi
  - Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
2
Vempuluru VS, Patil G, Viriyala R, Dhara KK, Kaliki S. Artificial intelligence and machine learning in ocular oncology, retinoblastoma (ArMOR). Indian J Ophthalmol 2025; 73:741-743. [PMID: 40272303 DOI: 10.4103/ijo.ijo_1768_24]
Abstract
PURPOSE To test the accuracy of a trained artificial intelligence and machine learning (AI/ML) model in the diagnosis and grouping of intraocular retinoblastoma (iRB) based on the International Classification of Retinoblastoma (ICRB) in a larger cohort. METHODS Retrospective observational study that employed AI, ML, and open computer vision techniques. RESULTS For 1266 images, the AI/ML model displayed accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 95%, 94%, 98%, 99%, and 80%, respectively, for the detection of RB. For 173 eyes, the accuracy, sensitivity, specificity, PPV, and NPV of the AI/ML model were 85%, 98%, 94%, 98%, and 94% for detecting RB. Of 173 eyes classified based on the ICRB by two independent ocular oncologists, 9 (5%) were Group A, 32 (19%) were Group B, 21 (12%) were Group C, 37 (21%) were Group D, 38 (22%) were Group E, and 36 (21%) were classified as normal. Based on the ICRB classification of 173 eyes, the AI/ML model displayed accuracy, sensitivity, specificity, PPV, and NPV of 98%, 94%, 99%, 94%, and 99% for normal; 97%, 56%, 99%, 71% and 98% for Group A; 95%, 75%, 99%, 96%, and 95% for Group B; 95%, 86%, 96%, 75%, and 98% for Group C; 92%, 76%, 96%, 85%, and 94% for Group D; and 94%, 100%, 93%, 79%, 100% for Group E, respectively. CONCLUSION These observations show that expanding the image datasets, as well as testing and retesting AI models, helps identify deficiencies in the AI/ML model and improves its accuracy.
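The five figures reported throughout this abstract (accuracy, sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from 2x2 confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate among diseased
        "specificity": tn / (tn + fp),  # true-negative rate among healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for illustration (not taken from the study):
m = diagnostic_metrics(tp=95, fp=3, tn=70, fn=5)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the evaluation set, which is one reason the image-level and eye-level numbers above differ.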
Affiliation(s)
- Vijitha S Vempuluru
  - Ocular Oncology Services, Operation Eyesight Universal Institute for Eye Cancer, L. V. Prasad Eye Institute, Hyderabad, Telangana, India
3
Pallavi R, Soni BL, Jha GK, Sanyal S, Fatima A, Kaliki S. Tumor heterogeneity in retinoblastoma: a literature review. Cancer Metastasis Rev 2025; 44:46. [PMID: 40259075 PMCID: PMC12011974 DOI: 10.1007/s10555-025-10263-5]
Abstract
Tumor heterogeneity, characterized by the presence of diverse cell populations within a tumor, is a key feature of the complex nature of cancer. This diversity arises from the emergence of cells with varying genomic, epigenetic, transcriptomic, and phenotypic profiles over the course of the disease. Host factors and the tumor microenvironment play crucial roles in driving both inter-patient and intra-patient heterogeneity. These diverse cell populations can exhibit different behaviors, such as varying rates of proliferation, responses to treatment, and potential for metastasis. Both inter-patient heterogeneity and intra-patient heterogeneity pose significant challenges to cancer therapeutics and management. In retinoblastoma, while heterogeneity at the clinical presentation level has been recognized for some time, recent attention has shifted towards understanding the underlying cellular heterogeneity. This review primarily focuses on retinoblastoma heterogeneity and its implications for therapeutic strategies and disease management, emphasizing the need for further research and exploration in this complex and challenging area.
Affiliation(s)
- Rani Pallavi
  - The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad, Telangana, India
  - Prof. Brien Holden Eye Research Centre, LV Prasad Eye Institute, Hyderabad, Telangana, India
- Bihari Lal Soni
  - The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad, Telangana, India
  - Prof. Brien Holden Eye Research Centre, LV Prasad Eye Institute, Hyderabad, Telangana, India
- Gaurab Kumar Jha
  - The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad, Telangana, India
  - Prof. Brien Holden Eye Research Centre, LV Prasad Eye Institute, Hyderabad, Telangana, India
- Shalini Sanyal
  - The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad, Telangana, India
  - Prof. Brien Holden Eye Research Centre, LV Prasad Eye Institute, Hyderabad, Telangana, India
- Azima Fatima
  - The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad, Telangana, India
  - Prof. Brien Holden Eye Research Centre, LV Prasad Eye Institute, Hyderabad, Telangana, India
- Swathi Kaliki
  - The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad, Telangana, India
  - Prof. Brien Holden Eye Research Centre, LV Prasad Eye Institute, Hyderabad, Telangana, India
4
Iglesias G, Menendez H, Talavera E. Improving explanations for medical X-ray diagnosis combining variational autoencoders and adversarial machine learning. Comput Biol Med 2025; 188:109857. [PMID: 39999495 DOI: 10.1016/j.compbiomed.2025.109857]
Abstract
Explainability in Medical Computer Vision is one of the most sensible applications of Artificial Intelligence in healthcare today. In this work, we propose a novel Deep Learning architecture for eXplainable Artificial Intelligence, specially designed for medical diagnosis. The proposed approach leverages the properties of Variational Autoencoders to produce linear modifications of images in a lower-dimensional embedded space and then reconstructs these modifications into non-linear explanations in the original image space. The approach is based on global and local regularisation of the latent space, which stores visual and semantic information about images. Specifically, a multi-objective genetic algorithm is designed to search for explanations, finding individuals that change the classification output of the network while producing the minimum number of changes in the image descriptor. The genetic algorithm searches for explanations without requiring any hyperparameters and uses only one individual to provide a complete explanation of the whole image. Furthermore, the explanations found by the proposed approach were compared with those of state-of-the-art eXplainable Artificial Intelligence systems, and the results show an improvement in the precision of the explanation of between 7.23 and 56.39 percentage points.
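The search this abstract describes — find a perturbation that changes the classifier's output while altering the descriptor as little as possible — can be illustrated with a much simpler single-objective stand-in for the paper's multi-objective genetic algorithm. The sketch below uses a toy binary "descriptor" and a hypothetical classifier (neither comes from the paper): it random-walks until the label flips, then greedily keeps any flipped candidate that changes fewer coordinates.

```python
import random

def evolve_counterfactual(classify, x, steps=2000, seed=1):
    """Toy single-objective stand-in for a counterfactual-search GA: random-walk
    binary toggles until the black-box label flips, then greedily keep any
    flipped candidate that changes fewer coordinates of the original instance x."""
    rng = random.Random(seed)
    target = 1 - classify(x)
    best, cur = None, list(x)
    for _ in range(steps):
        cand = list(best) if best is not None else list(cur)
        i = rng.randrange(len(cand))
        cand[i] = 1 - cand[i]                       # toggle one binary feature
        if classify(cand) == target:
            changed = sum(a != b for a, b in zip(cand, x))
            if best is None or changed < sum(a != b for a, b in zip(best, x)):
                best = cand                         # flipped with fewer edits
        elif best is None:
            cur = cand                              # keep exploring until a flip
    return best

# Hypothetical classifier: fires when at least 3 of the first 4 bits are set.
clf = lambda v: int(sum(v[:4]) >= 3)
x0 = [1, 1, 1, 1, 0, 0, 0, 0]
cf = evolve_counterfactual(clf, x0)
```

The minimal set of changed coordinates is exactly what the paper treats as the explanation: the features the model's decision actually hinges on.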
Affiliation(s)
- Guillermo Iglesias
  - Universidad Politécnica de Madrid, Calle de Alan Turin, s/n, Madrid, 28031, Spain
- Hector Menendez
  - King's College London, Strand, London, WC2R 2LS, United Kingdom
- Edgar Talavera
  - Universidad Politécnica de Madrid, Calle de Alan Turin, s/n, Madrid, 28031, Spain
5
Cruz-Abrams O, Dodds Rojas R, Abramson DH. Machine learning demonstrates clinical utility in distinguishing retinoblastoma from pseudo retinoblastoma with RetCam images. Ophthalmic Genet 2025; 46:180-185. [PMID: 39834033 DOI: 10.1080/13816810.2025.2455576]
Abstract
BACKGROUND Retinoblastoma is diagnosed and treated without biopsy, based solely on appearance (with the indirect ophthalmoscope and imaging). More than 20 benign ophthalmic disorders resemble retinoblastoma, and errors in diagnosis continue to be made worldwide. A better noninvasive method for distinguishing retinoblastoma from pseudo retinoblastoma is needed. METHODS RetCam images of retinoblastoma and pseudo retinoblastoma from the largest retinoblastoma center in the U.S. (Memorial Sloan Kettering Cancer Center, New York, NY) were used for this study. We trained several neural networks (ResNet-18, ResNet-34, ResNet-50, ResNet-101, ResNet-152, and a Vision Transformer, or ViT), using 80% of images for training, 10% for validation, and 10% for testing. RESULTS Two thousand eight hundred eighty-two RetCam images from patients with retinoblastoma at diagnosis, 1970 images of pseudo retinoblastoma at diagnosis, and 804 normal pediatric fundus images were included. The highest sensitivity (98.6%) was obtained with a ResNet-101 model, as were the highest accuracy and F1 scores of 97.3% and 97.7%. The highest specificity (97.0%) and precision (97.0%) were attained with a ResNet-152 model. CONCLUSION Our machine learning algorithm successfully distinguished retinoblastoma from pseudo retinoblastoma with high specificity and sensitivity and, if implemented worldwide, could prevent hundreds of eyes from being incorrectly removed each year.
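The 80/10/10 partition described in METHODS can be sketched as a seeded shuffle-and-slice. This is illustrative only — the authors' actual splitting code is not given — and the filenames are invented placeholders:

```python
import random

def split_80_10_10(items, seed=42):
    """Shuffle deterministically, then slice into 80% train / 10% val / 10% test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 2882 + 1970 + 804 = 5656 images, mirroring the cohort sizes in the abstract.
paths = [f"img_{i:04d}.png" for i in range(5656)]
train, val, test = split_80_10_10(paths)
```

Fixing the seed makes the split reproducible, so every architecture compared (the ResNets and the ViT) is evaluated on the same held-out images.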
Affiliation(s)
- Owen Cruz-Abrams
  - Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, US
- David H Abramson
  - Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, US
6
An S, Teo K, McConnell MV, Marshall J, Galloway C, Squirrell D. AI explainability in oculomics: How it works, its role in establishing trust, and what still needs to be addressed. Prog Retin Eye Res 2025; 106:101352. [PMID: 40086660 DOI: 10.1016/j.preteyeres.2025.101352]
Abstract
Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that are now capable of predicting a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. As the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: Explainable AI and Interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are therefore what the clinician must rely on if they are to understand how the algorithm works and whether its predictions are reliable. The iAI tools that developers use can be delineated into two broad categories: Intrinsic methods that improve transparency through architectural changes and post-hoc methods that explain trained models via external algorithms. Currently post-hoc methods, class activation maps in particular, are far more widely used than other techniques but they have their limitations especially when applied to oculomics AI models. Aimed at clinicians, we examine how the key iAI methods work, what they are designed to do and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and reassure clinicians that the results issued are reliable.
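As a concrete example of the post-hoc family this review discusses, occlusion mapping is one of the simplest: mask each region of the input, re-run the black-box model, and treat the score drop as that region's saliency. A toy sketch, where a brightness average stands in for the CNN (everything here is illustrative, not from the review):

```python
def occlusion_map(predict, image, patch=2):
    """Post-hoc saliency sketch: zero out each patch, re-run the black-box model,
    and record the score drop as the saliency of every pixel in that patch."""
    h, w = len(image), len(image[0])
    base = predict(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = [row[:] for row in image]
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    occluded[di][dj] = 0.0          # mask this patch
            drop = base - predict(occluded)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Toy "model": mean brightness of the top-left 2x2 corner stands in for a CNN.
score = lambda img: sum(img[i][j] for i in range(2) for j in range(2)) / 4.0
heat = occlusion_map(score, [[1.0] * 4 for _ in range(4)])
```

Like the class activation maps the review critiques, such a heat map says *where* the model looked, not *why* — which is precisely the limitation the authors flag for oculomics models built on poorly characterised retinal biomarkers.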
Affiliation(s)
- Songyang An
  - School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- Kelvin Teo
  - Singapore Eye Research Institute, The Academia, 20 College Road Discovery Tower Level 6, 169856, Singapore; Singapore National University, Singapore
- Michael V McConnell
  - Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, CA, USA; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- John Marshall
  - Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Christopher Galloway
  - Department of Business and Communication, Massey University, East Precinct Albany Expressway, SH17, Albany, Auckland, 0632, New Zealand
- David Squirrell
  - Department of Ophthalmology, University of the Sunshine Coast, Queensland, Australia; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
7
Gill SS, Ponniah HS, Giersztein S, Anantharaj RM, Namireddy SR, Killilea J, Ramsay D, Salih A, Thavarajasingam A, Scurtu D, Jankovic D, Russo S, Kramer A, Thavarajasingam SG. The diagnostic and prognostic capability of artificial intelligence in spinal cord injury: A systematic review. Brain Spine 2025; 5:104208. [PMID: 40027293 PMCID: PMC11871462 DOI: 10.1016/j.bas.2025.104208]
Abstract
Background Artificial intelligence (AI) models have shown potential for diagnosing and prognosticating traumatic spinal cord injury (tSCI), but their clinical utility remains uncertain. Methodology The primary aim was to evaluate the performance of AI algorithms in diagnosing and prognosticating tSCI. A systematic search of seven databases identified studies evaluating AI models. PROBAST and TRIPOD tools were used to assess the quality and reporting of included studies (PROSPERO: CRD42023464722). Fourteen studies, comprising 20 models and 280,817 pooled imaging datasets, were included. Analysis was conducted in line with the SWiM guidelines. Results For prognostication, 11 studies predicted outcomes including AIS improvement (30%), mortality and ambulatory ability (20% each), and discharge or length of stay (10%). The mean AUC was 0.770 (range: 0.682-0.902), indicating moderate predictive performance. Diagnostic models utilising DTI, CT, and T2-weighted MRI with CNN-based segmentation achieved a weighted mean accuracy of 0.898 (range: 0.813-0.938), outperforming prognostic models. Conclusion AI demonstrates strong diagnostic accuracy (mean accuracy: 0.898) and moderate prognostic capability (mean AUC: 0.770) for tSCI. However, the lack of standardised frameworks and external validation limits clinical applicability. Future models should integrate multimodal data, including imaging, patient characteristics, and clinician judgment, to improve utility and alignment with clinical practice.
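The AUC values pooled in this review are rank statistics: an AUC of 0.770 means that a randomly chosen patient who had the outcome received a higher risk score than one who did not about 77% of the time. A minimal sketch of that equivalence (Mann-Whitney form, with toy scores invented for illustration):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as P(pos score > neg score); ties count 1/2 (Mann-Whitney U / (m*n))."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Toy risk scores for patients with / without the predicted outcome:
auc = auc_from_scores([0.9, 0.8, 0.6, 0.55], [0.7, 0.4, 0.3, 0.2])
```

Reading AUC this way makes the review's "moderate predictive performance" concrete: at 0.770, roughly one in four randomly drawn outcome/no-outcome pairs is ranked the wrong way round.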
Affiliation(s)
- Saran Singh Gill
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
  - Faculty of Medicine, Imperial College London, London, United Kingdom
- Hariharan Subbiah Ponniah
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
  - Faculty of Medicine, Imperial College London, London, United Kingdom
- Sho Giersztein
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
- Srikar Reddy Namireddy
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
  - Faculty of Medicine, Imperial College London, London, United Kingdom
- Joshua Killilea
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
- Daniele S. C. Ramsay
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
- Ahmed Salih
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
- Daniel Scurtu
  - Department of Neurosurgery, Universitätsmedizin Mainz, Mainz, Germany
- Dragan Jankovic
  - Department of Neurosurgery, LMU University Hospital, LMU, Munich, Germany
- Salvatore Russo
  - Imperial College Healthcare NHS Trust, London, United Kingdom
- Andreas Kramer
  - Department of Neurosurgery, LMU University Hospital, LMU, Munich, Germany
- Santhosh G. Thavarajasingam
  - Imperial Brain & Spine Initiative, Imperial College London, London, United Kingdom
  - Department of Neurosurgery, LMU University Hospital, LMU, Munich, Germany
8
Lima RV, Arruda MP, Muniz MCR, Filho HNF, Ferrerira DMR, Pereira SM. Artificial intelligence methods in diagnosis of retinoblastoma based on fundus imaging: a systematic review and meta-analysis. Graefes Arch Clin Exp Ophthalmol 2025; 263:547-553. [PMID: 39289309 DOI: 10.1007/s00417-024-06643-2]
Abstract
BACKGROUND Artificial intelligence (AI) algorithms for the detection of retinoblastoma (RB) by fundus image analysis have been proposed as a potentially effective technique to facilitate diagnosis and screening programs. However, doubts remain about the accuracy of the technique, the best type of AI for this situation, and its feasibility for everyday use. Therefore, we performed a systematic review and meta-analysis to evaluate this issue. METHODS Following PRISMA 2020 guidelines, a comprehensive search of the MEDLINE, Embase, ClinicalTrials.gov and IEEEX databases identified 494 studies whose titles and abstracts were screened for eligibility. We included diagnostic studies that evaluated the accuracy of AI in identifying retinoblastoma based on fundus imaging. Univariate and bivariate analyses were performed using the random-effects model. The study protocol was registered in PROSPERO under CRD42024499221. RESULTS Six studies with 9902 fundus images were included, of which 5944 (60%) had confirmed RB. Only one dataset used a semi-supervised machine learning (ML) based method; all other studies used supervised ML, three using architectures requiring high computational power and two using more economical models. The pooled analysis of all models showed a sensitivity of 98.2% (95% CI: 0.947-0.994), a specificity of 98.5% (95% CI: 0.916-0.998) and an AUC of 0.986 (95% CI: 0.970-0.989). Subgroup analyses comparing models with high and low computational power showed no significant difference (p = 0.824). CONCLUSIONS AI methods showed high precision in the diagnosis of RB based on fundus images, with no significant difference between high and low computational power models, suggesting the viability of their use. Validation and cost-effectiveness studies are needed in countries of different income levels. Subpopulations should also be analyzed, as AI may be useful as an initial screening tool in populations at high risk for RB, serving as a bridge to the pediatric ophthalmologist or ocular oncologist, who are scarce globally. KEY MESSAGES What is known Retinoblastoma is the most common intraocular cancer in childhood, and diagnostic delay is the main factor leading to a poor prognosis. The application of machine learning techniques offers reliable methods for screening and diagnosis of retinal diseases. What is new This meta-analysis of the diagnostic accuracy of artificial intelligence methods for diagnosing retinoblastoma from fundus images showed a sensitivity of 98.2% (95% CI: 0.947-0.994) and a specificity of 98.5% (95% CI: 0.916-0.998). There was no statistically significant difference in the diagnostic accuracy of high and low computational power models. The overall performance of supervised machine learning was better than that of unsupervised approaches, although few studies of the latter type were available.
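The pooled sensitivity here comes from a bivariate random-effects model, which is beyond a short sketch; the simpler fixed-effect, logit-scale pooling below conveys the core idea of inverse-variance weighting. The per-study counts are hypothetical, not the six included studies:

```python
import math

def pool_logit(counts):
    """Fixed-effect, inverse-variance pooling of proportions on the logit scale.
    counts: list of (events, total) per study; 0.5 continuity correction applied."""
    num = den = 0.0
    for events, total in counts:
        e, t = events + 0.5, total + 1.0      # continuity correction
        logit = math.log(e / (t - e))         # log-odds of the proportion
        var = 1.0 / e + 1.0 / (t - e)         # approximate variance of the logit
        num += logit / var                    # weight = 1 / variance
        den += 1.0 / var
    return 1.0 / (1.0 + math.exp(-num / den))  # back-transform to a proportion

# Hypothetical per-study (true positives, diseased images) counts:
pooled_sens = pool_logit([(590, 600), (980, 1000), (295, 310)])
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and stops near-perfect studies from producing degenerate variances; a true random-effects version adds a between-study variance term to each weight.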
Affiliation(s)
- Rian Vilar Lima
  - Department of Medicine, University of Fortaleza, Av. Washington Soares, 1321 - Edson Queiroz, Fortaleza - CE, Ceará, 60811-905, Brazil
- Maria Carolina Rocha Muniz
  - Department of Medicine, University of Fortaleza, Av. Washington Soares, 1321 - Edson Queiroz, Fortaleza - CE, Ceará, 60811-905, Brazil
- Helvécio Neves Feitosa Filho
  - Department of Medicine, University of Fortaleza, Av. Washington Soares, 1321 - Edson Queiroz, Fortaleza - CE, Ceará, 60811-905, Brazil
9
Hassan SU, Abdulkadir SJ, Zahid MSM, Al-Selwi SM. Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review. Comput Biol Med 2025; 185:109569. [PMID: 39705792 DOI: 10.1016/j.compbiomed.2024.109569]
Abstract
BACKGROUND The interpretability and explainability of machine learning (ML) and artificial intelligence systems are critical for generating trust in their outcomes in fields such as medicine and healthcare. Errors generated by these systems, such as inaccurate diagnoses or treatments, can have serious and even life-threatening effects on patients. Explainable Artificial Intelligence (XAI) is an increasingly significant area of research, focusing on the black-box aspect of sophisticated and difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can provide explanations for these models, raising confidence in the systems and improving trust in their predictions. Numerous works have responded to medical problems by using ML models in conjunction with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of emerging LIME techniques within healthcare domains that require more attention in the realm of XAI research. METHOD A systematic search was conducted in numerous databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) and identified 1614 peer-reviewed articles published between 2019 and 2023. RESULTS 52 articles were selected for detailed analysis, which showed a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes. CONCLUSION The findings suggest that the integration of XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, thereby potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals.
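The LIME idea this review surveys — fit a proximity-weighted linear surrogate around one instance and read off the coefficients — can be shown without the lime library. This stripped-down sketch works on on/off feature masks of a hypothetical black box; for an exactly linear black box it recovers the true coefficients:

```python
import itertools
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system Ax=b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lime_sketch(predict, n_features, kernel_width=0.75):
    """Explain predict() near the all-features-on instance: sample on/off masks,
    weight them by proximity, fit a weighted linear surrogate, return its slopes."""
    p = n_features + 1                                   # intercept + features
    rows, ys, ws = [], [], []
    for mask in itertools.product([0, 1], repeat=n_features):
        dist = n_features - sum(mask)                    # features switched off
        ws.append(math.exp(-(dist ** 2) / kernel_width ** 2))
        rows.append([1.0] + list(mask))
        ys.append(predict(mask))
    XtWX = [[sum(w * r[i] * r[j] for r, w in zip(rows, ws)) for j in range(p)]
            for i in range(p)]
    XtWy = [sum(w * r[i] * y for r, y, w in zip(rows, ys, ws)) for i in range(p)]
    return solve(XtWX, XtWy)[1:]                         # drop the intercept

# Hypothetical black box: feature 0 dominates, feature 2 is irrelevant.
coefs = lime_sketch(lambda m: 0.8 * m[0] + 0.15 * m[1], 3)
```

The real lime package samples masks rather than enumerating them and, for images, applies the masks to superpixels; the weighted-linear-surrogate core is the same.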
Affiliation(s)
- Shahab Ul Hassan
  - Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
- Said Jadid Abdulkadir
  - Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
- M Soperi Mohd Zahid
  - Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
- Safwan Mahmood Al-Selwi
  - Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
10
Buser MAD, van der Rest JK, Wijnen MHWA, de Krijger RR, van der Steeg AFW, van den Heuvel-Eibrink MM, Reismann M, Veldhoen S, Pio L, Markel M. Deep Learning and Multidisciplinary Imaging in Pediatric Surgical Oncology: A Scoping Review. Cancer Med 2025; 14:e70574. [PMID: 39812075 PMCID: PMC11733598 DOI: 10.1002/cam4.70574]
Abstract
BACKGROUND Medical images play an important role in the diagnosis and treatment of pediatric solid tumors. Radiology, pathology, and other image-based diagnostics are becoming increasingly important and advanced, which indicates a need for advanced image-processing technology such as Deep Learning (DL). AIM Our review focused on the use of DL in multidisciplinary imaging in pediatric surgical oncology. METHODS A search was conducted within three databases (PubMed, Embase, and Scopus), and 2056 articles were identified. Three separate screenings were performed, one for each identified subfield. RESULTS In total, we identified 36 articles, divided between radiology (n = 22), pathology (n = 9), and other image-based diagnostics (n = 5). Four types of tasks were identified in our review: classification, prediction, segmentation, and synthesis. General statements about the studies' performance could not be made owing to the inhomogeneity of the included studies. To implement DL in pediatric clinical practice, both technical validation and clinical validation are of utmost importance. CONCLUSION Our review provides an overview of DL research in the field of pediatric surgical oncology. The more advanced status of DL in adult oncology should be used as a guide to move the field of DL in pediatric oncology further, to keep improving the outcomes of children with cancer.
Affiliation(s)
- M. A. D. Buser
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- R. R. de Krijger
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- M. M. van den Heuvel-Eibrink
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
  - Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands
- M. Reismann
  - Department of Pediatric Surgery, Charité-Universitätsmedizin Berlin, Berlin, Germany
- S. Veldhoen
  - Department of Pediatric Radiology, Charité-Universitätsmedizin Berlin, Berlin, Germany
- L. Pio
  - Pediatric Surgery Unit, Université Paris-Saclay, Assistance Publique-Hôpitaux de Paris, Bicêtre Hospital, Le Kremlin-Bicêtre, France
- M. Markel
  - Department of Pediatric Surgery, Charité-Universitätsmedizin Berlin, Berlin, Germany
11
Xu X, Yang Y, Tan X, Zhang Z, Wang B, Yang X, Weng C, Yu R, Zhao Q, Quan S. Hepatic encephalopathy post-TIPS: Current status and prospects in predictive assessment. Comput Struct Biotechnol J 2024; 24:493-506. [PMID: 39076168 PMCID: PMC11284497 DOI: 10.1016/j.csbj.2024.07.008]
Abstract
Transjugular intrahepatic portosystemic shunt (TIPS) is an essential procedure for the treatment of portal hypertension but can result in hepatic encephalopathy (HE), a serious complication that worsens patient outcomes. Investigating predictors of HE after TIPS is essential to improve prognosis. This review analyzes risk factors and compares predictive models, weighing traditional scores such as Child-Pugh, Model for End-Stage Liver Disease (MELD), and albumin-bilirubin (ALBI) against emerging artificial intelligence (AI) techniques. While traditional scores provide initial insights into HE risk, they have limitations in dealing with clinical complexity. Advances in machine learning (ML), particularly when integrated with imaging and clinical data, offer refined assessments. These innovations suggest the potential for AI to significantly improve the prediction of post-TIPS HE. The study provides clinicians with a comprehensive overview of current prediction methods, while advocating for the integration of AI to increase the accuracy of post-TIPS HE assessments. By harnessing the power of AI, clinicians can better manage the risks associated with TIPS and tailor interventions to individual patient needs. Future research should therefore prioritize the development of advanced AI frameworks that can assimilate diverse data streams to support clinical decision-making. The goal is not only to more accurately predict HE, but also to improve overall patient care and quality of life.
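The traditional scores weighed in this review (Child-Pugh, MELD, ALBI) are closed-form formulas rather than learned models. As a point of reference, a sketch of the original UNOS MELD computation in Python (the floors, cap, and rounding follow the commonly published form of the score; verify against current OPTN policy before any clinical use):

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl):
    """Original UNOS MELD score (sketch of the commonly published form).

    Lab values below 1.0 are floored at 1.0 and creatinine is capped
    at 4.0 mg/dL; the reported score is rounded and capped at 40.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(cr)
             + 6.43)
    return min(round(score), 40)
```

For example, a patient with bilirubin 4.0 mg/dL, INR 2.0, and creatinine 3.0 mg/dL scores MELD 30 under this form; normal labs (all 1.0) give the floor score of 6.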
Affiliation(s)
- Xiaowei Xu
- Department of Gastroenterology Nursing Unit, Ward 192, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
- Yun Yang
- School of Nursing, Wenzhou Medical University, Wenzhou 325001, China
- Xinru Tan
- The First School of Medicine, School of Information and Engineering, Wenzhou Medical University, Wenzhou 325001, China
- Ziyang Zhang
- School of Clinical Medicine, Guizhou Medical University, Guiyang 550025, China
- Boxiang Wang
- The First School of Medicine, School of Information and Engineering, Wenzhou Medical University, Wenzhou 325001, China
- Xiaojie Yang
- Wenzhou Medical University Renji College, Wenzhou 325000, China
- Chujun Weng
- The Fourth Affiliated Hospital Zhejiang University School of Medicine, Yiwu 322000, China
- Rongwen Yu
- Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou 325000, China
- Qi Zhao
- School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan 114051, China
- Shichao Quan
- Department of Big Data in Health Science, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
12
Muhammad D, Bendechache M. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis. Comput Struct Biotechnol J 2024; 24:542-560. [PMID: 39252818 PMCID: PMC11382209 DOI: 10.1016/j.csbj.2024.08.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2024] [Revised: 08/07/2024] [Accepted: 08/07/2024] [Indexed: 09/11/2024] Open
Abstract
This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discussing current challenges and future research directions, and exploring evaluation metrics used to assess XAI approaches. With the growing efficiency of Machine Learning (ML) and Deep Learning (DL) in medical applications, there is a critical need for their adoption in healthcare. However, their "black-box" nature, where decisions are made without clear explanations, hinders acceptance in clinical settings where decisions have significant medicolegal consequences. Our review highlights advanced XAI methods, identifying how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges faced by these methods and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, nurturing a more transparent, trustworthy, and effective use of AI in medical settings. The insights guide both research and industry, promoting innovation and standardisation in XAI implementation in healthcare.
Affiliation(s)
- Dost Muhammad
- ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
- Malika Bendechache
- ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
13
Islam O, Assaduzzaman M, Hasan MZ. An explainable AI-based blood cell classification using optimized convolutional neural network. J Pathol Inform 2024; 15:100389. [PMID: 39161471 PMCID: PMC11332798 DOI: 10.1016/j.jpi.2024.100389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2024] [Revised: 06/16/2024] [Accepted: 06/24/2024] [Indexed: 08/21/2024] Open
Abstract
White blood cells (WBCs) are a vital component of the immune system. The efficient and precise classification of WBCs is crucial for medical professionals to diagnose diseases accurately. This study presents an enhanced convolutional neural network (CNN) for detecting blood cells, aided by image pre-processing techniques such as padding, thresholding, erosion, dilation, and masking, which minimize noise and improve feature enhancement. Additionally, performance is further enhanced by experimenting with various architectural structures and hyperparameters to optimize the proposed model. A comparative evaluation is conducted against three transfer learning models: Inception V3, MobileNetV2, and DenseNet201. The results indicate that the proposed model outperforms existing models, achieving a testing accuracy of 99.12%, precision of 99%, and F1-score of 99%. In addition, we utilized SHAP (Shapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) techniques in our study to improve the interpretability of the proposed model, providing valuable insights into how the model makes decisions. Furthermore, the proposed model has been explained using the Grad-CAM and Grad-CAM++ techniques, which are class-discriminative localization approaches, to improve trust and transparency. Grad-CAM++ performed slightly better than Grad-CAM in identifying the predicted area's location. Finally, the most efficient model has been integrated into an end-to-end (E2E) system, accessible through both web and Android platforms, for medical professionals to classify blood cells.
Affiliation(s)
- Oahidul Islam
- Dept. of EEE, Daffodil International University, Dhaka, Bangladesh
- Md Assaduzzaman
- Health Informatics Research Laboratory (HIRL), Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
14
Li X, Tian Y, Li S, Wu H, Wang T. Interpretable prediction of 30-day mortality in patients with acute pancreatitis based on machine learning and SHAP. BMC Med Inform Decis Mak 2024; 24:328. [PMID: 39501235 PMCID: PMC11539846 DOI: 10.1186/s12911-024-02741-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Accepted: 10/24/2024] [Indexed: 11/08/2024] Open
Abstract
BACKGROUND Severe acute pancreatitis (SAP) can be fatal if left unrecognized and untreated. The purpose was to develop a machine learning (ML) model for predicting the 30-day all-cause mortality risk in SAP patients and to explain the most important predictors. METHODS This research utilized six ML methods, including logistic regression (LR), k-nearest neighbors (KNN), support vector machines (SVM), naive Bayes (NB), random forests (RF), and extreme gradient boosting (XGBoost), to construct six predictive models for SAP. An extensive evaluation was conducted to determine the most effective model, and then the Shapley Additive exPlanations (SHAP) method was applied to visualize key variables. Utilizing the optimized model, stratified predictions were made for patients with SAP. Further, the study employed multivariable Cox regression analysis and Kaplan-Meier survival curves, along with subgroup analysis, to explore the relationship between the machine learning-based score and 30-day mortality. RESULTS Through LASSO regression and recursive feature elimination (RFE), 25 optimal feature variables were selected. The XGBoost model performed best, with an area under the curve (AUC) of 0.881, a sensitivity of 0.5714, a specificity of 0.9651, and an F1 score of 0.64. The six most important feature variables were the use of vasopressors, a high Charlson comorbidity index, low blood oxygen saturation, a history of malignant tumor, hyperglycemia, and a high APSIII score. Based on the optimal threshold of 0.62, patients were divided into high- and low-risk groups, and the 30-day survival rate in the high-risk group decreased significantly. Cox regression analysis further confirmed the positive correlation between high-risk scores and 30-day mortality. In the subgroup analysis, the model showed good risk stratification ability in patients of different genders, with or without renal replacement therapy, and with or without a history of malignant tumor, but it was not effective in the peripheral vascular disease subgroup. CONCLUSIONS The XGBoost model effectively predicts the severity of SAP, serving as a valuable tool for clinicians to identify SAP early.
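SHAP, used above to visualize key variables, approximates Shapley values from cooperative game theory. For a handful of features the exact quantity can be computed by brute force over all feature coalitions, which makes the definition concrete (a toy sketch with an illustrative linear model and mean-imputed "absent" features, not the study's pipeline; libraries like shap use far faster tree-specific algorithms):

```python
import itertools
import math

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    f        : model taking a list of feature values
    x        : the instance to explain
    baseline : reference values (e.g. feature means) used for absent features
    """
    n = len(x)

    def v(S):
        # Coalition value: features in S take their value from x,
        # the rest are held at the baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                w = (math.factorial(r) * math.factorial(n - r - 1)
                     / math.factorial(n))
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi
```

For a linear model with a zero baseline the attributions recover the weighted inputs, and by the efficiency property they always sum to f(x) minus f(baseline).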
Affiliation(s)
- Xiaojing Li
- Department of Emergency, the Eighth Affiliated Hospital of Sun Yat-sen University, Shenzhen, 518033, China
- Yueqin Tian
- Department of Rehabilitation Medicine, The Third Affiliated Hospital, Sun Yat-sen University, No. 600, Tianhe Road, Guangzhou, 510630, Guangdong, China
- Shuangmei Li
- Department of Emergency, the Eighth Affiliated Hospital of Sun Yat-sen University, Shenzhen, 518033, China
- Haidong Wu
- Department of Emergency, the Eighth Affiliated Hospital of Sun Yat-sen University, Shenzhen, 518033, China.
- Tong Wang
- Department of Emergency, the Eighth Affiliated Hospital of Sun Yat-sen University, Shenzhen, 518033, China.
15
Gašperlin Stepančič K, Ramovš A, Ramovš J, Košir A. A novel explainable machine learning-based healthy ageing scale. BMC Med Inform Decis Mak 2024; 24:317. [PMID: 39472925 PMCID: PMC11520378 DOI: 10.1186/s12911-024-02714-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2024] [Accepted: 10/08/2024] [Indexed: 11/02/2024] Open
Abstract
BACKGROUND Ageing is one of the most important challenges in our society. Evaluating how one is ageing is important in many aspects, from giving personalized recommendations to providing insight for long-term care eligibility. Machine learning can be utilized for that purpose; however, user reservations towards "black-box" predictions call for increased transparency and explainability of results. This study aimed to explore the potential of developing a machine learning-based healthy ageing scale that provides explainable results that could be trusted and understood by informal carers. METHODS In this study, we used data from 696 older adults collected via personal field interviews as part of independent research. Exploratory factor analysis was used to find candidate healthy ageing aspects. For visualization of key aspects, a web annotation application was developed. Key aspects were selected by gerontologists, who later used the web annotation application to evaluate healthy ageing for each older adult on a Likert scale. Logistic Regression, Decision Tree Classifier, Random Forest, KNN, SVM, and XGBoost were used for multi-class machine learning. AUC OvO, AUC OvR, F1, Precision, and Recall were used for evaluation. Finally, SHAP was applied to the best model's predictions to make them explainable. RESULTS The experimental results show that human annotations of healthy ageing could be modelled using machine learning, where among several algorithms XGBoost showed superior performance. The use of XGBoost resulted in 0.92 macro-averaged AUC OvO and 0.76 macro-averaged F1. SHAP was applied to generate local explanations for predictions, showing how each feature influences the prediction. CONCLUSION The resulting explainable predictions make a step toward practical scale implementation into decision support systems. The development of such a decision support system incorporating an explainable model could reduce user reluctance towards the utilization of AI in healthcare and provide explainable and trusted insights to informal carers or healthcare providers as a basis for tangible actions to improve ageing. Furthermore, the cooperation with gerontology specialists throughout the process ensured that expert knowledge was integrated into the model.
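The macro-averaged AUC OvO reported above averages binary AUCs over class pairs. The binary building block is equivalent to the Mann-Whitney statistic, the probability that a randomly chosen positive is ranked above a randomly chosen negative, which can be sketched directly (a generic helper, not the study's code):

```python
def roc_auc(scores, labels):
    """Binary ROC AUC via the Mann-Whitney U statistic.

    Counts, over all positive/negative pairs, how often the positive
    outranks the negative (ties count half).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

A one-vs-one macro average simply applies this to every ordered pair of classes, restricted to samples of those two classes, and averages the results.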
Affiliation(s)
- Ana Ramovš
- Anton Trstenjak Institute of Gerontology and Intergenerational Relations, Resljeva cesta 7, 1000, Ljubljana, Slovenia
- Jože Ramovš
- Anton Trstenjak Institute of Gerontology and Intergenerational Relations, Resljeva cesta 7, 1000, Ljubljana, Slovenia
- Andrej Košir
- Laboratory for user-adapted communications and ambient intelligence, Faculty of Electrical Engineering, Tržaška cesta 25, 1000, Ljubljana, Slovenia.
16
Vempuluru VS, Viriyala R, Ayyagari V, Bakal K, Bhamidipati P, Dhara KK, Ferenczy SR, Shields CL, Kaliki S. Artificial Intelligence and Machine Learning in Ocular Oncology, Retinoblastoma (ArMOR): Experience with a Multiracial Cohort. Cancers (Basel) 2024; 16:3516. [PMID: 39456609 PMCID: PMC11506485 DOI: 10.3390/cancers16203516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2024] [Revised: 10/04/2024] [Accepted: 10/08/2024] [Indexed: 10/28/2024] Open
Abstract
Background: The color variation in fundus images arising from differences in melanin concentration across races can affect the accuracy of artificial intelligence and machine learning (AI/ML) models. Hence, we studied the performance of our AI model (with proven efficacy in an Asian-Indian cohort) in a multiracial cohort for detecting and classifying intraocular retinoblastoma (iRB). Methods: Retrospective observational study. Results: Of 210 eyes, 153 (73%) belonged to White, 37 (18%) to African American, 9 (4%) to Asian, and 6 (3%) to Hispanic patients, based on the U.S. Office of Management and Budget's Statistical Policy Directive No. 15, and 5 (2%) had no reported race. Of the 2473 images of the 210 eyes, 427 had no tumor and 2046 had iRB. After training the AI model based on race, the sensitivity and specificity for detection of RB in the 2473 images were 93% and 96%, respectively. The sensitivity and specificity of the AI model were 74% and 100% for group A; 88% and 96% for group B; 88% and 100% for group C; 73% and 98% for group D; and 100% and 92% for group E, respectively. Conclusions: AI models built on a single race do not work well for other races. When retrained for different races, our model exhibited high sensitivity and specificity in detecting and classifying RB.
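The per-group sensitivity and specificity figures above reduce to simple confusion-matrix counts; for reference, a minimal helper (a generic illustration, not the authors' implementation):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (TP rate) and specificity (TN rate) from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```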
Affiliation(s)
- Vijitha S. Vempuluru
- The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad 500034, India; (V.S.V.); (R.V.); (V.A.); (K.B.)
- Rajiv Viriyala
- The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad 500034, India; (V.S.V.); (R.V.); (V.A.); (K.B.)
- Virinchi Ayyagari
- The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad 500034, India; (V.S.V.); (R.V.); (V.A.); (K.B.)
- Komal Bakal
- The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad 500034, India; (V.S.V.); (R.V.); (V.A.); (K.B.)
- Sandor R. Ferenczy
- Ocular Oncology Service, Wills Eye Hospital, Thomas Jefferson University, 840 Walnut Street, 14th Floor, Philadelphia, PA 19107, USA; (S.R.F.); (C.L.S.)
- Carol L. Shields
- Ocular Oncology Service, Wills Eye Hospital, Thomas Jefferson University, 840 Walnut Street, 14th Floor, Philadelphia, PA 19107, USA; (S.R.F.); (C.L.S.)
- Swathi Kaliki
- The Operation Eyesight Universal Institute for Eye Cancer, LV Prasad Eye Institute, Hyderabad 500034, India; (V.S.V.); (R.V.); (V.A.); (K.B.)
17
Guo Y, Li S, Na R, Guo L, Huo C, Zhu L, Shi C, Na R, Gu M, Zhang W. Comparative Transcriptome Analysis of Bovine, Porcine, and Sheep Muscle Using Interpretable Machine Learning Models. Animals (Basel) 2024; 14:2947. [PMID: 39457877 PMCID: PMC11506101 DOI: 10.3390/ani14202947] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2024] [Revised: 10/08/2024] [Accepted: 10/10/2024] [Indexed: 10/28/2024] Open
Abstract
The growth and development of muscle tissue play a pivotal role in the economic value and quality of meat in agricultural animals, garnering close attention from breeders and researchers. The quality and palatability of muscle tissue directly determine the market competitiveness of meat products and the satisfaction of consumers. Therefore, a profound understanding and management of muscle growth is essential for enhancing the overall economic efficiency and product quality of the meat industry. Despite this, systematic research on muscle development-related genes across different species remains limited. This study addresses this gap through extensive cross-species muscle transcriptome analysis combined with interpretable machine learning models. Utilizing a comprehensive dataset of 275 publicly available transcriptomes derived from porcine, bovine, and ovine muscle tissues, encompassing samples from ten distinct muscle types such as the semimembranosus and longissimus dorsi, this study analyzes 113 porcine, 94 bovine, and 68 ovine specimens. We employed nine machine learning models, including Support Vector Classifier (SVC) and Support Vector Machine (SVM). Applying the SHapley Additive exPlanations (SHAP) method, we analyzed the muscle transcriptome data of cattle, pigs, and sheep. The optimal model, adaptive boosting (AdaBoost), identified key genes potentially influencing muscle growth and development across the three species, termed SHAP genes. Among these, 41 genes (including NANOG, ADAMTS8, LHX3, and TLR9) were consistently expressed in all three species and designated as homologous genes. Specific candidate genes were SLC47A1, IGSF1, IRF4, EIF3F, CGAS, ZSWIM9, RROB1, and ABHD18 for cattle; DRP2 and COL12A1 for pigs; and only COL10A1 for sheep.
Through the analysis of SHAP genes utilizing Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, relevant pathways such as ether lipid metabolism, cortisol synthesis and secretion, and calcium signaling pathways have been identified, revealing their pivotal roles in muscle growth and development.
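KEGG pathway analyses of the kind described above typically score over-representation of a gene set in a pathway with a hypergeometric (one-sided Fisher) test. A minimal sketch of the upper-tail p-value, with illustrative gene counts (not figures from the study):

```python
from math import comb

def enrichment_p(k, n, K, N):
    """Hypergeometric upper tail P(X >= k).

    k : pathway hits observed in the gene set of interest
    n : size of the gene set of interest
    K : pathway genes in the background
    N : total background genes
    """
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)
```

A small p-value indicates the gene set contains more pathway members than expected by chance against the chosen background.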
Affiliation(s)
- Yaqiang Guo
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Inner Mongolia Engineering Research Center of Genomic Big Data for Agriculture, Hohhot 010010, China
- Shuai Li
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Rigela Na
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Lili Guo
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Chenxi Huo
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Lin Zhu
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Caixia Shi
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Risu Na
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Mingjuan Gu
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
- Wenguang Zhang
- College of Animal Science, Inner Mongolia Agricultural University, Hohhot 010010, China; (Y.G.); (S.L.); (R.N.); (L.G.); (C.H.); (L.Z.); (C.S.); (R.N.)
18
Wangweera C, Zanini P. Comparison review of image classification techniques for early diagnosis of diabetic retinopathy. Biomed Phys Eng Express 2024; 10:062001. [PMID: 39173657 DOI: 10.1088/2057-1976/ad7267] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2024] [Accepted: 08/22/2024] [Indexed: 08/24/2024]
Abstract
Diabetic retinopathy (DR) is one of the leading causes of vision loss in adults and is one of the detrimental side effects of the mass prevalence of Diabetes Mellitus (DM). It is crucial to have an efficient screening method for early diagnosis of DR to prevent vision loss. This paper compares and analyzes various Machine Learning (ML) techniques, from traditional ML to advanced Deep Learning models. We compared and analyzed the efficacy of Convolutional Neural Networks (CNNs), Capsule Networks (CapsNet), K-Nearest Neighbor (KNN), Support Vector Machine (SVM), decision trees, and Random Forests. This paper also considers determining factors in the evaluation, including contrast enhancement, noise reduction, and grayscaling. We analyze recent research studies and compare methodologies and metrics, including accuracy, precision, sensitivity, and specificity. The findings highlight the advanced performance of Deep Learning (DL) models, with CapsNet achieving a remarkable accuracy of up to 97.98% and a high precision rate, outperforming traditional ML methods. The Contrast Limited Adaptive Histogram Equalization (CLAHE) preprocessing technique substantially enhanced model efficiency. Each ML method's computational requirements are also considered. While most advanced deep learning methods performed better according to the metrics, they are more computationally complex, requiring more resources and data input. We also discuss how relatively straightforward datasets such as MESSIDOR can contribute to high reported performance, and note the lack of consistency regarding benchmark datasets across papers in the field. Using DL models facilitates accurate early detection for DR screening, can potentially reduce the risk of vision loss, and improves the accessibility and cost-efficiency of eye screening.
Further research is recommended to extend our findings by building models with public datasets, experimenting with ensembles of DL and traditional ML models, and considering testing high-performing models like CapsNet.
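CLAHE, highlighted above as a key preprocessing step, applies histogram equalization on local tiles with a contrast clip limit. Its global ancestor, plain histogram equalization, shows the core remapping in a few lines (a pure-Python sketch for 8-bit grayscale images, omitting CLAHE's tiling and clipping; in practice OpenCV's createCLAHE is typically used):

```python
def equalize(img, levels=256):
    """Global histogram equalization: remap intensities through the CDF
    so the output spreads across the full dynamic range.

    img is a list of rows of integer pixel values in [0, levels).
    """
    flat = [p for row in img for p in row]
    # Build the intensity histogram and its cumulative sum (CDF).
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)

    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in img]
```

On a low-contrast patch whose values cluster in a narrow band, the remapped output stretches to span the full 0-255 range (constant images, where the denominator vanishes, need a guard in real code).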
Affiliation(s)
- Plinio Zanini
- Center of Engineering, Modeling and Applied Social Science, Federal University of ABC (UFABC), Santo André, Brazil
19
Niu S, Dong R, Jiang G, Zhang Y. Identification of diagnostic signature and immune microenvironment subtypes of venous thromboembolism. Cytokine 2024; 181:156685. [PMID: 38945040 DOI: 10.1016/j.cyto.2024.156685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2024] [Revised: 06/20/2024] [Accepted: 06/24/2024] [Indexed: 07/02/2024]
Abstract
The close link between immunity and the pathogenesis of venous thromboembolism (VTE) has been recognized, but not fully elucidated. The current study was designed to identify an immune microenvironment-related signature and subtypes in VTE using explainable machine learning. We first observed an alteration of the immune microenvironment in VTE patients and identified eight key immune cells involved in VTE. Then PTPN6, ITGB2, CR2, FPR2, MMP9, and ISG15 were determined to be key immune microenvironment-related genes, which could divide VTE patients into two subtypes with different immune and metabolic characteristics. We also found that prunetin and torin-2 may be the most promising agents to treat VTE patients in Clusters 1 and 2, respectively. By comparing six machine learning models in both training and external validation sets, XGBoost was identified as the best one to predict the risk of VTE, followed by interpretation of how each immune microenvironment-related gene contributes to the model. Moreover, CR2 and FPR2 had high accuracy in distinguishing VTE from control and may act as diagnostic biomarkers of VTE; their expression was validated by qPCR. Collectively, the immune microenvironment-related genes PTPN6, ITGB2, CR2, FPR2, MMP9, and ISG15 are key genes involved in the pathogenesis of VTE. The VTE risk prediction model and immune microenvironment subtypes based on those genes might benefit prevention, diagnosis, and individualized treatment strategies in the clinical practice of VTE.
Affiliation(s)
- Shuai Niu
- Department of Vascular Surgery, the Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China; Department of Vascular Surgery, Hebei General Hospital, Shijiazhuang, Hebei, China
- Ruoyu Dong
- Department of Vascular Surgery, Hebei General Hospital, Shijiazhuang, Hebei, China
- Guangwei Jiang
- Department of Vascular Surgery, Hebei General Hospital, Shijiazhuang, Hebei, China
- Yanrong Zhang
- Department of Vascular Surgery, the Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China.
20
Huang W, Wang C, Chen J. Reply-letter to the editor. Clin Nutr 2024; 43:2283-2284. [PMID: 39138078 DOI: 10.1016/j.clnu.2024.07.046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2024] [Accepted: 07/31/2024] [Indexed: 08/15/2024]
Affiliation(s)
- Weijia Huang
- Department of Gastrointestinal Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China; Guangxi Key Laboratory of Enhanced Recovery after Surgery for Gastrointestinal Cancer, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Clinical Research Center for Enhanced Recovery after Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Zhuang Autonomous Region Engineering Research Center for Artificial Intelligence Analysis of Multimodal Tumor Images, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Congjun Wang
- Department of Gastrointestinal Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China; Guangxi Key Laboratory of Enhanced Recovery after Surgery for Gastrointestinal Cancer, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Clinical Research Center for Enhanced Recovery after Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Zhuang Autonomous Region Engineering Research Center for Artificial Intelligence Analysis of Multimodal Tumor Images, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Junqiang Chen
- Department of Gastrointestinal Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China; Guangxi Key Laboratory of Enhanced Recovery after Surgery for Gastrointestinal Cancer, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Clinical Research Center for Enhanced Recovery after Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Zhuang Autonomous Region Engineering Research Center for Artificial Intelligence Analysis of Multimodal Tumor Images, The First Affiliated Hospital of Guangxi Medical University, Nanning, China.
21
Lee SB. Development of a chest X-ray machine learning convolutional neural network model on a budget and using artificial intelligence explainability techniques to analyze patterns of machine learning inference. JAMIA Open 2024; 7:ooae035. [PMID: 38699648 PMCID: PMC11064095 DOI: 10.1093/jamiaopen/ooae035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2024] [Revised: 04/03/2024] [Accepted: 04/10/2024] [Indexed: 05/05/2024] Open
Abstract
Objective Machine learning (ML) will have a large impact on medicine, and accessibility is important. This study's model was used to explore various concepts, including how varying features of a model impacted its behavior. Materials and Methods This study built an ML model that classified chest X-rays as normal or abnormal by using ResNet50 as a base with transfer learning. A contrast enhancement mechanism was implemented to improve performance. After training with a dataset of publicly available chest radiographs, performance metrics were determined with a test set. The ResNet50 base was substituted with deeper architectures (ResNet101/152) and visualization methods were used to help determine patterns of inference. Results Performance metrics were an accuracy of 79%, recall of 69%, precision of 96%, and area under the curve of 0.9023. Accuracy improved to 82% and recall to 74% with contrast enhancement. When visualization methods were applied and the ratio of pixels used for inference was measured, deeper architectures resulted in the model using larger portions of the image for inference as compared to ResNet50. Discussion The model performed on par with many existing models despite consumer-grade hardware and smaller datasets. Individual models vary; thus, a single model's explainability may not be generalizable. Therefore, this study varied the architecture and studied patterns of inference. With deeper ResNet architectures, the model used larger portions of the image to make decisions. Conclusion An example using a custom model showed that AI (Artificial Intelligence) can be accessible on consumer-grade hardware, and it also demonstrated an example of studying themes of ML explainability by varying ResNet architectures.
Affiliation(s)
- Stephen B Lee
- Division of Infectious Diseases, Department of Medicine, College of Medicine, University of Saskatchewan, Regina, S4P 0W5, Canada
|
22
|
Kothari S, Sharma S, Shejwal S, Kazi A, D'Silva M, Karthikeyan M. An explainable AI-assisted web application in cancer drug value prediction. MethodsX 2024; 12:102696. [PMID: 38633421 PMCID: PMC11022087 DOI: 10.1016/j.mex.2024.102696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2024] [Accepted: 04/02/2024] [Indexed: 04/19/2024] Open
Abstract
In recent years, there has been increasing interest in adopting Explainable Artificial Intelligence (XAI) for healthcare. The proposed system includes:
- An XAI model for cancer drug value prediction. The model provides data that is easy to understand and explain, which is critical for medical decision-making, and it also produces accurate projections.
- A model that outperformed existing models owing to extensive training and evaluation on a large dataset of cancer medication chemical compounds.
- Insights into the causation and correlation between the dependent and independent factors in the chemical composition of the cancer cell.
While the model is evaluated on lung cancer data, the architecture offered in the proposed solution is cancer-agnostic; it may be scaled out to other cancer cell data if the properties are similar. The work presents a viable route for customizing treatments and improving patient outcomes in oncology by combining XAI with a large dataset. This research attempts to create a framework in which a user can upload a test case and receive forecasts with explanations, all in a portable PDF report.
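The entry above describes explainable predictions for a drug-value regressor without naming a specific method. One common model-agnostic technique in that family is permutation importance: perturb one feature and measure how much the model's error grows. A minimal sketch of the general idea, not the cited system's code (for determinism it uses a cyclic shift of the feature column in place of random shuffling):

```python
def permutation_importance(model, rows, targets, feature_idx):
    """Error increase when one feature column is cyclically shifted.

    `model` maps a feature list to a prediction; larger return values
    mean the feature mattered more to the model's predictions.
    """
    def mse(rs):
        return sum((model(r) - t) ** 2 for r, t in zip(rs, targets)) / len(rs)

    base = mse(rows)
    column = [r[feature_idx] for r in rows]
    shifted = column[1:] + column[:1]  # deterministic stand-in for shuffling
    perturbed = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                 for r, v in zip(rows, shifted)]
    return mse(perturbed) - base

# Toy model that only uses feature 0, so feature 1 gets zero importance:
model = lambda r: 2 * r[0]
rows, targets = [[1, 7], [2, 7], [3, 7], [4, 7]], [2, 4, 6, 8]
print(permutation_importance(model, rows, targets, 0))  # 12.0
print(permutation_importance(model, rows, targets, 1))  # 0.0
```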
Affiliation(s)
- Sonali Kothari
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Shivanandana Sharma
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Sanskruti Shejwal
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Aqsa Kazi
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Michela D'Silva
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- M. Karthikeyan
- Senior Principal Scientist, Chemical Engineering and Process Development, NCL-CSIR, Pune, India
|
23
|
Sasikala S, Arunkumar S, Shivappriya SN, Dhivyaa Sakthi SS. Role of Explainable AI in Medical Diagnostics and Healthcare: A Pilot Study on Parkinson's Speech Detection. 2024 10th International Conference on Control, Automation and Robotics (ICCAR) 2024:289-294. [DOI: 10.1109/iccar61844.2024.10569414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
Affiliation(s)
- Sasikala S
- Kumaraguru College of Technology, Department of Electronics and Communication Engineering, Coimbatore, India
- Arunkumar S
- Kumaraguru College of Technology, Department of Electronics and Communication Engineering, Coimbatore, India
- Shivappriya S N
- Electronic and Communication Engineering, Kumaraguru College of Technology, Coimbatore, India
- Dhivyaa Sakthi S S
- Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Department of Mechanical Engineering, Coimbatore, India
|
24
|
Lo ZJ, Mak MHW, Liang S, Chan YM, Goh CC, Lai T, Tan A, Thng P, Rodriguez J, Weyde T, Smit S. Development of an explainable artificial intelligence model for Asian vascular wound images. Int Wound J 2024; 21:e14565. [PMID: 38146127 PMCID: PMC10961881 DOI: 10.1111/iwj.14565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Accepted: 12/04/2023] [Indexed: 12/27/2023] Open
Abstract
Chronic wounds contribute to significant healthcare and economic burden worldwide. Wound assessment remains challenging given its complex and dynamic nature. The use of artificial intelligence (AI) and machine learning methods in wound analysis is promising. Explainable modelling can help its integration and acceptance in healthcare systems. We aim to develop an explainable AI model for analysing vascular wound images among an Asian population. Two thousand nine hundred and fifty-seven wound images from a vascular wound image registry from a tertiary institution in Singapore were utilized. The dataset was split into training, validation and test sets. Wound images were classified into four types (neuroischaemic ulcer [NIU], surgical site infections [SSI], venous leg ulcers [VLU], pressure ulcer [PU]), measured with automatic estimation of width, length and depth and segmented into 18 wound and peri-wound features. Data pre-processing was performed using oversampling and augmentation techniques. Convolutional and deep learning models were utilized for model development. The model was evaluated with accuracy, F1 score and receiver operating characteristic (ROC) curves. Explainability methods were used to interpret AI decision reasoning. A web browser application was developed to demonstrate results of the wound AI model with explainability. After development, the model was tested on additional 15 476 unlabelled images to evaluate effectiveness. After the development on the training and validation dataset, the model performance on unseen labelled images in the test set achieved an AUROC of 0.99 for wound classification with mean accuracy of 95.9%. For wound measurements, the model achieved AUROC of 0.97 with mean accuracy of 85.0% for depth classification, and AUROC of 0.92 with mean accuracy of 87.1% for width and length determination. For wound segmentation, an AUROC of 0.95 and mean accuracy of 87.8% was achieved. 
Testing on unlabelled images, the model confidence score for wound classification was 82.8% with an explainability score of 60.6%. Confidence score was 87.6% for depth classification with 68.0% explainability score, while width and length measurement obtained 93.0% accuracy score with 76.6% explainability. Confidence score for wound segmentation was 83.9%, while explainability was 72.1%. Using explainable AI models, we have developed an algorithm and application for analysis of vascular wound images from an Asian population with accuracy and explainability. With further development, it can be utilized as a clinical decision support system and integrated into existing healthcare electronic systems.
Affiliation(s)
- Zhiwen Joseph Lo
- Department of Surgery, Woodlands Health, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Yam Meng Chan
- Department of General Surgery, Tan Tock Seng Hospital, Singapore, Singapore
- Cheng Cheng Goh
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
- Tina Lai
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
- Audrey Tan
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
- Patrick Thng
- AITIS - Advanced Intelligence and Technology Innovations, London, United Kingdom
- Jorge Rodriguez
- AITIS - Advanced Intelligence and Technology Innovations, London, United Kingdom
- Tillman Weyde
- AITIS - Advanced Intelligence and Technology Innovations, London, United Kingdom
- Sylvia Smit
- AITIS - Advanced Intelligence and Technology Innovations, London, United Kingdom
|
25
|
Alnahedh TA, Taha M. Role of Machine Learning and Artificial Intelligence in the Diagnosis and Treatment of Refractive Errors for Enhanced Eye Care: A Systematic Review. Cureus 2024; 16:e57706. [PMID: 38711688 PMCID: PMC11071623 DOI: 10.7759/cureus.57706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/04/2024] [Indexed: 05/08/2024] Open
Abstract
A significant contributor to blindness and visual impairment globally is uncorrected refractive error. To plan effective interventions, eye care professionals must promptly identify people at high risk of developing myopia and monitor disease progression. Artificial intelligence (AI) and machine learning (ML) have enormous potential to improve diagnosis and treatment. This systematic review explores the current state of ML and AI applications in the diagnosis and treatment of refractive errors in optometry. A systematic review and meta-analysis of studies evaluating the diagnostic performance of AI-based tools was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. To find relevant studies on the use of ML or AI in the diagnosis or treatment of refractive errors in optometry, a thorough search was conducted in electronic databases including PubMed, Google Scholar, and Web of Science, limited to studies published between January 2015 and December 2022. The search terms used were "refractive errors," "myopia," "optometry," "machine learning," "ophthalmology," and "artificial intelligence." A total of nine studies met the inclusion criteria and were included in the final analysis. As AI technology progresses, ML is increasingly being utilized to automate clinical data processing, making formerly labor-intensive work feasible. AI models, primarily neural networks, demonstrated exceptional efficiency and performance in the analysis of vast medical data, rivaling board-certified healthcare professionals. Several studies showed that ML models could support diagnosis and clinical decision-making, and one ML algorithm predicted future refraction values in patients with myopia. AI and ML models have great potential to improve the diagnosis and treatment of refractive errors in optometry.
Affiliation(s)
- Taghreed A Alnahedh
- Optometry, King Abdullah International Medical Research Center (KAIMRC), National Guard Health Affairs, Riyadh, SAU
- Academic Affairs, King Saud Bin Abdulaziz University for Health Sciences College of Medicine, Riyadh, SAU
- Mohammed Taha
- Ophthalmology, King Saud Bin Abdulaziz University for Health Sciences College of Medicine, Riyadh, SAU
|
26
|
Xu Z, Liao H, Huang L, Chen Q, Lan W, Li S. IBPGNET: lung adenocarcinoma recurrence prediction based on neural network interpretability. Brief Bioinform 2024; 25:bbae080. [PMID: 38557672 PMCID: PMC10982951 DOI: 10.1093/bib/bbae080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 01/31/2024] [Accepted: 02/07/2024] [Indexed: 04/04/2024] Open
Abstract
Lung adenocarcinoma (LUAD) is the most common histologic subtype of lung cancer. Early-stage patients have a 30-50% probability of metastatic recurrence after surgical treatment. Here, we propose a new computational framework, Interpretable Biological Pathway Graph Neural Networks (IBPGNET), based on pathway hierarchy relationships to predict LUAD recurrence and explore the internal regulatory mechanisms of LUAD. IBPGNET can integrate different omics data efficiently and provide global interpretability. In addition, our experimental results show that IBPGNET outperforms other classification methods in 5-fold cross-validation. IBPGNET identified PSMC1 and PSMD11 as genes associated with LUAD recurrence, and their expression levels were significantly higher in LUAD cells than in normal cells. The knockdown of PSMC1 and PSMD11 in LUAD cells increased their sensitivity to afatinib and decreased cell migration, invasion and proliferation. In addition, the cells showed significantly lower EGFR expression, indicating that PSMC1 and PSMD11 may mediate therapeutic sensitivity through EGFR expression.
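IBPGNET is evaluated with 5-fold cross-validation. As a reminder of what that protocol entails, each sample is held out for testing exactly once; this is a generic sketch of the splitting step, not the authors' pipeline:

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train, test) index lists for k contiguous, near-equal folds."""
    start = 0
    for i in range(k):
        # Spread the remainder across the first n_samples % k folds.
        size = n_samples // k + (1 if i < n_samples % k else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

# Every index appears in exactly one test fold:
folds = list(k_fold_indices(23, k=5))
print([len(test) for _, test in folds])  # [5, 5, 5, 4, 4]
```

In practice the split is usually stratified or shuffled first; the invariant being illustrated is only that the k test folds partition the dataset.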
Affiliation(s)
- Zhanyu Xu
- Department of Thoracic and Cardiovascular Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Haibo Liao
- School of Computer, Electronic and Information, Guangxi University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Liuliu Huang
- Department of Thoracic and Cardiovascular Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Qingfeng Chen
- School of Computer, Electronic and Information, Guangxi University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Wei Lan
- School of Computer, Electronic and Information, Guangxi University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Shikang Li
- Department of Thoracic and Cardiovascular Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
|
27
|
Dhanalakshmi S, Maanasaa RS, Maalikaa RS, Senthil R. A review of emergent intelligent systems for the detection of Parkinson's disease. Biomed Eng Lett 2023; 13:591-612. [PMID: 37872986 PMCID: PMC10590348 DOI: 10.1007/s13534-023-00319-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 08/11/2023] [Accepted: 09/07/2023] [Indexed: 10/25/2023] Open
Abstract
Parkinson's disease (PD) is a neurodegenerative disorder affecting people worldwide. PD symptoms are divided into motor and non-motor symptoms, and timely detection of PD is crucial. These challenges can be addressed by applying artificial intelligence to diagnose PD, and many studies have proposed computer-aided diagnosis for PD detection. This systematic review, conducted per the PRISMA model, comprehensively analyzed the relevant algorithms for detecting and assessing PD in the literature from 2012 to 2023. The review focused on motor symptoms (namely handwriting dynamics, voice impairments, and gait), multimodal features, and brain observation using single-photon emission computed tomography, magnetic resonance, and electroencephalogram signals. The significant challenges are critically analyzed, and appropriate recommendations are provided. This critical discussion can help today's PD community by allowing clinicians to provide proper treatment and timely medication.
Affiliation(s)
- Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203 India
- Ramesh Sai Maanasaa
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Ramesh Sai Maalikaa
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Ramalingam Senthil
- Department of Mechanical Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
|
28
|
Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M. YOLOv5-FPN: A Robust Framework for Multi-Sized Cell Counting in Fluorescence Images. Diagnostics (Basel) 2023; 13:2280. [PMID: 37443674 DOI: 10.3390/diagnostics13132280] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 06/02/2023] [Accepted: 06/11/2023] [Indexed: 07/15/2023] Open
Abstract
Cell counting in fluorescence microscopy is an essential task in biomedical research for analyzing cellular dynamics and studying disease progression. Traditional methods for cell counting involve manual counting or threshold-based segmentation, which are time-consuming and prone to human error. Recently, deep learning-based object detection methods have shown promising results in automating cell counting tasks. However, the existing methods mainly focus on segmentation-based techniques that require a large amount of labeled data and extensive computational resources. In this paper, we propose a novel approach to detect and count multiple-size cells in a fluorescence image slide using You Only Look Once version 5 (YOLOv5) with a feature pyramid network (FPN). Our proposed method can efficiently detect multiple cells with different sizes in a single image, eliminating the need for pixel-level segmentation. We show that our method outperforms state-of-the-art segmentation-based approaches in terms of accuracy and computational efficiency. The experimental results on publicly available datasets demonstrate that our proposed approach achieves an average precision of 0.8 and a processing time of 43.9 ms per image. Our approach addresses the research gap in the literature by providing a more efficient and accurate method for cell counting in fluorescence microscopy that requires less computational resources and labeled data.
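Detection-based counting, as described above, reduces the task to counting predicted boxes and, for evaluation, matching them to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of that matching step follows; it illustrates the general technique, not the paper's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def count_true_positives(preds, truths, thresh=0.5):
    """Greedily match each predicted box to one unused ground-truth box."""
    used, tp = set(), 0
    for p in preds:
        best_iou, best_j = 0.0, None
        for j, t in enumerate(truths):
            score = iou(p, t)
            if j not in used and score > best_iou:
                best_iou, best_j = score, j
        if best_j is not None and best_iou >= thresh:
            used.add(best_j)
            tp += 1
    return tp
```

The predicted cell count is simply the number of boxes surviving confidence thresholding; precision at a given IoU threshold is then the true-positive count divided by that number.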
Affiliation(s)
- Bader Aldughayfiq
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Farzeen Ashfaq
- School of Computer Science (SCS), Taylor's University, Subang Jaya 47500, Malaysia
- N Z Jhanjhi
- School of Computer Science (SCS), Taylor's University, Subang Jaya 47500, Malaysia
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
|