1. Mahapatra C. Recent advances in medical gas sensing with artificial intelligence-enabled technology. Med Gas Res 2025;15:318-326. PMID: 39829167; PMCID: PMC11918459; DOI: 10.4103/mgr.medgasres-d-24-00113.
Abstract
Recent advancements in artificial intelligence-enabled medical gas sensing have led to enhanced accuracy, safety, and efficiency in healthcare. Medical gases, including oxygen, nitrous oxide, and carbon dioxide, are essential for various treatments but pose health risks if improperly managed. This review highlights the integration of artificial intelligence in medical gas sensing, enhancing traditional sensors through advanced data processing, pattern recognition, and real-time monitoring capabilities. Artificial intelligence improves the ability to detect harmful gas levels, enabling immediate intervention to prevent adverse health effects. Moreover, developments in nanotechnology have resulted in advanced materials, such as metal oxides and carbon-based nanomaterials, which increase sensitivity and selectivity. These innovations, combined with artificial intelligence, support continuous patient monitoring and predictive diagnostics, paving the way for future breakthroughs in medical care.
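The monitoring idea the review describes, comparing incoming sensor readings against a recent baseline and flagging harmful excursions for immediate intervention, can be sketched in a few lines. The function name, window size, and threshold factor below are illustrative assumptions, not anything specified in the article:

```python
from collections import deque
from statistics import mean

def detect_spikes(readings, window=4, factor=1.5):
    """Return indices of readings exceeding `factor` times the mean of
    the preceding `window` readings (a hypothetical alert rule)."""
    buf = deque(maxlen=window)  # rolling baseline of recent readings
    alerts = []
    for i, r in enumerate(readings):
        # only flag once a full baseline window has accumulated
        if len(buf) == window and r > factor * mean(buf):
            alerts.append(i)
        buf.append(r)
    return alerts
```

In practice an AI-enabled sensor would replace the fixed factor with a learned model, but the rolling-baseline structure is the same.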
2. Bani-Hani T, Wedyan M, Al-Fodeh R, Shuqeir R, Al Jundi S, Tewari N. Artificial intelligence model for application in dental traumatology. Eur Arch Paediatr Dent 2025. PMID: 40448917; DOI: 10.1007/s40368-025-01063-0.
Abstract
BACKGROUND In recent years, healthcare systems have witnessed tremendous advances in diagnostic tools and technologies. The advent of artificial intelligence (AI) has enabled a paradigm shift in the health sciences, particularly in medicine. In dentistry, AI has scarcely been used across the various disciplines, with no application yet in dental traumatology. This study proposes a deep-learning, convolutional neural network (CNN)-based model for the detection and classification of dental fractures. METHODS Plain periapical radiographs of injured teeth were retrieved from patients' records and annotated by two dentists trained in dental traumatology. The teeth were categorised into four groups: uncomplicated crown fractures, complicated crown fractures, crown-root fractures and root fractures. Data augmentation was performed to strengthen the dataset, and the images were divided into training (80%) and test (20%) sets. The CNN-based classification model was implemented in Python, and cross-validation was applied. RESULTS A total of 72 plain periapical radiographs showing 108 fractured teeth were collected. The model achieved high accuracy in differentiating uncomplicated crown fractures from complicated ones (96.0%), from crown-root fractures (99.1%) and from root fractures (98.7%). Complicated injuries were distinguished from crown-root fractures and from root fractures with accuracies of 96.3% and 97.2%, respectively. The model's overall accuracy across the four classes was 78.7%. CONCLUSION The proposed model showed excellent performance in the classification of dental fractures. The application of AI in paediatric dentistry, particularly in dental trauma, is innovative and highly relevant to current trends in healthcare technology. Future research should expand the model to a larger dataset covering the various types of injuries. Such models can be a great asset to less-experienced dentists in making accurate diagnoses and timely decisions, and future models employing panoramic radiographs could also assist medical practitioners in emergency services.
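The preprocessing pipeline the authors describe (augmentation plus an 80/20 train/test split) can be sketched with NumPy. The flip transform and function names here are illustrative; the study does not specify its exact augmentation operations:

```python
import numpy as np

def augment(images):
    """Double a stack of 2D images by adding horizontal flips
    (one simple augmentation of the kind described)."""
    flipped = images[:, :, ::-1]  # reverse each image's columns
    return np.concatenate([images, flipped], axis=0)

def train_test_split(x, y, test_frac=0.2, seed=0):
    """Shuffle and split paired arrays into training and test sets (80/20)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_test = int(len(x) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    return x[train], y[train], x[test], y[test]
```

The augmented arrays would then feed a CNN classifier; cross-validation repeats the split over several folds.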
Affiliation(s)
- T Bani-Hani: Division of Pediatric Dentistry, Preventive Dentistry Department, Faculty of Dentistry, Jordan University of Science and Technology, P.O. Box 3030, Irbid, 22110, Jordan
- M Wedyan: Department of Computer Sciences, Faculty of Information Technology and Computer Sciences, Yarmouk University, Irbid, 21163, Jordan
- R Al-Fodeh: Department of Prosthodontics, Faculty of Dentistry, Jordan University of Science and Technology, Irbid, 22110, Jordan
- R Shuqeir: Pediatric Dentistry, Jordan University of Science and Technology, Irbid, 22110, Jordan
- S Al Jundi: Division of Pediatric Dentistry, Preventive Dentistry Department, Faculty of Dentistry, Jordan University of Science and Technology, P.O. Box 3030, Irbid, 22110, Jordan
- N Tewari: Pediatric and Preventive Dentistry, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, 110029, India
3. Mengistu AK, Assaye BT, Flatie AB, Mossie Z. Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence. BMC Med Imaging 2025;25:183. PMID: 40419983; PMCID: PMC12105205; DOI: 10.1186/s12880-025-01709-x.
Abstract
BACKGROUND Microcephaly and macrocephaly are abnormal congenital markers associated with developmental and neurological deficits, so early ultrasound imaging is medically imperative. However, in resource-limited countries such as Ethiopia, shortages of trained personnel and diagnostic machines prevent accurate and continuous diagnosis. OBJECTIVE This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. METHODS Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9 to November 30, 2024. Several preprocessing techniques were applied, including augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures in the ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was then compared with that of industry experts. Evaluation metrics included accuracy, precision, recall, the F1 score, and the Dice coefficient. RESULTS The study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with industry experts, the model achieved accuracies of 92.5% and 91.2% for the biparietal diameter (BPD) and head circumference (HC) measurements, respectively. CONCLUSION Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand datasets to improve generalizability. If adopted, these technologies could strengthen prenatal care delivery. CLINICAL TRIAL NUMBER Not applicable.
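The Dice coefficient the study reports (0.97) is a standard overlap measure between a predicted segmentation mask and a reference mask; a minimal NumPy sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), 1.0 for identical non-empty masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0
```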
Affiliation(s)
- Abraham Keffale Mengistu: Department of Health Informatics, College of Medicine and Health Science, Debre Markos University, Debre Markos, Ethiopia
- Bayou Tilahun Assaye: Department of Health Informatics, College of Medicine and Health Science, Debre Markos University, Debre Markos, Ethiopia
- Addisu Baye Flatie: Department of Health Informatics, College of Medicine and Health Science, Debre Markos University, Debre Markos, Ethiopia
- Zewdie Mossie: Department of Information Technology, Institute of Technology, Debre Markos University, Debre Markos, Ethiopia
4. Pereira CP, Correia M, Augusto D, Coutinho F, Salvado Silva F, Santos R. Forensic sex classification by convolutional neural network approach by VGG16 model: accuracy, precision and sensitivity. Int J Legal Med 2025;139:1381-1393. PMID: 39853362; DOI: 10.1007/s00414-025-03416-2.
Abstract
INTRODUCTION In the reconstructive phase of medico-legal human identification, sex estimation is crucial to rebuilding the biological profile and can be applied both in identifying victims of mass disasters and in the autopsy room. Given the inherent subjectivity of traditional methods, artificial intelligence, specifically convolutional neural networks (CNNs), may present a competitive option. OBJECTIVES This study evaluates the reliability of the VGG16 model as an accurate forensic sex prediction algorithm and its performance on orthopantomograms (OPGs). MATERIALS AND METHODS The study included 1050 OPGs from patients at the Santa Maria Local Health Unit Stomatology Department. Using Python, the OPGs were pre-processed and resized, and similar copies were created using data augmentation. The model was evaluated for precision, sensitivity, F1-score and accuracy, and heatmaps were created. RESULTS AND DISCUSSION Training revealed a discrepancy between the validation and training loss values. In the general test, the model was balanced between sexes, with F1-scores of 0.89. In the test by age group, contrary to expectations, the model was most accurate in the 16-20 age group (90%). Apart from the mandibular symphysis, the heatmaps showed that the model did not focus on anatomically relevant areas, possibly because no feature-extraction techniques were applied to the images. CONCLUSIONS The results indicate that CNNs can accurately classify human remains by sex for medico-legal identification, achieving an overall accuracy of 89%. However, further research is necessary to enhance the models' performance.
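The precision, sensitivity (recall), and F1-score used to evaluate the model follow standard definitions; a minimal sketch, treating one sex label as the positive class purely for illustration:

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Precision, recall and F1 for one class of a label sequence."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```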
Affiliation(s)
- Cristiana Palmela Pereira: Centro de Estatística e Aplicações da Universidade de Lisboa (CEAUL), Faculdade de Ciências da Universidade de Lisboa, Bloco C6, Piso 4, Lisboa, 1749-016, Portugal; Grupo FORENSEMED, Centro UICOB, Faculdade de Medicina Dentária da Universidade de Lisboa, Cidade Universitária, Rua Professora Teresa Ambrósio, Lisboa, 1600-277, Portugal; Faculdade de Medicina da Universidade de Lisboa, Avenida Professor Egas Moniz, Lisboa, 1649-028, Portugal
- Mariana Correia: Grupo FORENSEMED, Centro UICOB, Faculdade de Medicina Dentária da Universidade de Lisboa, Cidade Universitária, Rua Professora Teresa Ambrósio, Lisboa, 1600-277, Portugal
- Diana Augusto: Grupo FORENSEMED, Centro UICOB, Faculdade de Medicina Dentária da Universidade de Lisboa, Cidade Universitária, Rua Professora Teresa Ambrósio, Lisboa, 1600-277, Portugal
- Francisco Coutinho: Faculdade de Medicina da Universidade de Lisboa, Avenida Professor Egas Moniz, Lisboa, 1649-028, Portugal
- Francisco Salvado Silva: Faculdade de Medicina da Universidade de Lisboa, Avenida Professor Egas Moniz, Lisboa, 1649-028, Portugal
- Rui Santos: Centro de Estatística e Aplicações da Universidade de Lisboa (CEAUL), Faculdade de Ciências da Universidade de Lisboa, Bloco C6, Piso 4, Lisboa, 1749-016, Portugal; Escola Superior de Tecnologia e Gestão, Instituto Politécnico de Leiria, Campus 2, Morro do Lena, Alto do Vieiro, Apt 4163, Edifício D, Leiria, 2411-901, Portugal
5. Liawrungrueang W, Cholamjiak W, Promsri A, Jitpakdee K, Sunpaweravong S, Kotheeranurak V, Sarasombath P. Artificial Intelligence for Cervical Spine Fracture Detection: A Systematic Review of Diagnostic Performance and Clinical Potential. Global Spine J 2025;15:2547-2558. PMID: 39800538; PMCID: PMC11726500; DOI: 10.1177/21925682251314379.
Abstract
STUDY DESIGN Systematic review. OBJECTIVE Artificial intelligence (AI) and deep learning (DL) models have recently emerged as tools to improve fracture detection, mainly through imaging modalities such as computed tomography (CT) and radiographs. This systematic review evaluates the diagnostic performance of AI and DL models in detecting cervical spine fractures and assesses their potential role in clinical practice. METHODS A systematic search of PubMed/Medline, Embase, Scopus, and Web of Science was conducted for studies published between January 2000 and July 2024. Studies that evaluated AI models for cervical spine fracture detection were included. Diagnostic performance metrics were extracted, including sensitivity, specificity, accuracy, and area under the curve. The PROBAST tool assessed bias, and PRISMA criteria were used for study selection and reporting. RESULTS Eleven studies published between 2021 and 2024 were included. AI models demonstrated variable performance, with sensitivity ranging from 54.9% to 100% and specificity from 72% to 98.6%. Models applied to CT imaging generally outperformed those applied to radiographs, with convolutional neural networks (CNNs) and advanced architectures such as MobileNetV2 and Vision Transformer (ViT) achieving the highest accuracy. However, most studies lacked external validation, raising concerns about the generalizability of their findings. CONCLUSIONS AI and DL models show significant potential for improving fracture detection, particularly in CT imaging. While these models offer high diagnostic accuracy, further validation and refinement are necessary before they can be widely integrated into clinical practice. AI should complement, rather than replace, human expertise in diagnostic workflows.
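The sensitivity and specificity figures extracted by the review follow the usual confusion-matrix definitions; a minimal sketch:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity (true-positive rate, e.g. fractures correctly flagged)
    and specificity (true-negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```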
Affiliation(s)
- Arunee Promsri: Department of Physical Therapy, School of Allied Health Sciences, University of Phayao, Phayao, Thailand
- Khanathip Jitpakdee: Department of Orthopedics, Queen Savang Vadhana Memorial Hospital, Sriracha, Chonburi, Thailand
- Sompoom Sunpaweravong: Faculty of Medicine, Chulalongkorn University, and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Vit Kotheeranurak: Department of Orthopaedics, Faculty of Medicine, Chulalongkorn University, and King Chulalongkorn Memorial Hospital, Bangkok, Thailand; Center of Excellence in Biomechanics and Innovative Spine Surgery, Chulalongkorn University, Bangkok, Thailand
- Peem Sarasombath: Department of Orthopaedics, Phramongkutklao Hospital and College of Medicine, Bangkok, Thailand
6. Pandey RK, Rathore YK. Deep learning in 3D cardiac reconstruction: a systematic review of methodologies and dataset. Med Biol Eng Comput 2025;63:1271-1287. PMID: 39753994; DOI: 10.1007/s11517-024-03273-y.
Abstract
This study presents an advanced methodology for 3D heart reconstruction using a combination of deep learning models and computational techniques, addressing critical challenges in cardiac modeling and segmentation. A multi-dataset approach was employed, including data from the UK Biobank, the MICCAI Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, and clinical datasets of congenital heart disease. Preprocessing steps involved segmentation, intensity normalization, and mesh generation, while reconstruction was performed using a blend of statistical shape modeling (SSM), graph convolutional networks (GCNs), and progressive GANs. The statistical shape models captured anatomical variation through principal component analysis (PCA), while GCNs refined the meshes derived from segmented slices. Synthetic data generated by progressive GANs enabled augmentation, particularly useful for congenital heart conditions. Reconstruction accuracy was evaluated using metrics such as the Dice similarity coefficient (DSC), Chamfer distance, and Hausdorff distance, with the proposed framework demonstrating superior anatomical precision and functional relevance compared with traditional methods. This approach highlights the potential for automated, high-resolution 3D heart reconstruction in both clinical and research settings. The results emphasize the critical role of deep learning in enhancing anatomical accuracy, particularly for rare and complex cardiac conditions. The paper offers researchers insights into integrating modern computational methods, and deep learning in particular, into cardiac imaging and 3D heart reconstruction.
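Of the reported evaluation metrics, the symmetric Chamfer distance between a reconstructed and a reference surface (each sampled as a point cloud) can be sketched as follows. This is a generic brute-force version for illustration, not the paper's implementation:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a (n×3) and b (m×3):
    mean nearest-neighbour distance from a to b plus from b to a."""
    # pairwise Euclidean distances, shape (n, m)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For large meshes a k-d tree would replace the O(n·m) distance matrix, but the metric is the same.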
Affiliation(s)
- Rajendra Kumar Pandey: Department of Computer Science and Engineering, Shri Shankaracharya Institute of Professional Management and Technology, Raipur (C.G.), India
- Yogesh Kumar Rathore: Department of Computer Science and Engineering, Shri Shankaracharya Institute of Professional Management and Technology, Raipur (C.G.), India
7. Elhanashi A, Saponara S, Zheng Q, Almutairi N, Singh Y, Kuanar S, Ali F, Unal O, Faghani S. AI-Powered Object Detection in Radiology: Current Models, Challenges, and Future Direction. J Imaging 2025;11:141. PMID: 40422999; DOI: 10.3390/jimaging11050141.
Abstract
Artificial intelligence (AI)-based object detection in radiology can assist in clinical diagnosis and treatment planning. This article examines the AI-based object detection models currently used across imaging modalities, including X-ray, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Ultrasound (US). The key convolutional neural network (CNN) models, as well as contemporary transformer and hybrid models, are analyzed for their ability to detect pathological features such as tumors, lesions, and tissue abnormalities. The review also takes a closer look at the strengths and weaknesses of these models in terms of accuracy, robustness, and speed in real clinical settings. Common issues, including limited data, annotation quality, and the interpretability of AI decisions, are discussed in detail, and the need for models that remain robust across different populations and imaging modalities is addressed. The importance of privacy and ethics in data use, as well as safety and regulation of healthcare data, is emphasized. The future potential of these models lies in their accessibility in low-resource settings, their usability in shared learning environments while preserving privacy, and improved diagnostic accuracy through multimodal learning. This review also highlights the importance of interdisciplinary collaboration among AI researchers, radiologists, and policymakers; such cooperation is essential to address current challenges and to fully realize the potential of AI-based object detection in radiology.
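Object detection models of the kind surveyed here are conventionally scored by intersection-over-union (IoU) between a predicted and a ground-truth bounding box; a minimal sketch using corner-format boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap width and height, clamped at zero when boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with a ground-truth lesion box exceeds a threshold such as 0.5.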
Affiliation(s)
- Sergio Saponara: Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Qinghe Zheng: School of Intelligence Engineering, Shandong Management University, Jinan 250100, China
- Nawal Almutairi: Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh 145111, Saudi Arabia
- Yashbir Singh: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Shiba Kuanar: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Farzana Ali: Department of Molecular and Medical Pharmacology, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Orhan Unal: Departments of Radiology and Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, Madison, WI 53705, USA
8. Skattebøl L, Nygaard GO, Leonardsen EH, Kaufmann T, Moridi T, Stawiarz L, Ouellette R, Ineichen BV, Ferreira D, Muehlboeck JS, Beyer MK, Sowa P, Manouchehrinia A, Westman E, Olsson T, Celius EG, Hillert J, Kockum I, Harbo HF, Piehl F, Granberg T, Westlye LT, Høgestøl EA. Brain age in multiple sclerosis: a study with deep learning and traditional machine learning. Brain Commun 2025;7:fcaf152. PMID: 40337466; PMCID: PMC12056726; DOI: 10.1093/braincomms/fcaf152.
Abstract
'Brain age' is a numerical estimate of the biological age of the brain and an overall measure of neurodegeneration, regardless of disease type. In multiple sclerosis, accelerated brain ageing has been linked to disability accrual. Artificial intelligence has emerged as a promising tool for assessing and quantifying the impact of neurodegenerative diseases. Despite the existence of numerous AI models, there is a noticeable lack of comparative imaging data for traditional machine learning versus deep learning in conditions such as multiple sclerosis. A retrospective observational study was initiated to analyse clinical and MRI data (4584 MRIs) from various scanners in a large longitudinal cohort (n = 1516) of people with multiple sclerosis collected from two institutions (Karolinska Institute and Oslo University Hospital) using a uniform data post-processing pipeline. We conducted a comparative assessment of brain age using a simple fully convolutional deep learning network and a well-established traditional machine learning model. The study was primarily aimed at validating the deep learning brain age model in multiple sclerosis. The correlation between estimated brain age and chronological age was stronger for the deep learning estimates (r = 0.90, P < 0.001) than for the traditional machine learning estimates (r = 0.75, P < 0.001). An increase in brain age was significantly associated with higher expanded disability status scale scores (traditional machine learning: t = 5.3, P < 0.001; deep learning: t = 3.7, P < 0.001) and longer disease duration (traditional machine learning: t = 6.5, P < 0.001; deep learning: t = 5.8, P < 0.001). No significant inter-model difference in clinical correlation or effect measure was found, but traditional machine learning-derived brain age estimates differed significantly between several scanners. Our study suggests that deep learning-derived brain age is significantly associated with clinical disability, performs as well as traditional machine learning-derived measures, and may counteract scanner variability.
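The study's central quantities, the brain-age gap (estimated minus chronological age) and the correlation between the two ages, can be sketched with NumPy. The function name is illustrative and not from the paper:

```python
import numpy as np

def brain_age_gap(predicted, chronological):
    """Return the per-subject brain-age gap (predicted minus chronological
    age) and the Pearson correlation between the two age estimates."""
    predicted = np.asarray(predicted, float)
    chronological = np.asarray(chronological, float)
    r = np.corrcoef(predicted, chronological)[0, 1]  # Pearson r
    return predicted - chronological, r
```

A positive mean gap in a patient group, relative to controls, is what "accelerated brain ageing" refers to.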
Affiliation(s)
- Lars Skattebøl: Department of Neurology, Oslo University Hospital, Oslo 0450, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0318, Norway
- Gro O Nygaard: Department of Neurology, Oslo University Hospital, Oslo 0450, Norway
- Esten H Leonardsen: Department of Psychology, University of Oslo, Oslo 0373, Norway; NORMENT, Division of Mental Health and Addiction, Oslo University Hospital, Oslo 0450, Norway
- Tobias Kaufmann: NORMENT, Division of Mental Health and Addiction, Oslo University Hospital, Oslo 0450, Norway; Tübingen Center for Mental Health, Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen 72074, Germany
- Thomas Moridi: Center of Neurology, Academic Specialist Center, Stockholm Health Services, Stockholm 113 65, Sweden; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden
- Leszek Stawiarz: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden
- Russel Ouellette: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden; Department of Neuroradiology, Karolinska University Hospital, Stockholm 171 77, Sweden
- Benjamin V Ineichen: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden; Department of Neuroradiology, Karolinska University Hospital, Stockholm 171 77, Sweden
- Daniel Ferreira: Division of Clinical Geriatrics, Center for Alzheimer Research, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm 171 77, Sweden
- J Sebastian Muehlboeck: Division of Clinical Geriatrics, Center for Alzheimer Research, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm 171 77, Sweden
- Mona K Beyer: Institute of Clinical Medicine, University of Oslo, Oslo 0318, Norway; Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo 0450, Norway
- Piotr Sowa: Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo 0450, Norway
- Ali Manouchehrinia: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden
- Eric Westman: Division of Clinical Geriatrics, Center for Alzheimer Research, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm 171 77, Sweden; Department of Neuroimaging, Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London SE5 8AF, UK
- Tomas Olsson: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden
- Elisabeth G Celius: Department of Neurology, Oslo University Hospital, Oslo 0450, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0318, Norway
- Jan Hillert: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden
- Ingrid Kockum: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden
- Hanne F Harbo: Department of Neurology, Oslo University Hospital, Oslo 0450, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0318, Norway
- Fredrik Piehl: Center of Neurology, Academic Specialist Center, Stockholm Health Services, Stockholm 113 65, Sweden; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden
- Tobias Granberg: Department of Clinical Neuroscience, Karolinska Institutet, Stockholm 171 77, Sweden; Department of Neuroradiology, Karolinska University Hospital, Stockholm 171 77, Sweden
- Lars T Westlye: Department of Psychology, University of Oslo, Oslo 0373, Norway; NORMENT, Division of Mental Health and Addiction, Oslo University Hospital, Oslo 0450, Norway; K.G. Jebsen Center for Neurodevelopmental Disorders, University of Oslo, Oslo 5832, Norway
- Einar A Høgestøl: Department of Neurology, Oslo University Hospital, Oslo 0450, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0318, Norway; Department of Psychology, University of Oslo, Oslo 0373, Norway
9. Mittal S, Tong A, Young S, Jha P. Artificial intelligence applications in endometriosis imaging. Abdom Radiol (NY) 2025. PMID: 40167644; DOI: 10.1007/s00261-025-04897-w.
Abstract
Artificial intelligence (AI) may have the potential to improve existing diagnostic challenges in endometriosis imaging. To better direct future research, this descriptive review summarizes the general landscape of AI applications in endometriosis imaging. Articles from PubMed were selected to represent different approaches to AI applications in endometriosis imaging. Current endometriosis imaging literature focuses on AI applications in ultrasound (US) and magnetic resonance imaging (MRI). Most studies use US data, with MRI studies being limited at present. The majority of US studies employ transvaginal ultrasound (TVUS) data and aim to detect deep endometriosis implants, adenomyosis, endometriomas, and secondary signs of endometriosis. Most MRI studies evaluate endometriosis disease diagnosis and segmentation. Some studies analyze multi-modal methods for endometriosis imaging, combining US and MRI data or using imaging data in combination with clinical data. Current literature lacks generalizability and standardization. Most studies in this review utilize small sample sizes with retrospective approaches and single-center data. Existing models only focus on narrow disease detection or diagnosis questions and lack standardized ground truth. Overall, AI applications in endometriosis imaging analysis are in their early stages, and continued research is essential to develop and enhance these models.
Affiliation(s)
- Sneha Mittal: University of Tennessee Health Science Center, Memphis, USA
10. Maleš I, Kumrić M, Huić Maleš A, Cvitković I, Šantić R, Pogorelić Z, Božić J. A Systematic Integration of Artificial Intelligence Models in Appendicitis Management: A Comprehensive Review. Diagnostics (Basel) 2025;15:866. PMID: 40218216; PMCID: PMC11988987; DOI: 10.3390/diagnostics15070866.
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming the management of acute appendicitis by enhancing diagnostic accuracy, optimizing treatment strategies, and improving patient outcomes. This study reviews AI applications across all stages of appendicitis care, from triage to postoperative management, using sources from PubMed/MEDLINE, IEEE Xplore, arXiv, Web of Science, and Scopus, covering publications up to 14 February 2025. AI models have demonstrated potential in triage, enabling rapid differentiation of appendicitis from other causes of abdominal pain. In diagnostics, ML algorithms incorporating clinical, laboratory, imaging, and demographic data have improved accuracy and reduced uncertainty. These tools also predict disease severity, aiding decisions between conservative management and surgery. Radiomics further enhances diagnostic precision by analyzing imaging data. Intraoperatively, AI applications are emerging to support real-time decision-making, assess procedural steps, and improve surgical training. Postoperatively, ML models predict complications such as abscess formation and sepsis, facilitating early interventions and personalized recovery plans. This is the first comprehensive review to examine AI's role across the entire appendicitis treatment process, including triage, diagnosis, severity prediction, intraoperative assistance, and postoperative prognosis. Despite its potential, challenges remain regarding data quality, model interpretability, ethical considerations, and clinical integration. Future efforts should focus on developing end-to-end AI-assisted workflows that enhance diagnosis, treatment, and patient outcomes while ensuring equitable access and clinician oversight.
Affiliation(s)
- Ivan Maleš
  - Department of Abdominal Surgery, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Marko Kumrić
  - Department of Pathophysiology, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
  - Laboratory for Cardiometabolic Research, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
- Andrea Huić Maleš
  - Department of Pediatrics, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Ivan Cvitković
  - Department of Anesthesiology and Intensive Care, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Roko Šantić
  - Department of Pathophysiology, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
- Zenon Pogorelić
  - Department of Surgery, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
  - Department of Pediatric Surgery, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Joško Božić
  - Department of Pathophysiology, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
  - Laboratory for Cardiometabolic Research, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
|
11
|
Harrison LM, Edison RL, Hallac RR. Artificial Intelligence Applications in Pediatric Craniofacial Surgery. Diagnostics (Basel) 2025; 15:829. [PMID: 40218180 PMCID: PMC11989140 DOI: 10.3390/diagnostics15070829] [Received: 02/25/2025] [Revised: 03/09/2025] [Accepted: 03/19/2025] [Indexed: 04/14/2025] Open Access
Abstract
Artificial intelligence is rapidly transforming pediatric craniofacial surgery by enhancing diagnostic accuracy, improving surgical precision, and optimizing postoperative care. Machine learning and deep learning models are increasingly used to analyze complex craniofacial imaging, enabling early detection of congenital anomalies such as craniosynostosis, and cleft lip and palate. AI-driven algorithms assist in preoperative planning by identifying anatomical abnormalities, predicting surgical outcomes, and guiding personalized treatment strategies. In cleft lip and palate care, AI enhances prenatal detection, severity classification, and the design of custom therapeutic devices, while also refining speech evaluation. For craniosynostosis, AI supports automated morphology classification, severity scoring, and the assessment of surgical indications, thereby promoting diagnostic consistency and predictive outcome modeling. In orthognathic surgery, AI-driven analyses, including skeletal maturity evaluation and cephalometric assessment, inform optimal timing and diagnosis. Furthermore, in cases of craniofacial microsomia and microtia, AI improves phenotypic classification and surgical planning through precise intraoperative navigation. These advancements underscore AI's transformative role in diagnostic accuracy and clinical decision-making, highlighting its potential to significantly enhance evidence-based pediatric craniofacial care.
Affiliation(s)
- Lucas M. Harrison
  - Department of Plastic Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Ragan L. Edison
  - Analytical Imaging and Modeling Center, Children’s Health Medical Center, Dallas, TX 75235, USA
- Rami R. Hallac
  - Department of Plastic Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
  - Analytical Imaging and Modeling Center, Children’s Health Medical Center, Dallas, TX 75235, USA
|
12
|
Dangi RR, Sharma A, Vageriya V. Transforming Healthcare in Low-Resource Settings With Artificial Intelligence: Recent Developments and Outcomes. Public Health Nurs 2025; 42:1017-1030. [PMID: 39629887 DOI: 10.1111/phn.13500] [Received: 06/25/2024] [Revised: 11/10/2024] [Accepted: 11/18/2024] [Indexed: 03/12/2025]
Abstract
BACKGROUND Artificial intelligence now encompasses technologies like machine learning, natural language processing, and robotics, allowing machines to undertake complex tasks traditionally done by humans. AI's application in healthcare has led to advancements in diagnostic tools, predictive analytics, and surgical precision. AIM This comprehensive review aims to explore the transformative impact of AI across diverse healthcare domains, highlighting its applications, advancements, challenges, and contributions to enhancing patient care. METHODOLOGY A comprehensive literature search was conducted across multiple databases, covering publications from 2014 to 2024. Keywords related to AI applications in healthcare were used to gather data, focusing on studies exploring AI's role in medical specialties. RESULTS AI has demonstrated substantial benefits across various fields of medicine. In cardiology, it aids in automated image interpretation, risk prediction, and the management of cardiovascular diseases. In oncology, AI enhances cancer detection, treatment planning, and personalized drug selection. Radiology benefits from improved image analysis and diagnostic accuracy, while critical care sees advancements in patient triage and resource optimization. AI's integration into pediatrics, surgery, public health, neurology, pathology, and mental health has similarly shown significant improvements in diagnostic precision, personalized treatment, and overall patient care. The implementation of AI in low-resource settings has been particularly impactful, enhancing access to advanced diagnostic tools and treatments. CONCLUSION AI is rapidly changing the healthcare industry by greatly increasing the accuracy of diagnoses, streamlining treatment plans, and improving patient outcomes across a variety of medical specializations. This review underscores AI's transformative potential, from early disease detection to personalized treatment plans, and its ability to augment healthcare delivery, particularly in resource-limited settings.
Affiliation(s)
- Ravi Rai Dangi
  - Manikaka Topawala Institute of Nursing, Charotar University of Science and Technology, Changa, Gujarat, India
- Anil Sharma
  - Manikaka Topawala Institute of Nursing, Charotar University of Science and Technology, Changa, Gujarat, India
- Vipin Vageriya
  - Manikaka Topawala Institute of Nursing, Charotar University of Science and Technology, Changa, Gujarat, India
|
13
|
Mdletshe S, Wang A. Enhancing medical imaging education: integrating computing technologies, digital image processing and artificial intelligence. J Med Radiat Sci 2025; 72:148-155. [PMID: 39508409 PMCID: PMC11909706 DOI: 10.1002/jmrs.837] [Received: 07/17/2024] [Accepted: 10/18/2024] [Indexed: 11/15/2024] Open Access
Abstract
The rapid advancement of technology has brought significant changes to various fields, including medical imaging (MI). This discussion paper explores the integration of computing technologies (e.g. Python and MATLAB), digital image processing (e.g. image enhancement, segmentation and three-dimensional reconstruction) and artificial intelligence (AI) into the undergraduate MI curriculum. By examining current educational practices, gaps and limitations that hinder the development of future-ready MI professionals are identified. A comprehensive curriculum framework is proposed, incorporating essential computational skills, advanced image processing techniques and state-of-the-art AI tools, such as large language models like ChatGPT. The proposed curriculum framework aims to improve the quality of MI education significantly and better equip students for future professional practice and challenges while enhancing diagnostic accuracy, improving workflow efficiency and preparing students for the evolving demands of the MI field.
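A flavor of the computational skills such a curriculum covers can be given with a minimal, dependency-free sketch (illustrative only, not code from the paper): linear contrast stretching as a basic image-enhancement step, followed by global-threshold segmentation.

```python
def contrast_stretch(img, lo=0, hi=255):
    """Linearly rescale pixel intensities to the [lo, hi] range."""
    mn = min(min(row) for row in img)
    mx = max(max(row) for row in img)
    if mx == mn:  # flat image: nothing to stretch
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[round(lo + (p - mn) * scale) for p in row] for row in img]

def segment(img, threshold):
    """Binary segmentation: 1 where intensity >= threshold, else 0."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]
```

In practice these operations would be vectorized with NumPy or written in MATLAB, the environments the proposed curriculum names.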
Affiliation(s)
- Sibusiso Mdletshe
  - Department of Anatomy and Medical Imaging, Faculty of Medical and Health Sciences, The University of Auckland, Auckland, New Zealand
- Alan Wang
  - Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
  - Medical Imaging Research Centre, Faculty of Medical and Health Sciences, The University of Auckland, Auckland, New Zealand
  - Centre for Co-Created Ageing Research, The University of Auckland, Auckland, New Zealand
  - Centre for Brain Research, The University of Auckland, Auckland, New Zealand
|
14
|
Mizoguchi Y, Ito T, Yamada M, Tsutsumi T. Deep learning multi-classification of middle ear diseases using synthetic tympanic images. Acta Otolaryngol 2025; 145:134-139. [PMID: 39797517 DOI: 10.1080/00016489.2024.2448829] [Received: 11/07/2024] [Revised: 12/18/2024] [Accepted: 12/19/2024] [Indexed: 01/13/2025]
Abstract
BACKGROUND Recent advances in artificial intelligence have facilitated the automatic diagnosis of middle ear diseases using endoscopic tympanic membrane imaging. AIM We aimed to develop an automated diagnostic system for middle ear diseases by applying deep learning techniques to tympanic membrane images obtained during routine clinical practice. MATERIAL AND METHODS To augment the training dataset, we explored the use of generative adversarial networks (GANs) to produce high-quality synthetic tympanic images that were subsequently added to the training data. Between 2016 and 2021, we collected 472 endoscopic images representing four tympanic membrane conditions: normal, acute otitis media, otitis media with effusion, and chronic suppurative otitis media. These images were utilized for machine learning based on the InceptionV3 model, which was pretrained on ImageNet. Additionally, 200 synthetic images generated using StyleGAN3 and considered appropriate for each disease category were incorporated for retraining. RESULTS The inclusion of synthetic images alongside real endoscopic images did not significantly improve the diagnostic accuracy compared to training solely with real images. However, when trained solely on synthetic images, the model achieved a diagnostic accuracy of approximately 70%. CONCLUSIONS AND SIGNIFICANCE Synthetic images generated by GANs have potential utility in the development of machine-learning models for medical diagnosis.
Affiliation(s)
- Yoshimaru Mizoguchi
  - Department of Otorhinolaryngology, Institute of Science Tokyo, Tokyo, Japan
  - Department of Otolaryngology and Head and Neck Surgery, Tsuchiura-Kyodo General Hospital, Ibaraki, Japan
- Taku Ito
  - Department of Otorhinolaryngology, Institute of Science Tokyo, Tokyo, Japan
- Masato Yamada
  - Department of Otolaryngology and Head and Neck Surgery, Tsuchiura-Kyodo General Hospital, Ibaraki, Japan
- Takeshi Tsutsumi
  - Department of Otorhinolaryngology, Institute of Science Tokyo, Tokyo, Japan
|
15
|
Abbas GH, Khouri E, Pouwels S. Artificial Intelligence-Based Predictive Modeling for Aortic Aneurysms. Cureus 2025; 17:e79662. [PMID: 40161150 PMCID: PMC11950341 DOI: 10.7759/cureus.79662] [Accepted: 02/25/2025] [Indexed: 04/02/2025] Open Access
Abstract
Abdominal aortic aneurysms (AAAs) remain a major global health concern because of the associated risk of rupture and death. Currently, the management of AAAs relies on clinical and imaging risk factors, which lack the precision required for patient-specific risk assessment. Over the last decade, the utilization of artificial intelligence (AI) and machine learning (ML) algorithms has transformed the process of decision-making in the field of medicine by allowing for the creation of personalized models based on the patient's characteristics. This review aims to discuss the current state and future directions of AI in the form of predictive modeling for aortic aneurysms, stressing the versatility and progression of the ML approaches in risk assessment, screening, and prognosis. We expand on the various strategies used in AI-based solutions and the differences between general and specific approaches such as supervised and unsupervised learning, deep learning, and others. Furthermore, we bring forward the problem of incorporating clinical, imaging, and genomic data into AI/ML to improve its predictiveness and applicability to clinical practice. In addition, we discuss the difficulties and prospects of turning the developed AI-based forecasting models into clinical practice, as well as the problems associated with data quality, model explainability, and legal and ethical concerns. This review aims to reveal the opportunities of AI and ML in enhancing the risk assessment and management of AAAs to shift the paradigm of cardiovascular care toward precision medicine.
Affiliation(s)
- Ghulam Husain Abbas
  - Faculty of Medicine, Ala-Too International University, Bishkek, KGZ
  - Department of Medicine, Mass General Brigham, Boston, USA
- Edmon Khouri
  - Faculty of Medicine, University of Jordan, Amman, JOR
- Sjaak Pouwels
  - Department of Surgery, Bielefeld University - Detmold Campus, Detmold, DEU
  - Department of Intensive Care Medicine, Elisabeth-Tweesteden Hospital, Tilburg, NLD
|
16
|
Al-Obeidat F, Hafez W, Rashid A, Jallo MK, Gador M, Cherrez-Ojeda I, Simancas-Racines D. Artificial intelligence for the detection of acute myeloid leukemia from microscopic blood images; a systematic review and meta-analysis. Front Big Data 2025; 7:1402926. [PMID: 39897067 PMCID: PMC11782132 DOI: 10.3389/fdata.2024.1402926] [Received: 03/18/2024] [Accepted: 12/23/2024] [Indexed: 02/04/2025] Open Access
Abstract
Background Leukemia is the 11th most prevalent type of cancer worldwide, with acute myeloid leukemia (AML) being the most frequent blood malignancy in adults. Microscopic blood tests are the most common methods for identifying leukemia subtypes. An automated optical image-processing system using artificial intelligence (AI) has recently been applied to facilitate clinical decision-making. Aim To evaluate the performance of AI-based approaches for the detection and diagnosis of AML. Methods Medical databases including PubMed, Web of Science, and Scopus were searched until December 2023. We used the "metafor" and "metagen" libraries in R to analyze the different models used in the studies. Accuracy and sensitivity were the primary outcome measures. Results Ten studies were included in our review and meta-analysis, conducted between 2016 and 2023. Most studies utilized deep-learning models, including convolutional neural networks (CNNs). The common- and random-effects models had accuracies of 1.0000 [0.9999; 1.0001] and 0.9557 [0.9312; 0.9802], respectively. The common- and random-effects models had high sensitivity values of 1.0000 and 0.8581, respectively, indicating that the machine learning models in this study can accurately detect true-positive leukemia cases. Studies showed substantial variation in accuracy and sensitivity, as reflected by the Q values and I2 statistics. Conclusion Our systematic review and meta-analysis found an overall high accuracy and sensitivity of AI models in correctly identifying true-positive AML cases. Future research should focus on unifying reporting methods and performance assessment metrics of AI-based diagnostics. Systematic review registration https://www.crd.york.ac.uk/prospero/#recordDetails, CRD42024501980.
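The common-effect and random-effects pooling described here (implemented in the study with R's metafor and metagen packages) can be sketched in Python. The DerSimonian-Laird estimator below is one standard choice for the between-study variance, shown with invented numbers, not the study's data:

```python
def pool(estimates, variances):
    """Inverse-variance meta-analysis: common (fixed) effect,
    DerSimonian-Laird random effects, and the I^2 heterogeneity statistic."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    common = sum(wi * e for wi, e in zip(w, estimates)) / sw
    # Cochran's Q measures between-study heterogeneity
    q = sum(wi * (e - common) ** 2 for wi, e in zip(w, estimates))
    k = len(estimates)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance
    wr = [1.0 / (v + tau2) for v in variances]
    random_eff = sum(wi * e for wi, e in zip(wr, estimates)) / sum(wr)
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return common, random_eff, i2
```

With heterogeneous studies, tau2 grows, the random-effects weights even out, and the two pooled estimates diverge, which is exactly the pattern the abstract reports (1.0000 common vs. 0.9557 random effects).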
Affiliation(s)
- Feras Al-Obeidat
  - College of Technological Innovation, Zayed University, Abu Dhabi, United Arab Emirates
- Wael Hafez
  - Internal Medicine Department, Medical Research and Clinical Studies Institute, The National Research Centre, Cairo, Egypt
  - NMC Royal Hospital, Abu Dhabi, United Arab Emirates
- Asrar Rashid
  - NMC Royal Hospital, Abu Dhabi, United Arab Emirates
- Mahir Khalil Jallo
  - Department of Clinical Sciences, College of Medicine, Gulf Medical University, Ajman, United Arab Emirates
- Munier Gador
  - NMC Royal Hospital, Abu Dhabi, United Arab Emirates
- Ivan Cherrez-Ojeda
  - Department of Allergy and Immunology, Universidad Espiritu Santo, Samborondon, Ecuador
  - Respiralab Research Group, Guayaquil, Ecuador
- Daniel Simancas-Racines
  - Centro de Investigación de Salud Pública y Epidemiología Clínica (CISPEC), Universidad UTE, Quito, Ecuador
|
17
|
Zuo J, Fang Y, Wang R, Liang S. High-throughput solutions in tumor organoids: from culture to drug screening. Stem Cells 2025; 43:sxae070. [PMID: 39460616 PMCID: PMC11811636 DOI: 10.1093/stmcls/sxae070] [Received: 07/19/2024] [Accepted: 10/17/2024] [Indexed: 10/28/2024]
Abstract
Tumor organoids have emerged as an ideal in vitro model for patient-derived tissues, as they recapitulate the characteristics of the source tumor tissue to a certain extent, offering the potential for personalized tumor therapy and demonstrating significant promise in pharmaceutical research and development. However, establishing and applying this model involves multiple labor-intensive and time-consuming experimental steps and lacks standardized protocols and uniform identification criteria. Thus, high-throughput solutions are essential for the widespread adoption of tumor organoid models. This review provides a comprehensive overview of current high-throughput solutions across the entire workflow of tumor organoids, from sampling and culture to drug screening. Furthermore, we explore various technologies that can control and optimize single-cell preparation, organoid culture, and drug screening with the ultimate goal of ensuring the automation and high efficiency of the culture system and identifying more effective tumor therapeutics.
Affiliation(s)
- Jianing Zuo
  - The Key Laboratory of Biomarker High Throughput Screening and Target Translation of Breast and Gastrointestinal Tumor, Affiliated Zhongshan Hospital of Dalian University, No. 6 Jiefang Street, Zhongshan, Dalian 116001, Liaoning, China
- Yanhua Fang
  - The Key Laboratory of Biomarker High Throughput Screening and Target Translation of Breast and Gastrointestinal Tumor, Affiliated Zhongshan Hospital of Dalian University, No. 6 Jiefang Street, Zhongshan, Dalian 116001, Liaoning, China
  - Liaoning Key Laboratory of Molecular Recognition and Imaging, School of Bioengineering, Dalian University of Technology, Dalian 116024, China
- Ruoyu Wang
  - The Key Laboratory of Biomarker High Throughput Screening and Target Translation of Breast and Gastrointestinal Tumor, Affiliated Zhongshan Hospital of Dalian University, No. 6 Jiefang Street, Zhongshan, Dalian 116001, Liaoning, China
- Shanshan Liang
  - The Key Laboratory of Biomarker High Throughput Screening and Target Translation of Breast and Gastrointestinal Tumor, Affiliated Zhongshan Hospital of Dalian University, No. 6 Jiefang Street, Zhongshan, Dalian 116001, Liaoning, China
|
18
|
Xie Y, Zhai Y, Lu G. Evolution of artificial intelligence in healthcare: a 30-year bibliometric study. Front Med (Lausanne) 2025; 11:1505692. [PMID: 39882522 PMCID: PMC11775008 DOI: 10.3389/fmed.2024.1505692] [Received: 10/03/2024] [Accepted: 12/31/2024] [Indexed: 01/31/2025] Open Access
Abstract
Introduction In recent years, the development of artificial intelligence (AI) technologies, including machine learning, deep learning, and large language models, has significantly supported clinical work. Concurrently, the integration of artificial intelligence with the medical field has garnered increasing attention from medical experts. This study undertakes a dynamic and longitudinal bibliometric analysis of AI publications within the healthcare sector over the past three decades to investigate the current status and trends of the fusion between medicine and artificial intelligence. Methods Following a search on the Web of Science, researchers retrieved all reviews and original articles concerning artificial intelligence in healthcare published between January 1993 and December 2023. The analysis employed Bibliometrix, Biblioshiny, and Microsoft Excel, incorporating the bibliometrix R package for data mining and analysis, and visualized the observed trends in bibliometrics. Results A total of 22,950 documents were collected in this study. From 1993 to 2023, there was a discernible upward trajectory in scientific output. The United States and China emerged as primary contributors to medical artificial intelligence research, with Harvard University leading in publication volume among institutions. Notably, emerging topics such as COVID-19 and new drug discovery have expanded rapidly in recent years. Furthermore, the top five most cited papers in 2023 were all pertinent to the theme of ChatGPT. Conclusion This study reveals a sustained explosive growth trend in AI technologies within the healthcare sector in recent years, with increasingly profound applications in medicine. Additionally, medical artificial intelligence research is dynamically evolving with the advent of new technologies. Moving forward, concerted efforts to bolster international collaboration and enhance comprehension and utilization of AI technologies are imperative for fostering novel innovations in healthcare.
Affiliation(s)
- Yaojue Xie
  - Yangjiang Bainian Yanshen Medical Technology Co., Ltd., Yangjiang, China
- Yuansheng Zhai
  - Department of Cardiology, Heart Center, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
  - NHC Key Laboratory of Assisted Circulation (Sun Yat-sen University), Guangzhou, China
- Guihua Lu
  - Department of Cardiology, Heart Center, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
  - NHC Key Laboratory of Assisted Circulation (Sun Yat-sen University), Guangzhou, China
|
19
|
Liu TYA, Chen H, Koseoglu ND, Kolchinski A, Unberath M, Correa ZM. Direct Prediction of 48 Month Survival Status in Patients with Uveal Melanoma Using Deep Learning and Digital Cytopathology Images. Cancers (Basel) 2025; 17:230. [PMID: 39858012 PMCID: PMC11763770 DOI: 10.3390/cancers17020230] [Received: 10/10/2024] [Revised: 12/24/2024] [Accepted: 12/27/2024] [Indexed: 01/27/2025] Open Access
Abstract
BACKGROUND Uveal melanoma (UM) is the most common primary intraocular malignancy in adults. The median overall survival time for patients who develop metastasis is approximately one year. In this study, we aim to leverage deep learning (DL) techniques to analyze digital cytopathology images and directly predict the 48 month survival status on a patient level. METHODS Fine-needle aspiration biopsy (FNAB) of the tumor was performed in each patient diagnosed with UM. The cell aspirate was smeared on a glass slide and stained with H&E. Each slide then underwent whole-slide scanning. Within each whole-slide image, regions of interest (ROIs) with UM cells were automatically extracted. Each ROI was converted into super pixels, and the super pixels were automatically detected, segmented and annotated as "tumor cell" or "background" using DL. Cell-level features were extracted from the segmented tumor cells. The cell-level features were aggregated into slide-level features which were learned by a fully connected layer in an artificial neural network, and the patient survival status was predicted directly from the slide-level features. The data were partitioned at the patient level (78% training and 22% testing). Our DL model was trained to perform the binary prediction of yes-versus-no survival by Month 48. The ground truth for patient survival was established via a retrospective chart review. RESULTS A total of 74 patients were included in this study (43% female; mean age at the time of diagnosis: 61.8 ± 11.6 years), and 207,260 unique ROIs were generated for model training and testing. By Month 48 after diagnosis, 18 patients (24%) died from UM metastasis. Our hold-out test set contained 16 patients, where 6 patients had passed away and 10 patients were alive at Month 48. When using a sensitivity threshold of 80% in predicting UM-specific death by Month 48, our model achieved an overall accuracy of 75%. Within the subgroup of patients who died by Month 48, our model achieved a prediction accuracy of 83%. Of note, one patient in our test set was a clinical surprise, namely death by Month 48 despite having a GEP class 1A tumor, which typically portends a good prognosis. Our model correctly predicted this clinical surprise as well. CONCLUSIONS AND SIGNIFICANCE Our DL model was able to predict the Month 48 survival status directly from digital cytopathology images obtained from FNABs of UM tumors with reasonably robust performance. This approach, if validated prospectively, could serve as an alternative survival prediction tool for patients with UM to whom GEP is not available.
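Operating at "a sensitivity threshold of 80%" corresponds to a generic procedure: pick the score cutoff that still captures the target fraction of positives, then report accuracy at that cutoff. A hedged sketch with invented scores (the study's actual model outputs are not reproduced here):

```python
import math

def threshold_for_sensitivity(scores, labels, target_sens=0.80):
    """Largest cutoff t such that classifying score >= t as positive
    achieves at least the target sensitivity on the labeled data."""
    pos_scores = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    need = math.ceil(target_sens * len(pos_scores))  # positives we must capture
    return pos_scores[need - 1]

def accuracy_at(scores, labels, t):
    """Overall accuracy when predicting positive for score >= t."""
    preds = [1 if s >= t else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

On a 16-patient test set like the one described (6 deaths, 10 survivors), accuracy at the chosen cutoff is simply the fraction of the 16 patients classified correctly.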
Affiliation(s)
- T. Y. Alvin Liu
  - Wilmer Eye Institute, School of Medicine, Johns Hopkins University, Baltimore, MD 21287, USA
- Haomin Chen
  - School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Anna Kolchinski
  - School of Medicine, Johns Hopkins University, Baltimore, MD 21287, USA
- Mathias Unberath
  - School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Zelia M. Correa
  - Ocular Oncology Service, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
|
20
|
Chetla N, Tandon M, Chang J, Sukhija K, Patel R, Sanchez R. Evaluating ChatGPT's Efficacy in Pediatric Pneumonia Detection From Chest X-Rays: Comparative Analysis of Specialized AI Models. JMIR AI 2025; 4:e67621. [PMID: 39793007 PMCID: PMC11759907 DOI: 10.2196/67621] [Received: 10/16/2024] [Revised: 11/24/2024] [Accepted: 12/04/2024] [Indexed: 01/12/2025]
Affiliation(s)
- Nitin Chetla
  - Department of Radiology, University of Virginia School of Medicine, Charlottesville, VA, United States
- Mihir Tandon
  - Department of Orthopaedics, Albany Medical College, Albany, NY, United States
- Joseph Chang
  - Department of Radiology, University of Passau, Passau, Germany
- Kunal Sukhija
  - Department of Emergency Medicine, Kaweah Health Medical Center, Visalia, CA, United States
- Romil Patel
  - Department of Radiology, University of Virginia School of Medicine, Charlottesville, VA, United States
- Ramon Sanchez
  - Department of Radiology, Children's National Hospital, Washington, DC, United States
|
21
|
Sriwatana K, Puttanawarut C, Suwan Y, Achakulvisut T. Explainable Deep Learning for Glaucomatous Visual Field Prediction: Artifact Correction Enhances Transformer Models. Transl Vis Sci Technol 2025; 14:22. [PMID: 39847375 PMCID: PMC11758932 DOI: 10.1167/tvst.14.1.22] [Received: 07/31/2024] [Accepted: 12/10/2024] [Indexed: 01/24/2025] Open Access
Abstract
Purpose The purpose of this study was to develop a deep learning approach that restores artifact-laden optical coherence tomography (OCT) scans and predicts functional loss on the 24-2 Humphrey Visual Field (HVF) test. Methods This cross-sectional, retrospective study used 1674 visual field (VF)-OCT pairs from 951 eyes for training and 429 pairs from 345 eyes for testing. Peripapillary retinal nerve fiber layer (RNFL) thickness map artifacts were corrected using a generative diffusion model. Three convolutional neural networks and 2 transformer-based models were trained on original and artifact-corrected datasets to estimate 54 sensitivity thresholds of the 24-2 HVF test. Results Predictive performances were calculated using root mean square error (RMSE) and mean absolute error (MAE), with explainability evaluated through GradCAM, attention maps, and dimensionality reduction techniques. The Distillation with No Labels (DINO) Vision Transformer (ViT) trained on artifact-corrected datasets achieved the highest accuracy (RMSE = 4.44 dB, 95% confidence interval [CI] 4.07-4.82; MAE = 3.46 dB, 95% CI 3.14-3.79) and the greatest interpretability, showing improvements of 0.15 dB in global RMSE and MAE (P < 0.05) compared to the performance on original maps. Feature maps and visualization tools indicate that artifacts compromise DINO-ViT's predictive ability but improve with artifact correction. Conclusions Combining self-supervised ViTs with generative artifact correction enhances the correlation between glaucomatous structures and functions. Translational Relevance Our approach offers a comprehensive tool for glaucoma management, facilitates the exploration of structure-function correlations in research, and underscores the importance of addressing artifacts in the clinical interpretation of OCT.
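The headline metrics are simple to state precisely: with predicted and measured sensitivities at the 54 test points of a 24-2 field (in dB), global RMSE and MAE are computed pointwise. A generic sketch, not the study's code:

```python
def rmse_mae(predicted, measured):
    """Root mean square error and mean absolute error over the
    54 sensitivity thresholds (dB) of a 24-2 visual field."""
    n = len(measured)
    errors = [p - m for p, m in zip(predicted, measured)]
    rmse = (sum(e * e for e in errors) / n) ** 0.5
    mae = sum(abs(e) for e in errors) / n
    return rmse, mae
```

Because RMSE squares each error before averaging, it penalizes occasional large pointwise misses more heavily than MAE does, which is why both are typically reported together.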
Affiliation(s)
- Kornchanok Sriwatana
  - Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand
  - Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Chanon Puttanawarut
  - Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan, Thailand
  - Department of Clinical Epidemiology and Biostatistics, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Yanin Suwan
  - Department of Ophthalmology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Titipat Achakulvisut
  - Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand
|
22
|
Ahn J, Kim B. Application of Generative Artificial Intelligence in Dyslipidemia Care. J Lipid Atheroscler 2025; 14:77-93. [PMID: 39911966 PMCID: PMC11791424 DOI: 10.12997/jla.2025.14.1.77] [Received: 09/30/2024] [Revised: 10/27/2024] [Accepted: 10/27/2024] [Indexed: 02/07/2025] Open Access
Abstract
Dyslipidemia dramatically increases the risk of cardiovascular diseases, necessitating appropriate treatment techniques. Generative AI (GenAI), an advanced AI technology that can generate diverse content by learning from vast datasets, provides promising new opportunities to address this challenge. GenAI-powered frequently-asked-question systems and chatbots offer continuous, personalized support by addressing lifestyle modifications and medication adherence, which is crucial for patients with dyslipidemia. These tools also help to promote health literacy by making information more accessible and reliable. GenAI helps healthcare providers construct clinical case scenarios, training materials, and evaluation tools, which supports professional development and evidence-based practice. Multimodal GenAI technology analyzes food images and nutritional content to deliver personalized dietary recommendations tailored to each patient's condition, improving long-term nutritional management for those with dyslipidemia. Moreover, using GenAI for image generation enhances the visual quality of educational materials for both patients and professionals, allowing healthcare providers to create real-time, customized visual aids. To apply GenAI successfully, healthcare providers must develop GenAI-related abilities, such as prompt engineering and critical evaluation of GenAI-generated data.
Affiliation(s)
- Jihyun Ahn
- Department of Internal Medicine, Korea Medical Institute, Seoul, Korea
- Bokyoung Kim
- College of Nursing, Research Institute of Nursing Innovation, Kyungpook National University, Daegu, Korea
23
Kanavos T, Birbas E, Zanos TP. A Systematic Review of the Applications of Deep Learning for the Interpretation of Positron Emission Tomography Images of Patients with Lymphoma. Cancers (Basel) 2024; 17:69. [PMID: 39796698 PMCID: PMC11719749 DOI: 10.3390/cancers17010069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2024] [Revised: 12/20/2024] [Accepted: 12/24/2024] [Indexed: 01/13/2025] Open
Abstract
Background: Positron emission tomography (PET) is a valuable tool for the assessment of lymphoma, while artificial intelligence (AI) holds promise as a reliable resource for the analysis of medical images. In this context, we systematically reviewed the applications of deep learning (DL) for the interpretation of lymphoma PET images. Methods: We searched PubMed until 11 September 2024 for studies developing DL models for the evaluation of PET images of patients with lymphoma. The risk of bias and applicability concerns were assessed using the prediction model risk of bias assessment tool (PROBAST). The articles included were categorized and presented based on the task performed by the proposed models. Our study was registered with the international prospective register of systematic reviews, PROSPERO, as CRD42024600026. Results: From 71 papers initially retrieved, 21 studies with a total of 9402 participants were ultimately included in our review. The proposed models achieved a promising performance in diverse medical tasks, namely, the detection and histological classification of lesions, the differential diagnosis of lymphoma from other conditions, the quantification of metabolic tumor volume, and the prediction of treatment response and survival with areas under the curve, F1-scores, and R2 values of up to 0.963, 87.49%, and 0.94, respectively. Discussion: The primary limitations of several studies were the small number of participants and the absence of external validation. In conclusion, the interpretation of lymphoma PET images can reliably be aided by DL models, which are not designed to replace physicians but to assist them in managing large volumes of scans through rapid and accurate calculations, alleviate their workload, and provide them with decision support tools for precise care and improved outcomes.
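The review above summarizes model quality with metrics such as AUC and F1-scores. As a minimal, purely illustrative sketch (not taken from any of the reviewed studies; the counts are invented), the F1-score is the harmonic mean of precision and recall computed from raw confusion-matrix counts:

```python
# Illustrative F1-score from raw counts; all numbers are made up.

def f1_score(tp, fp, fn):
    """Harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 true positives, 2 false positives, 2 false negatives -> close to 0.8
print(f1_score(8, 2, 2))
```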
Affiliation(s)
- Theodoros P. Zanos
- Institute of Health System Science, Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA; (T.K.); (E.B.)
24
Hosseini SM, Mohtarami SA, Shadnia S, Rahimi M, Erfan Talab Evini P, Mostafazadeh B, Memarian A, Heidarli E. Detection of Body Packs in Abdominal CT scans Through Artificial Intelligence; Developing a Machine Learning-based Model. ARCHIVES OF ACADEMIC EMERGENCY MEDICINE 2024; 13:e23. [PMID: 39958959 PMCID: PMC11829241 DOI: 10.22037/aaemj.v13i1.2479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/18/2025]
Abstract
Introduction: Identifying people who conceal illegal substances inside the body for smuggling is of considerable importance in forensic medicine and clinical toxicology. This study aimed to develop a new diagnostic method using artificial intelligence to detect body packs in real-time abdominal computed tomography (CT) scans. Methods: In this cross-sectional study, abdominal CT scan images were used to build a machine learning-based model for detecting body packs. A single-stage object detector, RetinaNet with a modified neck (the proposed model), was trained to achieve the best results. Using angled (oriented) bounding boxes in the training dataset also played an important role in improving the results. Results: A total of 888 abdominal CT scan images were studied. Our proposed Body Packs Detection (BPD) model achieved a mean average precision (mAP) of 86.6% at an intersection over union (IoU) threshold of 0.5, and a mAP of 45.6% averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05. It also obtained a recall of 58.5%, the best result among standard object detection methods such as the standard RetinaNet. Conclusion: This study employed a deep learning network to identify body packs in abdominal CT scans, highlighting the importance of accounting for object shape and variability when applying artificial intelligence in healthcare to aid medical practitioners. Nonetheless, building a tailored object detection dataset, such as one for body packs, requires careful curation by subject matter specialists to ensure successful training.
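The mAP figures above are defined in terms of IoU thresholds. As a hedged sketch of that mechanism (axis-aligned boxes only, unlike the oriented boxes the study found helpful; all coordinates are invented for illustration):

```python
# IoU between two axis-aligned boxes (x1, y1, x2, y2), and the thresholding
# step behind mAP@0.5-style scoring. Boxes here are invented examples.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def matches_at(threshold, pred, truth):
    """A detection counts as a true positive when IoU >= threshold."""
    return iou(pred, truth) >= threshold

pred, truth = (10, 10, 50, 50), (12, 12, 52, 48)
print(round(iou(pred, truth), 3))  # 0.818
```

At IoU 0.5 this prediction would count as a hit; averaging precision over thresholds from 0.5 to 0.95 penalizes such loosely aligned boxes, which is why the averaged mAP (45.6%) sits well below mAP@0.5 (86.6%).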
Affiliation(s)
- Sayed Masoud Hosseini
- Toxicological Research Center, Excellence Center of Clinical Toxicology, Department of Clinical Toxicology, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Seyed Ali Mohtarami
- Department of Computer Engineering and Information Technology, (PNU), Tehran, Iran
- Shahin Shadnia
- Toxicological Research Center, Excellence Center of Clinical Toxicology, Department of Clinical Toxicology, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mitra Rahimi
- Toxicological Research Center, Excellence Center of Clinical Toxicology, Department of Clinical Toxicology, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Peyman Erfan Talab Evini
- Toxicological Research Center, Excellence Center of Clinical Toxicology, Department of Clinical Toxicology, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Babak Mostafazadeh
- Toxicological Research Center, Excellence Center of Clinical Toxicology, Department of Clinical Toxicology, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azadeh Memarian
- Emergency Medicine, School of Medicine, Mazandaran University of Medical Sciences, Sari, Iran
- Elmira Heidarli
- Toxicological Research Center, Excellence Center of Clinical Toxicology, Department of Clinical Toxicology, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
25
Hasei J, Nakahara R, Otsuka Y, Nakamura Y, Ikuta K, Osaki S, Hironari T, Miwa S, Ohshika S, Nishimura S, Kahara N, Yoshida A, Fujiwara T, Nakata E, Kunisada T, Ozaki T. The Three-Class Annotation Method Improves the AI Detection of Early-Stage Osteosarcoma on Plain Radiographs: A Novel Approach for Rare Cancer Diagnosis. Cancers (Basel) 2024; 17:29. [PMID: 39796660 PMCID: PMC11718825 DOI: 10.3390/cancers17010029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2024] [Revised: 12/17/2024] [Accepted: 12/24/2024] [Indexed: 01/13/2025] Open
Abstract
Background/Objectives: Developing high-performance artificial intelligence (AI) models for rare diseases is challenging owing to limited data availability. This study aimed to evaluate whether a novel three-class annotation method for preparing training data could enhance AI model performance in detecting osteosarcoma on plain radiographs compared to conventional single-class annotation. Methods: We developed two annotation methods for the same dataset of 468 osteosarcoma X-rays and 378 normal radiographs: a conventional single-class annotation (1C model) and a novel three-class annotation method (3C model) that separately labeled intramedullary, cortical, and extramedullary tumor components. Both models used identical U-Net-based architectures, differing only in their annotation approaches. Performance was evaluated using an independent validation dataset. Results: Although both models achieved high diagnostic accuracy (AUC: 0.99 vs. 0.98), the 3C model demonstrated superior operational characteristics. At a standardized cutoff value of 0.2, the 3C model maintained balanced performance (sensitivity: 93.28%, specificity: 92.21%), whereas the 1C model showed compromised specificity (83.58%) despite high sensitivity (98.88%). Notably, at the 25th percentile threshold, both models showed identical false-negative rates despite significantly different cutoff values (3C: 0.661 vs. 1C: 0.985), indicating the ability of the 3C model to maintain diagnostic accuracy at substantially lower thresholds. Conclusions: This study demonstrated that anatomically informed three-class annotation can enhance AI model performance for rare disease detection without requiring additional training data. The improved stability at lower thresholds suggests that thoughtful annotation strategies can optimize the AI model training, particularly in contexts where training data are limited.
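The 1C-vs-3C comparison above hinges on how sensitivity and specificity move as the probability cutoff changes. A hedged sketch of that computation (the scores and labels below are invented, not the study's data):

```python
# Sensitivity and specificity at a chosen probability cutoff.
# Scores/labels are toy values; 1 = osteosarcoma, 0 = normal radiograph.

def sens_spec(scores, labels, cutoff):
    """Predict positive when score >= cutoff; return (sensitivity, specificity)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.7, 0.3, 0.1, 0.8, 0.2]
labels = [1, 1, 1, 0, 0, 0]
print(sens_spec(scores, labels, 0.2))
print(sens_spec(scores, labels, 0.5))
```

Lowering the cutoff trades specificity for sensitivity, which is why a model that stays balanced at a low cutoff (as the 3C model does at 0.2) is operationally attractive.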
Affiliation(s)
- Joe Hasei
- Department of Medical Information and Assistive Technology Development, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan
- Ryuichi Nakahara
- Science of Functional Recovery and Reconstruction, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan
- Yujiro Otsuka
- Department of Radiology, Juntendo University School of Medicine, Tokyo 113-8431, Japan
- Milliman, Inc., Tokyo 102-0083, Japan
- Plusman LCC, Tokyo 103-0023, Japan
- Kunihiro Ikuta
- Department of Orthopedic Surgery, Graduate School of Medicine, Nagoya University, Nagoya 464-0083, Japan
- Shuhei Osaki
- Department of Musculoskeletal Oncology and Rehabilitation, National Cancer Center Hospital, Tokyo 104-0045, Japan
- Tamiya Hironari
- Department of Musculoskeletal Oncology Service, Osaka International Cancer Institute, Osaka 541-8567, Japan
- Shinji Miwa
- Department of Orthopedic Surgery, Kanazawa University Graduate School of Medical Sciences, Ishikawa 920-8641, Japan
- Shusa Ohshika
- Department of Orthopaedic Surgery, Hirosaki University Graduate School of Medicine, Aomori 036-8563, Japan
- Shunji Nishimura
- Department of Orthopaedic Surgery, Kindai University Hospital, Osaka 589-8511, Japan
- Naoaki Kahara
- Department of Orthopedic Surgery, Mizushima Central Hospital, Kurashiki 712-8064, Japan
- Aki Yoshida
- Science of Functional Recovery and Reconstruction, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan
- Tomohiro Fujiwara
- Science of Functional Recovery and Reconstruction, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan
- Eiji Nakata
- Science of Functional Recovery and Reconstruction, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan
- Toshiyuki Kunisada
- Science of Functional Recovery and Reconstruction, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan
- Toshifumi Ozaki
- Science of Functional Recovery and Reconstruction, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan
26
Kim H, Yu I. Assessing the Diagnostic Performance of Automated Pituitary Gland Volume Measurement for Idiopathic Central Precocious Puberty. J Clin Med 2024; 14:15. [PMID: 39797098 PMCID: PMC11722546 DOI: 10.3390/jcm14010015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2024] [Revised: 12/16/2024] [Accepted: 12/19/2024] [Indexed: 01/13/2025] Open
Abstract
Background/Objectives: It is known that pituitary gland volume (PV) in idiopathic central precocious puberty (IPP) is significantly higher than in healthy children. However, most PV measurements rely on manual quantitative methods, which are time-consuming and labor-intensive. This study aimed to automatically measure the PV of patients with IPP using artificial intelligence, to accurately quantify the correlation between IPP and PV, and to improve the efficiency of diagnosing IPP. Methods: From July 2016 to February 2024, 226 patients who had been diagnosed with IPP and undergone brain MR imaging were included (117 males and 109 females; median age, 8 years; interquartile range, 7-9 years). A control group of 52 patients who had undergone brain MR imaging without symptoms of precocious puberty was also included (37 males and 15 females; median age, 8 years; interquartile range, 8-9 years). Measurement variability between manual and automatic measurements was examined (n = 57). Pituitary gland volume was measured on 1-3 mm-thick T1 sagittal images from non-enhanced brain MR imaging, analyzed with the MA-net artificial intelligence learning method. Physical characteristics (height, weight, and age) were correlated with PV, and the difference in PV between the IPP group and the control group was evaluated. Results: The intraclass correlation coefficient for agreement between manual and automatic measurement was 0.993. Confounding bias was reduced by propensity score matching (PSM). PV was positively correlated with age and body weight in the IPP group (17.4%, p = 0.009, and 14.0%, p = 0.037). The median PV was 432 mm³ in the IPP group and 380 mm³ in the control group, a significant difference of 52 mm³ (p < 0.05). Conclusions: The PV in the IPP group was significantly higher than in the control group. Automatically measuring PV along with assessing hormone levels could enable a faster and more straightforward diagnosis of IPP.
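Agreement between manual and automatic volume measurements is commonly reported as an intraclass correlation coefficient. As a hedged sketch, here is one simple one-way form, ICC(1,1); the formula choice and the toy measurement pairs are assumptions for illustration, not the study's analysis code:

```python
# One-way ICC(1,1) for two raters (e.g. manual vs. automatic volume).
# Data are invented; a real analysis would pick the ICC form deliberately.

def icc1(pairs):
    """pairs: one (rater_1, rater_2) measurement tuple per subject."""
    n, k = len(pairs), 2
    grand = sum(sum(p) for p in pairs) / (n * k)
    means = [sum(p) / k for p in pairs]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)          # between subjects
    msw = sum((x - m) ** 2 for p, m in zip(pairs, means) for x in p) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

print(round(icc1([(1, 2), (2, 2), (3, 4)]), 3))  # 0.733
```

Perfect rater agreement drives the within-subject mean square to zero and the ICC to 1, which is why the reported 0.993 indicates near-interchangeable manual and automatic measurements.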
Affiliation(s)
- Hayoun Kim
- Departments of Radiology, Eulji University Hospital, Eulji University College of Medicine, 95 Dunsanseo-ro, Seo-gu, Daejeon 35233, Republic of Korea
27
Battineni G, Chintalapudi N, Amenta F. Machine Learning Driven by Magnetic Resonance Imaging for the Classification of Alzheimer Disease Progression: Systematic Review and Meta-Analysis. JMIR Aging 2024; 7:e59370. [PMID: 39714089 PMCID: PMC11704653 DOI: 10.2196/59370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2024] [Revised: 06/12/2024] [Accepted: 09/25/2024] [Indexed: 12/24/2024] Open
Abstract
BACKGROUND To diagnose Alzheimer disease (AD), individuals are classified according to the severity of their cognitive impairment. No single specific cause of the disease has been established. OBJECTIVE The purpose of this systematic review and meta-analysis was to comprehensively assess AD prevalence across different stages using machine learning (ML) approaches. METHODS The selection of papers was conducted in 3 phases, as per PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) 2020 guidelines: identification, screening, and final inclusion. The final analysis included 24 papers that met the criteria. The selection of ML approaches for AD diagnosis was rigorously based on their relevance to the investigation. Prevalence estimates for patients with AD under 2-, 3-, 4-, and 6-stage classifications were illustrated with forest plots. RESULTS The prevalence rate for both cognitively normal (CN) and AD across 6 studies was 49.28% (95% CI 46.12%-52.45%; P=.32). The prevalence estimate for the 3 stages of cognitive impairment (CN, mild cognitive impairment, and AD) was 29.75% (95% CI 25.11%-34.84%; P<.001). Among 5 studies with 14,839 participants, the analysis of 4 stages (nondemented, moderately demented, mildly demented, and AD) found an overall prevalence of 13.13% (95% CI 3.75%-36.66%; P<.001). In addition, 4 studies involving 3819 participants estimated the prevalence of 6 stages (CN, significant memory concern, early mild cognitive impairment, mild cognitive impairment, late mild cognitive impairment, and AD), yielding a prevalence of 23.75% (95% CI 12.22%-41.12%; P<.001). CONCLUSIONS The significant heterogeneity observed across studies indicates that demographic and setting characteristics substantially influence AD prevalence estimates. This study shows how ML approaches can be used to describe AD prevalence across different stages, which provides valuable insights for future research.
Affiliation(s)
- Gopi Battineni
- Clinical Research, Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University Camerino, Camerino, Italy
- Centre for Global Health Research, Saveetha University, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Nalini Chintalapudi
- Clinical Research, Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University Camerino, Camerino, Italy
- Francesco Amenta
- Clinical Research, Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University Camerino, Camerino, Italy
28
Tiraboschi C, Parenti F, Sangalli F, Resovi A, Belotti D, Lanzarone E. Automatic Segmentation of Metastatic Livers by Means of U-Net-Based Procedures. Cancers (Basel) 2024; 16:4159. [PMID: 39766059 PMCID: PMC11674041 DOI: 10.3390/cancers16244159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2024] [Revised: 11/26/2024] [Accepted: 12/08/2024] [Indexed: 01/11/2025] Open
Abstract
Background: The liver is one of the most common sites for the spread of pancreatic ductal adenocarcinoma (PDAC) cells, with metastases present in about 80% of patients. Clinical and preclinical studies of PDAC require quantification of the liver's metastatic burden from several acquired images, which can benefit from automatic image segmentation tools. Methods: We developed three neural networks based on the U-net architecture to automatically segment the healthy liver area (HL), the metastatic liver area (MLA), and liver metastases (LM) in micro-CT images of a mouse model of PDAC with liver metastasis. Three alternative U-nets were trained for each structure to be segmented, following appropriate image preprocessing, and the one with the highest performance was then chosen and applied to each case. Results: Good performance was achieved, with accuracy of 92.6%, 88.6%, and 91.5%, specificity of 95.5%, 93.8%, and 99.9%, Dice of 71.6%, 74.4%, and 29.9%, and negative predictive value (NPV) of 97.9%, 91.5%, and 91.5% on the pilot validation set for the chosen HL, MLA, and LM networks, respectively. Conclusions: The networks provided good performance and advantages in terms of saving time and ensuring reproducibility.
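Two of the segmentation metrics reported above, Dice and negative predictive value, can be sketched on flattened binary masks. The toy masks below are invented; this is not the authors' evaluation code:

```python
# Dice overlap and NPV on flat 0/1 masks (prediction vs. ground truth).
# Masks are toy examples for illustration only.

def dice(pred, truth):
    """2*|A intersect B| / (|A| + |B|) for flat 0/1 masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def npv(pred, truth):
    """TN / (TN + FN): how often a 'background' pixel call is correct."""
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tn / (tn + fn)

pred  = [1, 1, 0, 0, 0]
truth = [1, 0, 0, 0, 1]
print(dice(pred, truth))  # 0.5
```

Because Dice normalizes by the sizes of both masks, it stays low for tiny structures such as the LM class (29.9%) even when pixel-level accuracy and NPV look high.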
Affiliation(s)
- Camilla Tiraboschi
- Department of Management, Information and Production Engineering, University of Bergamo, 24044 Dalmine, BG, Italy
- Federica Parenti
- Department of Management, Information and Production Engineering, University of Bergamo, 24044 Dalmine, BG, Italy
- Fabio Sangalli
- Department of Biomedical Engineering, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, 24126 Bergamo, BG, Italy
- Andrea Resovi
- Department of Oncology, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, 24126 Bergamo, BG, Italy
- Dorina Belotti
- Department of Oncology, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, 24126 Bergamo, BG, Italy
- Ettore Lanzarone
- Department of Management, Information and Production Engineering, University of Bergamo, 24044 Dalmine, BG, Italy
29
Abstract
Intracranial calcifications, particularly within the falx cerebri, serve as crucial diagnostic markers ranging from benign accumulations to signs of severe pathologies. The falx cerebri, a dural fold that separates the cerebral hemispheres, presents challenges in visualization due to its low contrast in standard imaging techniques. Recent advancements in artificial intelligence (AI), particularly in machine learning and deep learning, have significantly transformed radiological diagnostics. This study aims to explore the application of AI in the segmentation and detection of falx cerebri calcifications using Cone-Beam Computed Tomography (CBCT) images through a comprehensive literature review and a detailed case report. The case report presents a 59-year-old patient diagnosed with falx cerebri calcifications whose CBCT images were analyzed using a cloud-based AI platform, demonstrating effectiveness in segmenting these calcifications, although challenges persist in distinguishing these from other cranial structures. A specific search strategy was employed to search electronic databases, yielding four studies exploring AI-based segmentation of the falx cerebri. The review detailed various AI models and their accuracy across different imaging modalities in identifying and segmenting falx cerebri calcifications, also highlighting the gap in publications in this area. In conclusion, further research is needed to improve AI-driven methods for accurately identifying and measuring intracranial calcifications. Advancing AI applications in radiology, particularly for detecting falx cerebri calcifications, could significantly enhance diagnostic precision, support disease monitoring, and inform treatment planning.
Affiliation(s)
- Julien Issa
- Chair of Practical Clinical Dentistry, Department of Diagnostics, Poznan University of Medical Sciences, Bukowska 70, 60-812 Poznan, Poland
- Doctoral School, Poznań University of Medical Sciences, Bukowska 70, 60-812 Poznan, Poland
- Alexandre Chidiac
- Faculty of Medical Sciences, Poznan University of Medical Sciences, Fredry 10, 61-701 Poznan, Poland
- Paul Mozdziak
- Prestage Department of Poultry Sciences, North Carolina State University, Raleigh, NC 27695, USA
- Physiology Graduate Program, North Carolina State University, Raleigh, NC 27695, USA
- Bartosz Kempisty
- Prestage Department of Poultry Sciences, North Carolina State University, Raleigh, NC 27695, USA
- Department of Veterinary Surgery, Institute of Veterinary Medicine, Nicolaus Copernicus University in Torun, 87-100 Torun, Poland
- Department of Human Morphology and Embryology, Head of Division of Anatomy, Wrocław Medical University, 50-367 Wrocław, Poland
- Center of Assisted Reproduction, Department of Obstetrics and Gynecology, University Hospital and Masaryk University, 601 77 Brno, Czech Republic
- Barbara Dorocka-Bobkowska
- Department of Gerostomatology and Pathology of Oral Cavity, Poznan University of Medical Sciences, Bukowska 70, 60-812 Poznan, Poland
- Katarzyna Mehr
- Department of Gerostomatology and Pathology of Oral Cavity, Poznan University of Medical Sciences, Bukowska 70, 60-812 Poznan, Poland
- Marta Dyszkiewicz-Konwińska
- Chair of Practical Clinical Dentistry, Department of Diagnostics, Poznan University of Medical Sciences, Bukowska 70, 60-812 Poznan, Poland
30
Zhang C, Iqbal MFB, Iqbal I, Cheng M, Sarhan N, Awwad EM, Ghadi YY. Prognostic Modeling for Liver Cirrhosis Mortality Prediction and Real-Time Health Monitoring from Electronic Health Data. BIG DATA 2024. [PMID: 39651607 DOI: 10.1089/big.2024.0071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2024]
Abstract
Liver cirrhosis stands as a prominent contributor to mortality, impacting millions across the United States. Enabling health care providers to predict early mortality among patients with cirrhosis holds the potential to enhance treatment efficacy significantly. Our hypothesis centers on the correlation between mortality and laboratory test results along with relevant diagnoses in this patient cohort. Additionally, we posit that a deep learning model could surpass the predictive capabilities of the existing Model for End-Stage Liver Disease score. This research seeks to advance prognostic accuracy and refine approaches to address the critical challenges posed by cirrhosis-related mortality. This study evaluates the performance of an artificial neural network model for liver disease classification using various training dataset sizes. Three distinct training proportions were analyzed: 70%, 80%, and 90%. The model's efficacy was assessed using precision, recall, F1-score, accuracy, and support metrics, alongside receiver operating characteristic (ROC) and precision-recall (PR) curves. The ROC curves were quantified using the area under the curve (AUC) metric. Results indicated that performance depended on the size of the training dataset: the model trained on 80% of the data achieved the highest AUC, suggesting superior classification ability over the models trained with 70% and 90% of the data. PR analysis revealed a steep trade-off between precision and recall across all datasets, with the 80% training split again demonstrating a slightly better balance. This is indicative of the challenges faced in achieving high precision with a concurrently high recall, a common issue in imbalanced datasets such as those found in medical diagnostics.
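The ROC AUC used to compare the models above has a useful rank interpretation: it equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney form). A toy sketch with invented scores, not the study's model outputs:

```python
# Rank-based (Mann-Whitney) AUC: fraction of positive/negative pairs in
# which the positive case gets the higher score; ties count as half.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.4, 0.6, 0.2]  # invented model scores
labels = [1, 1, 0, 0]          # 1 = died, 0 = survived (toy labels)
print(auc(scores, labels))  # 0.75
```

Because this form depends only on the ranking of scores, it is insensitive to the cutoff choice, unlike the precision-recall trade-off the abstract highlights for imbalanced data.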
Affiliation(s)
- Chengping Zhang
- Mechanical and Electrical Engineering College, Hainan Vocational University of Science and Technology, Haikou, China
- Muhammad Faisal Buland Iqbal
- Key Laboratory of Intelligent Computing & Information Processing, Ministry of Education, Xiangtan University, Xiangtan, China
- Imran Iqbal
- Department of Pathology, NYU Grossman School of Medicine, New York University Langone Health, New York, USA
- Minghao Cheng
- School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, China
- Nadia Sarhan
- Department of Quantitative Analysis, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
- Emad Mahrous Awwad
- Department of Electrical Engineering, College of Engineering, King Saud University, Riyadh, Saudi Arabia
- Yazeed Yasin Ghadi
- Department of Computer Science and Software Engineering, Al Ain University, Al Ain, United Arab Emirates
31
Demirbaş AA, Üzen H, Fırat H. Spatial-attention ConvMixer architecture for classification and detection of gastrointestinal diseases using the Kvasir dataset. Health Inf Sci Syst 2024; 12:32. [PMID: 38685985 PMCID: PMC11056348 DOI: 10.1007/s13755-024-00290-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2023] [Accepted: 04/12/2024] [Indexed: 05/02/2024] Open
Abstract
Gastrointestinal (GI) disorders, encompassing conditions like cancer and Crohn's disease, pose a significant threat to public health. Endoscopic examinations have become crucial for diagnosing and treating these disorders efficiently. However, the subjective nature of manual evaluations by gastroenterologists can lead to potential errors in disease classification. In addition, the difficulty of diagnosing diseased GI tissues and the high similarity between classes make this a challenging area. Automated classification systems that use artificial intelligence to address these problems have gained traction. Automatic detection of diseases in medical images greatly aids diagnosis and reduces detection time. In this study, we propose a new architecture to support research on computer-assisted diagnosis and automated disease detection in GI diseases. This architecture, called Spatial-Attention ConvMixer (SAC), extends the patch extraction technique at the core of the ConvMixer architecture with a spatial attention mechanism (SAM). The SAM enables the network to concentrate selectively on the most informative areas, assigning importance to each spatial location within the feature maps. We employ the Kvasir dataset to assess the accuracy of classifying GI illnesses using the SAC architecture. We compare our architecture's results with the Vanilla ViT, Swin Transformer, ConvMixer, MLPMixer, ResNet50, and SqueezeNet models. Our SAC method achieves 93.37% accuracy, while the other architectures achieve 79.52%, 74.52%, 92.48%, 63.04%, 87.44%, and 85.59%, respectively. The proposed spatial attention block improves the accuracy of the ConvMixer architecture on Kvasir, outperforming state-of-the-art methods with an accuracy of 93.37%.
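The core idea of a spatial attention mechanism can be sketched without any deep learning framework: pool the feature map across channels into a per-pixel descriptor, squash it through a sigmoid, and rescale every channel by the resulting weight map. This is a toy, framework-free illustration of the general SAM idea; the learned convolution (and the specifics of the SAC block) are deliberately elided, and the feature values are invented:

```python
import math

# Toy spatial attention: per-pixel channel-mean -> sigmoid -> rescale.
# Real SAMs learn a convolution over pooled maps; that step is omitted here.

def spatial_attention(fmap):
    """fmap: C channels, each an H x W nested list of floats."""
    c, h, w = len(fmap), len(fmap[0]), len(fmap[0][0])
    weights = [[1.0 / (1.0 + math.exp(-sum(fmap[ch][i][j] for ch in range(c)) / c))
                for j in range(w)] for i in range(h)]
    out = [[[fmap[ch][i][j] * weights[i][j] for j in range(w)]
            for i in range(h)] for ch in range(c)]
    return out, weights

fmap = [[[0.0, 2.0], [4.0, -2.0]],
        [[0.0, 2.0], [4.0, -2.0]]]
out, weights = spatial_attention(fmap)
```

Each weight lies in (0, 1), so spatially "informative" locations (large pooled activations) are preserved while others are attenuated, without changing the feature map's shape.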
Affiliation(s)
- Hüseyin Üzen
- Department of Computer Engineering, Faculty of Engineering, Bingol University, Bingol, Turkey
- Hüseyin Fırat
- Department of Computer Engineering, Faculty of Engineering, Dicle University, Diyarbakır, Turkey
32
Lin N, Paul R, Guerra S, Liu Y, Doulgeris J, Shi M, Lin M, Engeberg ED, Hashemi J, Vrionis FD. The Frontiers of Smart Healthcare Systems. Healthcare (Basel) 2024; 12:2330. [PMID: 39684952 DOI: 10.3390/healthcare12232330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2024] [Revised: 11/14/2024] [Accepted: 11/15/2024] [Indexed: 12/18/2024] Open
Abstract
Artificial Intelligence (AI) is poised to revolutionize numerous aspects of human life, with healthcare among the most critical fields set to benefit from this transformation. Medicine remains one of the most challenging, expensive, and impactful sectors, with challenges such as information retrieval, data organization, diagnostic accuracy, and cost reduction. AI is uniquely suited to address these challenges, ultimately improving the quality of life and reducing healthcare costs for patients worldwide. Despite its potential, the adoption of AI in healthcare has been slower compared to other industries, highlighting the need to understand the specific obstacles hindering its progress. This review identifies the current shortcomings of AI in healthcare and explores its possibilities, realities, and frontiers to provide a roadmap for future advancements.
Affiliation(s)
- Nan Lin
- Department of Gastroenterology, The Affiliated Hospital of Putian University, Putian 351100, China
- Rudy Paul
- Department of Ocean & Mechanical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Santiago Guerra
- Department of Ocean & Mechanical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Yan Liu
- Department of Gastroenterology, The Affiliated Hospital of Putian University, Putian 351100, China
- Department of Neurosurgery, Marcus Neuroscience Institute, Boca Raton Regional Hospital, Boca Raton, FL 33486, USA
- James Doulgeris
- Department of Biomedical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Min Shi
- Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02115, USA
- School of Computing and Informatics, University of Louisiana, Lafayette, LA 70504, USA
- Maohua Lin
- Department of Biomedical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Erik D Engeberg
- Department of Ocean & Mechanical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Department of Biomedical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Center for Complex Systems and Brain Science, Florida Atlantic University, Boca Raton, FL 33431, USA
- Javad Hashemi
- Department of Biomedical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Frank D Vrionis
- Department of Neurosurgery, Marcus Neuroscience Institute, Boca Raton Regional Hospital, Boca Raton, FL 33486, USA
33
Jütte L, Patel H, Roth B. Advancing dermoscopy through a synthetic hair benchmark dataset and deep learning-based hair removal. JOURNAL OF BIOMEDICAL OPTICS 2024; 29:116003. [PMID: 39564076 PMCID: PMC11575456 DOI: 10.1117/1.jbo.29.11.116003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/25/2024] [Revised: 11/05/2024] [Accepted: 11/12/2024] [Indexed: 11/21/2024]
Abstract
Significance Early detection of melanoma is crucial for improving patient outcomes, and dermoscopy is a critical tool for this purpose. However, hair presence in dermoscopic images can obscure important features, complicating the diagnostic process. Enhancing image clarity by removing hair without compromising lesion integrity can significantly aid dermatologists in accurate melanoma detection. Aim We aim to develop a novel synthetic hair dermoscopic image dataset and a deep learning model specifically designed for hair removal in melanoma dermoscopy images. Approach To address the challenge of hair in dermoscopic images, we created a comprehensive synthetic hair dataset that simulates various hair types and dimensions over melanoma lesions. We then designed a convolutional neural network (CNN)-based model that focuses on effective hair removal while preserving the integrity of the melanoma lesions. Results The CNN-based model demonstrated significant improvements in the clarity and diagnostic utility of dermoscopic images. The enhanced images provided by our model offer a valuable tool for the dermatological community, aiding in more accurate and efficient melanoma detection. Conclusions The introduction of our synthetic hair dermoscopic image dataset and CNN-based model represents a significant advancement in medical image analysis for melanoma detection. By effectively removing hair from dermoscopic images while preserving lesion details, our approach enhances diagnostic accuracy and supports early melanoma detection efforts.
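The dataset-construction step described above can be illustrated in miniature: overlay synthetic straight-line "hairs" on a blank mask. This is a hypothetical sketch, not the authors' generator; real hair is curved and of varying width, and all sizes and counts below are invented for illustration.

```python
import random

# Sketch of building a synthetic-hair mask: overlay straight "hairs"
# (line segments) on a blank grid. Simplified for illustration only.

def draw_segment(mask, x0, y0, x1, y1):
    """Mark a line segment on the mask using simple linear interpolation."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        mask[y][x] = 1

def synthetic_hair_mask(size=64, n_hairs=5, seed=0):
    """Build a size x size binary mask with n_hairs random segments."""
    rng = random.Random(seed)
    mask = [[0] * size for _ in range(size)]
    for _ in range(n_hairs):
        x0, y0 = rng.randrange(size), rng.randrange(size)
        x1, y1 = rng.randrange(size), rng.randrange(size)
        draw_segment(mask, x0, y0, x1, y1)
    return mask

mask = synthetic_hair_mask()
print(sum(map(sum, mask)), "hair pixels in a 64 x 64 mask")
```

A paired training example for an inpainting model would then be (lesion image with mask applied, original lesion image), with the mask marking the pixels the network must restore.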
Collapse
Affiliation(s)
- Lennart Jütte
- Leibniz University Hannover, Hannover Centre for Optical Technologies, Hannover, Germany
| | - Harshkumar Patel
- Leibniz University Hannover, Hannover Centre for Optical Technologies, Hannover, Germany
| | - Bernhard Roth
- Leibniz University Hannover, Hannover Centre for Optical Technologies, Hannover, Germany
- Leibniz University Hannover, Cluster of Excellence PhoenixD, Hannover, Germany
| |
Collapse
|
34
|
Sajdeya R, Narouze S. Harnessing artificial intelligence for predicting and managing postoperative pain: a narrative literature review. Curr Opin Anaesthesiol 2024; 37:604-615. [PMID: 39011674 DOI: 10.1097/aco.0000000000001408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/17/2024]
Abstract
PURPOSE OF REVIEW This review examines recent research on artificial intelligence focusing on machine learning (ML) models for predicting postoperative pain outcomes. We also identify technical, ethical, and practical hurdles that demand continued investigation and research. RECENT FINDINGS Current ML models leverage diverse datasets, algorithmic techniques, and validation methods to identify predictive biomarkers, risk factors, and phenotypic signatures associated with increased acute and chronic postoperative pain and persistent opioid use. ML models demonstrate satisfactory performance to predict pain outcomes and their prognostic trajectories, identify modifiable risk factors and at-risk patients who benefit from targeted pain management strategies, and show promise in pain prevention applications. However, further evidence is needed to evaluate the reliability, generalizability, effectiveness, and safety of ML-driven approaches before their integration into perioperative pain management practices. SUMMARY Artificial intelligence (AI) has the potential to enhance perioperative pain management by providing more accurate predictive models and personalized interventions. By leveraging ML algorithms, clinicians can better identify at-risk patients and tailor treatment strategies accordingly. However, successful implementation needs to address challenges in data quality, algorithmic complexity, and ethical and practical considerations. Future research should focus on validating AI-driven interventions in clinical practice and fostering interdisciplinary collaboration to advance perioperative care.
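The risk-prediction idea reviewed above can be sketched with a toy logistic model over preoperative risk factors. The feature names, weights, and threshold below are invented for this illustration and are not taken from the review or any validated model.

```python
import math

# Hypothetical illustration of an ML-style risk score for postoperative
# pain: a logistic model over preoperative risk factors. Weights are
# invented for this sketch, not drawn from a trained model.

WEIGHTS = {"preop_pain_score": 0.35, "opioid_use": 0.9,
           "anxiety_score": 0.25, "age_over_65": -0.2}
BIAS = -2.0

def pain_risk(patient):
    """Return an illustrative probability of high postoperative pain."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_at_risk(patients, threshold=0.5):
    """Select patients who might benefit from targeted pain management."""
    return [p["id"] for p in patients if pain_risk(p) >= threshold]

cohort = [
    {"id": "A", "preop_pain_score": 7, "opioid_use": 1, "anxiety_score": 4},
    {"id": "B", "preop_pain_score": 1, "opioid_use": 0, "anxiety_score": 1},
]
print(flag_at_risk(cohort))  # patient A crosses the illustrative threshold
```

In practice, as the review stresses, such a model would need rigorous validation for reliability, generalizability, and safety before informing any clinical decision.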
Collapse
Affiliation(s)
- Ruba Sajdeya
- Department of Anesthesiology, Duke University School of Medicine, Durham, North Carolina
| | - Samer Narouze
- Division of Pain Medicine, University Hospitals Medical Center, Cleveland, Ohio, USA
| |
Collapse
|
35
|
Iftikhar M, Saqib M, Zareen M, Mumtaz H. Artificial intelligence: revolutionizing robotic surgery: review. Ann Med Surg (Lond) 2024; 86:5401-5409. [PMID: 39238994 PMCID: PMC11374272 DOI: 10.1097/ms9.0000000000002426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2024] [Accepted: 07/25/2024] [Indexed: 09/07/2024] Open
Abstract
Robotic surgery, known for its minimally invasive techniques and computer-controlled robotic arms, has revolutionized modern medicine by providing improved dexterity, visualization, and tremor reduction compared to traditional methods. The integration of artificial intelligence (AI) into robotic surgery has further advanced surgical precision, efficiency, and accessibility. This paper examines the current landscape of AI-driven robotic surgical systems, detailing their benefits, limitations, and future prospects. Initially, AI applications in robotic surgery focused on automating tasks like suturing and tissue dissection to enhance consistency and reduce surgeon workload. Present AI-driven systems incorporate functionalities such as image recognition, motion control, and haptic feedback, allowing real-time analysis of surgical field images and optimizing instrument movements for surgeons. The advantages of AI integration include enhanced precision, reduced surgeon fatigue, and improved safety. However, challenges such as high development costs, reliance on data quality, and ethical concerns about autonomy and liability hinder widespread adoption. Regulatory hurdles and workflow integration also present obstacles. Future directions for AI integration in robotic surgery include enhancing autonomy, personalizing surgical approaches, and refining surgical training through AI-powered simulations and virtual reality. Overall, AI integration holds promise for advancing surgical care, with potential benefits including improved patient outcomes and increased access to specialized expertise. Addressing challenges and promoting responsible adoption are essential for realizing the full potential of AI-driven robotic surgery.
Collapse
|
36
|
Jain R, Srirambhatla R, Kessler J, Goel R. The Black Box Dilemma: Challenges in Human-AI Collaboration in ML-CDSS. THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2024; 24:108-110. [PMID: 39226005 DOI: 10.1080/15265161.2024.2377105] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/04/2024]
Affiliation(s)
| | | | | | - Ram Goel
- MIT Computer Science and Artificial Intelligence Laboratory
| |
Collapse
|
37
|
Liawrungrueang W, Han I, Cholamjiak W, Sarasombath P, Riew KD. Artificial Intelligence Detection of Cervical Spine Fractures Using Convolutional Neural Network Models. Neurospine 2024; 21:833-841. [PMID: 39363462 PMCID: PMC11456954 DOI: 10.14245/ns.2448580.290] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2024] [Revised: 07/05/2024] [Accepted: 07/14/2024] [Indexed: 10/05/2024] Open
Abstract
OBJECTIVE To develop and evaluate a technique using convolutional neural networks (CNNs) for the computer-assisted diagnosis of cervical spine fractures from radiographic x-ray images. By leveraging deep learning techniques, the study may ultimately support improved patient outcomes and clinical decision-making. METHODS This study obtained 500 lateral cervical spine radiographs from standard open-source dataset repositories to develop a classification model using CNNs. All images contained diagnostic information, comprising normal cervical radiographs (n=250) and cervical spine fracture images (n=250). The model classifies whether or not a patient has a cervical spine fracture. Seventy percent of the images were used for model training and 30% for testing. Konstanz Information Miner (KNIME)'s graphical user interface-based programming enabled class label annotation, data preprocessing, CNN model training, and performance evaluation. RESULTS The performance evaluation of the model for detecting cervical spine fractures presents compelling results across various metrics. The model exhibits high sensitivity (recall) values of 0.886 for fractures and 0.957 for normal cases, indicating its proficiency in identifying true positives. Precision values of 0.954 for fractures and 0.893 for normal cases highlight the model's ability to minimize false positives. With specificity values of 0.957 for fractures and 0.886 for normal cases, the model effectively identifies true negatives. The overall accuracy of 92.14%, together with the area under the receiver operating characteristic curve, highlights its reliability in correctly classifying cases. CONCLUSION We successfully used deep learning models for computer-assisted diagnosis of cervical spine fractures from radiographic x-ray images. This approach can assist radiologists in screening, detecting, and diagnosing cervical spine fractures.
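The relationship between the metrics reported above follows directly from a binary confusion matrix; note that in a two-class task, sensitivity for "fracture" equals specificity for "normal", matching the paired values reported (0.886/0.957). The counts below are hypothetical, chosen only to illustrate the arithmetic, not the study's actual test set.

```python
# Illustrative only: deriving sensitivity, specificity, precision, and
# accuracy from a binary confusion matrix. Counts are hypothetical.

def binary_metrics(tp, fn, fp, tn):
    """Standard classification metrics for one class of a binary task."""
    sensitivity = tp / (tp + fn)            # recall for the positive class
    specificity = tn / (tn + fp)            # recall for the negative class
    precision = tp / (tp + fp)              # positive predictive value
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, precision, accuracy

# Hypothetical counts for the "fracture" class on a 150-image test split:
sens, spec, prec, acc = binary_metrics(tp=66, fn=9, fp=3, tn=72)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} "
      f"precision={prec:.3f} accuracy={acc:.3f}")
```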
Collapse
Affiliation(s)
| | - Inbo Han
- Department of Neurosurgery, CHA Bundang Medical Center, CHA University School of Medicine, Seongnam, Korea
| | | | - Peem Sarasombath
- Department of Orthopaedics, Phramongkutklao Hospital and College of Medicine, Bangkok, Thailand
| | - K. Daniel Riew
- Department of Neurological Surgery, Weill-Cornell Medicine and Department of Orthopedic Surgery, The Och Spine Hospital at New York Presbyterian Hospital, Columbia University, New York, NY, USA
| |
Collapse
|
38
|
Zhao A, Li L, Liu S. UIDF-Net: Unsupervised Image Dehazing and Fusion Utilizing GAN and Encoder-Decoder. J Imaging 2024; 10:164. [PMID: 39057735 PMCID: PMC11278268 DOI: 10.3390/jimaging10070164] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2024] [Revised: 06/27/2024] [Accepted: 07/09/2024] [Indexed: 07/28/2024] Open
Abstract
Hazy weather degrades image quality, causing images to become blurry with reduced contrast. This makes object edges and features unclear, leading to lower detection accuracy and reliability. To enhance haze removal, we propose an image dehazing and fusion network based on the encoder-decoder paradigm (UIDF-Net). This network leverages the Image Fusion Module (MDL-IFM) to fuse the features of dehazed images, producing clearer results. Additionally, to better extract haze information, we introduce a haze encoder (Mist-Encode) that effectively processes different frequency features of images, improving the model's performance on image dehazing tasks. Experimental results demonstrate that the proposed model achieves superior dehazing performance compared to existing algorithms on outdoor datasets.
Collapse
Affiliation(s)
- Anxin Zhao
- School of Communication and Information Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
| | | | | |
Collapse
|
39
|
Ling R, Wang M, Lu J, Wu S, Wu P, Ge J, Wang L, Liu Y, Jiang J, Shi K, Yan Z, Zuo C, Jiang J. Radiomics-Guided Deep Learning Networks Classify Differential Diagnosis of Parkinsonism. Brain Sci 2024; 14:680. [PMID: 39061420 PMCID: PMC11274493 DOI: 10.3390/brainsci14070680] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2024] [Revised: 06/17/2024] [Accepted: 06/26/2024] [Indexed: 07/28/2024] Open
Abstract
The differential diagnosis of atypical Parkinsonian syndromes can be challenging yet critical. We aimed to propose a radiomics-guided deep learning (DL) model to discover interpretable DL features and further verify the proposed model through the differential diagnosis of Parkinsonian syndromes. We recruited 1495 subjects for 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) scanning, including 220 healthy controls and 1275 patients diagnosed with idiopathic Parkinson's disease (IPD), multiple system atrophy (MSA), or progressive supranuclear palsy (PSP). A baseline radiomics approach and two DL models were developed and tested for Parkinsonian diagnosis. The DL latent features were extracted from the last layer and subsequently guided by radiomics. The radiomics-guided DL model outperformed the baseline radiomics approach, suggesting the effectiveness of the DL approach. DenseNet showed the best diagnostic ability (sensitivity: 95.7%, 90.1%, and 91.2% for IPD, MSA, and PSP, respectively) using retained DL features in the test dataset. The retained DL latent features were significantly associated with radiomics features and could be interpreted through biological explanations of handcrafted radiomics features. The radiomics-guided DL model offers interpretable high-level abstract information for the differential diagnosis of Parkinsonian disorders and holds considerable promise for personalized disease monitoring.
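The "guidance" step described above, retaining DL latent features that are associated with handcrafted radiomics features, can be sketched with a simple correlation filter. The data, the Pearson statistic as the association measure, and the 0.5 cutoff are all assumptions of this sketch, not details from the paper.

```python
import math

# Sketch of a radiomics-guided selection step: keep only DL latent
# features that correlate strongly with a handcrafted radiomics feature
# across subjects. Data and the 0.5 cutoff are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def guide_latent_features(latent, radiomic, r_cut=0.5):
    """Indices of latent features with |r| >= r_cut vs. the radiomic one."""
    return [i for i, feat in enumerate(latent)
            if abs(pearson(feat, radiomic)) >= r_cut]

radiomic = [1.0, 2.0, 3.0, 4.0]          # one handcrafted feature, 4 subjects
latent = [[1.1, 2.2, 2.9, 4.1],          # tracks the radiomic feature
          [5.0, 1.0, 4.0, 2.0]]          # unrelated noise
print(guide_latent_features(latent, radiomic))  # only index 0 is retained
```

The retained indices then point to latent dimensions that admit a biological interpretation via their correlated handcrafted counterparts.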
Collapse
Affiliation(s)
- Ronghua Ling
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China;
- School of Medical Imaging, Shanghai University of Medicine & Health Science, Shanghai 201318, China;
| | - Min Wang
- School of Life Sciences, Shanghai University, Shanghai 200444, China (J.J.)
| | - Jiaying Lu
- Department of Nuclear Medicine & PET Center, National Clinical Research Center for Aging and Medicine, & National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai 200437, China
| | - Shaoyou Wu
- School of Life Sciences, Shanghai University, Shanghai 200444, China (J.J.)
| | - Ping Wu
- Department of Nuclear Medicine & PET Center, National Clinical Research Center for Aging and Medicine, & National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai 200437, China
| | - Jingjie Ge
- Department of Nuclear Medicine & PET Center, National Clinical Research Center for Aging and Medicine, & National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai 200437, China
| | - Luyao Wang
- School of Life Sciences, Shanghai University, Shanghai 200444, China (J.J.)
| | - Yingqian Liu
- School of Electrical Engineering, Shandong University of Aeronautics, Binzhou 256601, China
| | - Juanjuan Jiang
- School of Medical Imaging, Shanghai University of Medicine & Health Science, Shanghai 201318, China;
| | - Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland
- Computer Aided Medical Procedures, School of Computation, Information and Technology, Technical University of Munich, 85748 Munich, Germany
| | - Zhuangzhi Yan
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China;
- School of Life Sciences, Shanghai University, Shanghai 200444, China (J.J.)
| | - Chuantao Zuo
- Department of Nuclear Medicine & PET Center, National Clinical Research Center for Aging and Medicine, & National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai 200437, China
| | - Jiehui Jiang
- School of Life Sciences, Shanghai University, Shanghai 200444, China (J.J.)
| |
Collapse
|
40
|
Jeyaraman N, Jeyaraman M, Yadav S, Ramasubramanian S, Balaji S, Muthu S, Lekha P C, Patro BP. Applications of Fog Computing in Healthcare. Cureus 2024; 16:e64263. [PMID: 39130982 PMCID: PMC11315376 DOI: 10.7759/cureus.64263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/10/2024] [Indexed: 08/13/2024] Open
Abstract
Fog computing is a decentralized computing infrastructure that processes data at or near its source, reducing latency and bandwidth usage. This technology is gaining traction in healthcare due to its potential to enhance real-time data processing and decision-making capabilities in critical medical scenarios. A systematic review of existing literature on fog computing in healthcare was conducted. The review included searches in major databases such as PubMed, IEEE Xplore, Scopus, and Google Scholar. The search terms used were "fog computing in healthcare," "real-time diagnostics and fog computing," "continuous patient monitoring fog computing," "predictive analytics fog computing," "interoperability in fog computing healthcare," "scalability issues fog computing healthcare," and "security challenges fog computing healthcare." Articles published between 2010 and 2023 were considered. Inclusion criteria encompassed peer-reviewed articles, conference papers, and review articles focusing on the applications of fog computing in healthcare. Exclusion criteria were articles not available in English, those not related to healthcare applications, and those lacking empirical data. Data extraction focused on the applications of fog computing in real-time diagnostics, continuous monitoring, predictive analytics, and the identified challenges of interoperability, scalability, and security. Fog computing significantly enhances diagnostic capabilities by facilitating real-time data analysis, crucial for urgent diagnostics such as stroke detection, by processing data closer to its source. It also improves monitoring during surgeries by enabling real-time processing of vital signs and physiological parameters, thereby enhancing patient safety. In chronic disease management, continuous data collection and analysis through wearable devices allow for proactive disease management and timely adjustments to treatment plans. Additionally, fog computing supports telemedicine by enabling real-time communication between remote specialists and patients, thereby improving access to specialist care in underserved regions. Fog computing offers transformative potential in healthcare, improving diagnostic precision, patient monitoring, and personalized treatment. Addressing the challenges of interoperability, scalability, and security will be crucial for fully realizing the benefits of fog computing in healthcare, leading to a more connected and efficient healthcare environment.
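The core fog pattern reviewed above, analyzing data near its source and forwarding only what matters upstream, can be sketched as a bedside node that filters a vital-sign stream. The thresholds and readings below are hypothetical, chosen only to illustrate the latency- and bandwidth-saving idea.

```python
# Minimal sketch of a fog node: analyze vital signs locally and forward
# only anomalous readings to the cloud. Thresholds are illustrative, not
# clinical alarm limits.

SPO2_MIN, HR_MAX = 90, 120  # hypothetical alarm thresholds

def fog_filter(readings):
    """Return (alerts_to_forward, count_handled_locally)."""
    alerts = [r for r in readings
              if r["spo2"] < SPO2_MIN or r["hr"] > HR_MAX]
    return alerts, len(readings) - len(alerts)

stream = [
    {"t": 0, "spo2": 97, "hr": 80},
    {"t": 1, "spo2": 88, "hr": 95},   # low oxygen saturation -> forward
    {"t": 2, "spo2": 96, "hr": 130},  # elevated heart rate -> forward
    {"t": 3, "spo2": 98, "hr": 76},
]
alerts, local = fog_filter(stream)
print([a["t"] for a in alerts], local)  # two alerts forwarded, two kept local
```

Only the two anomalous samples leave the node; routine readings are summarized or discarded locally, which is the bandwidth and latency win fog architectures aim for.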
Collapse
Affiliation(s)
- Naveen Jeyaraman
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
| | - Madhan Jeyaraman
- Clinical Research, Virginia Tech India, Dr. MGR Educational and Research Institute, Chennai, IND
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
| | - Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
| | | | - Sangeetha Balaji
- Orthopaedics, Government Medical College, Omandurar Government Estate, Chennai, IND
| | - Sathish Muthu
- Orthopaedics and Traumatology, Orthopaedic Research Group, Coimbatore, IND
- Biotechnology, Karpagam Academy of Higher Education, Coimbatore, IND
- Orthopaedics, Government Medical College, Karur, IND
| | - Chithra Lekha P
- Clinical Research, Virginia Tech India, Dr. MGR Educational and Research Institute, Chennai, IND
| | - Bishnu P Patro
- Orthopaedics, All India Institute of Medical Sciences, Bhubaneswar, IND
| |
Collapse
|
41
|
Dash SK, Mishra S, Mishra S. Diagnostic Potentials of Lung Ultrasound In Neonatal Care: An Updated Overview. Cureus 2024; 16:e62200. [PMID: 39006672 PMCID: PMC11239959 DOI: 10.7759/cureus.62200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/11/2024] [Indexed: 07/16/2024] Open
Abstract
Lung ultrasound, aided by recent technological strides such as high-frequency probes, has become a crucial non-invasive diagnostic tool in neonatal care, revolutionizing how respiratory conditions are assessed in the neonatal intensive care unit (NICU). High-frequency probes and portable devices significantly enhance the effectiveness of lung ultrasound in identifying respiratory distress syndrome (RDS), pneumonia, and pneumothorax, underscoring its growing significance. This comprehensive review explores the historical journey of lung ultrasonography, technological advancements, contemporary applications in neonatal care, emerging trends, and collaborative initiatives, and foresees a future where personalized healthcare optimizes outcomes for neonates.
Collapse
Affiliation(s)
- Swarup Kumar Dash
- Pediatrics/Neonatology, Latifa Women and Children Hospital, Dubai, ARE
| | - Swagatika Mishra
- Prosthetics and Orthotics (Cranial), OrthoMENA Prosthetics and Orthotics Centre, Dubai, ARE
| | - Swapnesh Mishra
- General Medicine, Pandit Raghunath Murmu Medical College, Baripada, IND
| |
Collapse
|
42
|
Umemoto M, Mariya T, Nambu Y, Nagata M, Horimai T, Sugita S, Kanaseki T, Takenaka Y, Shinkai S, Matsuura M, Iwasaki M, Hirohashi Y, Hasegawa T, Torigoe T, Fujino Y, Saito T. Prediction of Mismatch Repair Status in Endometrial Cancer from Histological Slide Images Using Various Deep Learning-Based Algorithms. Cancers (Basel) 2024; 16:1810. [PMID: 38791889 PMCID: PMC11119770 DOI: 10.3390/cancers16101810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2024] [Revised: 04/22/2024] [Accepted: 05/08/2024] [Indexed: 05/26/2024] Open
Abstract
The application of deep learning algorithms to predict the molecular profiles of various cancers from digital images of hematoxylin and eosin (H&E)-stained slides has been reported in recent years, mainly for gastric and colon cancers. In this study, we investigated the potential use of H&E-stained endometrial cancer slide images to predict the associated mismatch repair (MMR) status. H&E-stained slide images were collected from 127 cases of the primary lesion of endometrial cancer. After digitization using a Nanozoomer virtual slide scanner (Hamamatsu Photonics), we segmented the scanned images into 5397 tiles of 512 × 512 pixels. The MMR proteins (PMS2, MSH6) were immunohistochemically stained, classified into MMR proficient/deficient, and annotated for each case and tile. We trained several neural networks, including convolutional and attention-based networks, using tiles annotated with the MMR status. Among the tested networks, ResNet50 exhibited the highest area under the receiver operating characteristic curve (AUROC) of 0.91 for predicting the MMR status. The constructed prediction algorithm may be applicable to other molecular profiles and useful for pre-screening before implementing other, more costly genetic profiling tests.
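The tiling step described above, segmenting a scanned whole-slide image into 512 × 512 pixel tiles, reduces to a simple grid walk. The slide dimensions below are hypothetical, and this sketch drops partial edge tiles; the paper does not state how its pipeline handled slide borders.

```python
# Sketch of splitting a scanned slide into non-overlapping 512 x 512
# tiles, keeping only full tiles. Dimensions are hypothetical.

def tile_coords(width, height, tile=512):
    """Yield (x, y) upper-left corners of full tiles inside the image."""
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            yield x, y

coords = list(tile_coords(2048, 1536))
print(len(coords), coords[:3])  # 12 tiles for a 2048 x 1536 scan
```

Each tile then inherits the slide-level MMR proficient/deficient label for training, the per-tile annotation scheme the abstract describes.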
Collapse
Affiliation(s)
- Mina Umemoto
- Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (M.U.); (Y.T.); (S.S.); (M.M.); (M.I.); (T.S.)
| | - Tasuku Mariya
- Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (M.U.); (Y.T.); (S.S.); (M.M.); (M.I.); (T.S.)
| | - Yuta Nambu
- Department of Media Architecture, Future University Hakodate, Hakodate 041-8655, Japan; (Y.N.); (M.N.); (Y.F.)
| | - Mai Nagata
- Department of Media Architecture, Future University Hakodate, Hakodate 041-8655, Japan; (Y.N.); (M.N.); (Y.F.)
| | | | - Shintaro Sugita
- Department of Surgical Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (S.S.); (T.H.)
| | - Takayuki Kanaseki
- Department of Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (T.K.); (Y.H.); (T.T.)
| | - Yuka Takenaka
- Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (M.U.); (Y.T.); (S.S.); (M.M.); (M.I.); (T.S.)
| | - Shota Shinkai
- Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (M.U.); (Y.T.); (S.S.); (M.M.); (M.I.); (T.S.)
| | - Motoki Matsuura
- Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (M.U.); (Y.T.); (S.S.); (M.M.); (M.I.); (T.S.)
| | - Masahiro Iwasaki
- Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (M.U.); (Y.T.); (S.S.); (M.M.); (M.I.); (T.S.)
| | - Yoshihiko Hirohashi
- Department of Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (T.K.); (Y.H.); (T.T.)
| | - Tadashi Hasegawa
- Department of Surgical Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (S.S.); (T.H.)
| | - Toshihiko Torigoe
- Department of Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (T.K.); (Y.H.); (T.T.)
| | - Yuichi Fujino
- Department of Media Architecture, Future University Hakodate, Hakodate 041-8655, Japan; (Y.N.); (M.N.); (Y.F.)
| | - Tsuyoshi Saito
- Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan; (M.U.); (Y.T.); (S.S.); (M.M.); (M.I.); (T.S.)
| |
Collapse
|
43
|
Thakur GK, Thakur A, Kulkarni S, Khan N, Khan S. Deep Learning Approaches for Medical Image Analysis and Diagnosis. Cureus 2024; 16:e59507. [PMID: 38826977 PMCID: PMC11144045 DOI: 10.7759/cureus.59507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2024] [Accepted: 05/01/2024] [Indexed: 06/04/2024] Open
Abstract
In addition to enhancing diagnostic accuracy, deep learning techniques offer the potential to streamline workflows, reduce interpretation time, and ultimately improve patient outcomes. The scalability and adaptability of deep learning algorithms enable their deployment across diverse clinical settings, ranging from radiology departments to point-of-care facilities. Furthermore, ongoing research efforts focus on addressing the challenges of data heterogeneity, model interpretability, and regulatory compliance, paving the way for seamless integration of deep learning solutions into routine clinical practice. As the field continues to evolve, collaborations between clinicians, data scientists, and industry stakeholders will be paramount in harnessing the full potential of deep learning for advancing medical image analysis and diagnosis. Furthermore, the integration of deep learning algorithms with other technologies, including natural language processing and computer vision, may foster multimodal medical data analysis and clinical decision support systems to improve patient care. The future of deep learning in medical image analysis and diagnosis is promising. With each success and advancement, the technology moves closer to routine clinical use. Beyond medical image analysis, patient care pathways such as multimodal imaging, imaging genomics, and intelligent operating rooms or intensive care units can benefit from deep learning models.
Collapse
Affiliation(s)
- Gopal Kumar Thakur
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
| | - Abhishek Thakur
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
| | - Shridhar Kulkarni
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
| | - Naseebia Khan
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
| | - Shahnawaz Khan
- Department of Computer Application, Bundelkhand University, Jhansi, IND
| |
Collapse
|
44
|
Camastra C, Pasini G, Stefano A, Russo G, Vescio B, Bini F, Marinozzi F, Augimeri A. Development and Implementation of an Innovative Framework for Automated Radiomics Analysis in Neuroimaging. J Imaging 2024; 10:96. [PMID: 38667994 PMCID: PMC11051015 DOI: 10.3390/jimaging10040096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2024] [Revised: 04/11/2024] [Accepted: 04/16/2024] [Indexed: 04/28/2024] Open
Abstract
Radiomics represents an innovative approach to medical image analysis, enabling comprehensive quantitative evaluation of radiological images through advanced image processing and Machine or Deep Learning algorithms. This technique uncovers intricate data patterns beyond human visual detection. Traditionally, executing a radiomic pipeline involves multiple standardized phases spread across several software platforms, a limitation overcome by the development of the matRadiomics application. MatRadiomics, a freely available, IBSI-compliant tool, features an intuitive Graphical User Interface (GUI) that facilitates the entire radiomics workflow, from DICOM image importation to segmentation, feature selection and extraction, and Machine Learning model construction. In this project, an extension of matRadiomics was developed to support the importation of brain MRI images and segmentations in NIfTI format, thus extending its applicability to neuroimaging. This enhancement allows radiomic pipelines to be executed seamlessly within matRadiomics, offering substantial advantages to the realm of neuroimaging.
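The feature-extraction stage of a radiomics pipeline like the one described above can be sketched with first-order statistics over the intensities of a segmented region of interest. This is a generic illustration, not matRadiomics code; the intensity values are hypothetical, and IBSI-compliant tools compute many more features with standardized discretization.

```python
import math

# Illustrative first-order radiomics features over a segmented ROI's
# intensity values: mean, variance, and Shannon entropy.

def first_order_features(roi):
    """Compute simple first-order statistics for a list of intensities."""
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((v - mean) ** 2 for v in roi) / n
    counts = {}
    for v in roi:                      # histogram of discrete intensities
        counts[v] = counts.get(v, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": variance, "entropy": entropy}

roi = [10, 10, 12, 14, 14, 14, 16, 18]  # hypothetical ROI intensities
print(first_order_features(roi))
```

Such per-ROI feature vectors are what the pipeline's later feature-selection and model-construction stages consume.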
Collapse
Affiliation(s)
- Chiara Camastra
- Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Eudossiana 18, 00184 Rome, Italy; (G.P.); (F.B.); (F.M.)
| | - Giovanni Pasini
- Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Eudossiana 18, 00184 Rome, Italy; (G.P.); (F.B.); (F.M.)
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù and 88100 Catanzaro, Italy; (A.S.); (G.R.); or (B.V.)
| | - Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù and 88100 Catanzaro, Italy; (A.S.); (G.R.); or (B.V.)
| | - Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù and 88100 Catanzaro, Italy; (A.S.); (G.R.); or (B.V.)
| | - Basilio Vescio
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù and 88100 Catanzaro, Italy; (A.S.); (G.R.); or (B.V.)
- Biotecnomed SCARL, Campus Universitario di Germaneto, Viale Europa, 88100 Catanzaro, Italy;
| | - Fabiano Bini
- Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Eudossiana 18, 00184 Rome, Italy; (G.P.); (F.B.); (F.M.)
| | - Franco Marinozzi
- Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Eudossiana 18, 00184 Rome, Italy; (G.P.); (F.B.); (F.M.)
| | - Antonio Augimeri
- Biotecnomed SCARL, Campus Universitario di Germaneto, Viale Europa, 88100 Catanzaro, Italy;
| |
Collapse
|