1
Huang X, Wang Z, Zhou W, Yang K, Wen K, Liu H, Huang S, Lyu M. Tailored self-supervised pretraining improves brain MRI diagnostic models. Comput Med Imaging Graph 2025; 123:102560. [PMID: 40252479] [DOI: 10.1016/j.compmedimag.2025.102560]
Abstract
Self-supervised learning has shown potential in enhancing deep learning methods, yet its application in brain magnetic resonance imaging (MRI) analysis remains underexplored. This study seeks to leverage large-scale, unlabeled public brain MRI datasets to improve the performance of deep learning models in various downstream tasks for the development of clinical decision support systems. To enhance training efficiency, data filtering methods based on image entropy and slice positions were developed, condensing a combined dataset of approximately 2 million images from fastMRI-brain, OASIS-3, IXI, and BraTS21 into a more focused set of 250 K images enriched with brain features. The Momentum Contrast (MoCo) v3 algorithm was then employed to learn these image features, resulting in robustly pretrained models specifically tailored to brain MRI. The pretrained models were subsequently evaluated in tumor classification, lesion detection, hippocampal segmentation, and image reconstruction tasks. The results demonstrate that our brain MRI-oriented pretraining outperformed both ImageNet pretraining and pretraining on larger multi-organ, multi-modality medical datasets, achieving a ∼2.8 % increase in 4-class tumor classification accuracy, a ∼0.9 % improvement in tumor detection mean average precision, a ∼3.6 % gain in adult hippocampal segmentation Dice score, and a ∼0.1 PSNR improvement in reconstruction at 2-fold acceleration. This study underscores the potential of self-supervised learning for brain MRI using large-scale, tailored datasets derived from public sources.
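For illustration, the entropy- and slice-position-based data filtering described above can be sketched in Python as follows; the entropy threshold, histogram bin count, and central-slice fraction are assumed placeholder values, not the authors' settings.

```python
import numpy as np

def slice_entropy(img, bins=64):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def filter_slices(volume, entropy_threshold=3.0, keep_central_fraction=0.8):
    """Keep information-rich slices near the volume centre (illustrative values)."""
    n = volume.shape[0]
    margin = int(n * (1 - keep_central_fraction) / 2)
    return [i for i in range(margin, n - margin)
            if slice_entropy(volume[i]) >= entropy_threshold]

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 128, 128))   # stand-in for a brain MRI slice stack
print(len(filter_slices(volume)))
```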
Affiliation(s)
- Xinhao Huang
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China; College of Applied Sciences, Shenzhen University, Shenzhen, China; Guangdong-Hongkong-Macau CNS Regeneration Institute, Key Laboratory of CNS Regeneration (Jinan University)-Ministry of Education, Jinan University, Guangzhou, China
- Zihao Wang
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China; College of Applied Sciences, Shenzhen University, Shenzhen, China; Guangdong-Hongkong-Macau CNS Regeneration Institute, Key Laboratory of CNS Regeneration (Jinan University)-Ministry of Education, Jinan University, Guangzhou, China
- Weichen Zhou
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Kexin Yang
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China; College of Applied Sciences, Shenzhen University, Shenzhen, China
- Kaihua Wen
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Haiguang Liu
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Shoujin Huang
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Mengye Lyu
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China; College of Applied Sciences, Shenzhen University, Shenzhen, China; Guangdong-Hongkong-Macau CNS Regeneration Institute, Key Laboratory of CNS Regeneration (Jinan University)-Ministry of Education, Jinan University, Guangzhou, China.
2
Pisani N, Abate F, Avallone AR, Barone P, Cesarelli M, Amato F, Picillo M, Ricciardi C. A radiomics approach to distinguish Progressive Supranuclear Palsy Richardson's syndrome from other phenotypes starting from MR images. Comput Methods Programs Biomed 2025; 266:108778. [PMID: 40250307] [DOI: 10.1016/j.cmpb.2025.108778]
Abstract
BACKGROUND AND OBJECTIVE Progressive Supranuclear Palsy (PSP) is an uncommon neurodegenerative disorder with differing clinical onsets, including Richardson's syndrome (PSP-RS) and other variant phenotypes (vPSP). Recognising the clinical progression of the different phenotypes would improve the accuracy of PSP detection and treatment. The goal of the study was to identify radiomic biomarkers, extracted from T1-weighted magnetic resonance images (MRI), for distinguishing PSP phenotypes. METHODS Forty PSP patients (20 PSP-RS and 20 vPSP) took part in the present work. Radiomic features were collected from 21 regions of interest (ROIs), mainly in the frontal cortex, supratentorial white matter, basal nuclei, brainstem, cerebellum, and 3rd and 4th ventricles. After feature selection, three tree-based machine learning (ML) classifiers were implemented to classify PSP phenotypes. RESULTS Ten of the 21 ROIs performed best in terms of sensitivity, specificity, accuracy and area under the receiver operating characteristic curve (AUCROC). In particular, features extracted from the pons obtained the best accuracy (0.92) and AUCROC (0.83), while the evaluation metrics for the other ROIs ranged from 0.67 to 0.83. Eight features of the Gray Level Dependence Matrix were recurrently selected across the 10 ROIs. Furthermore, when these ROIs were combined, the results exceeded 0.83 for phenotype classification, with the selected areas being the brainstem, pons, occipital white matter, precentral gyrus and thalamus. CONCLUSIONS Based on the achieved results, the proposed approach could represent a promising tool for distinguishing PSP-RS from vPSP.
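As a hedged illustration of the tree-based classification of radiomic features described above, a minimal sketch follows; the random feature matrix, univariate F-test selection, and classifier settings are assumptions for demonstration, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder radiomic feature matrix: 40 patients x 100 texture features per ROI.
# Real features would come from a radiomics toolkit; here they are random numbers.
rng = np.random.default_rng(42)
X = rng.normal(size=(40, 100))
y = np.array([0] * 20 + [1] * 20)           # 20 PSP-RS vs 20 vPSP labels

model = make_pipeline(
    SelectKBest(f_classif, k=10),            # simple univariate feature selection
    ExtraTreesClassifier(n_estimators=200, random_state=0),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("Fold AUCs:", scores.round(2), "mean:", round(scores.mean(), 2))
```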
Affiliation(s)
- Noemi Pisani
- Department of Advanced Biomedical Sciences, University of Naples Federico II, 80131 Naples, Italy
- Filomena Abate
- Center for Neurodegenerative Diseases (CEMAND), Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, 84131 Salerno, Italy
- Anna Rosa Avallone
- Center for Neurodegenerative Diseases (CEMAND), Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, 84131 Salerno, Italy
- Paolo Barone
- Center for Neurodegenerative Diseases (CEMAND), Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, 84131 Salerno, Italy
- Mario Cesarelli
- Department of Engineering, University of Sannio, 82100 Benevento, Italy
- Francesco Amato
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, 80125 Naples, Italy
- Marina Picillo
- Center for Neurodegenerative Diseases (CEMAND), Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, 84131 Salerno, Italy
- Carlo Ricciardi
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, 80125 Naples, Italy.
3
Wang K, Zhu M, Boulila W, Driss M, Gadekallu TR, Chen CM, Wang L, Kumari S, Yiu SM. SeqNovo: De Novo Peptide Sequencing Prediction in IoMT via Seq2Seq. IEEE J Biomed Health Inform 2025; 29:2377-2387. [PMID: 37792659] [DOI: 10.1109/jbhi.2023.3321780]
Abstract
In the Internet of Medical Things (IoMT), de novo peptide sequencing prediction is one of the most important techniques for disease prediction, diagnosis, and treatment. Recently, deep-learning-based peptide sequencing prediction has become a new trend. However, most popular deep learning models for peptide sequencing prediction suffer from poor interpretability and a limited ability to capture long-range dependencies. To address these issues, we propose a model named SeqNovo, which combines the encoder-decoder structure of sequence-to-sequence (Seq2Seq) learning, the highly nonlinear properties of the multilayer perceptron (MLP), and the ability of the attention mechanism to capture long-range dependencies. SeqNovo uses the MLP to improve feature extraction and utilizes the attention mechanism to discover key information. A series of experiments show that SeqNovo is superior to the Seq2Seq benchmark model, DeepNovo. SeqNovo improves both the accuracy and interpretability of the predictions and is expected to support further related research.
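A minimal, generic sketch of the attention mechanism the abstract refers to (scaled dot-product attention over encoder states) is shown below; it is illustrative only and not SeqNovo's actual implementation, and the toy shapes are assumptions.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over encoder states (generic sketch)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over encoder positions
    return w @ V, w                                  # context vectors, attention weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 16))    # 5 decoder steps
K = rng.normal(size=(7, 16))    # 7 encoder states (e.g. spectrum-derived features)
V = rng.normal(size=(7, 16))
context, weights = attention(Q, K, V)
print(context.shape, weights.sum(axis=-1))           # (5, 16), each row sums to 1
```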
4
Khan MA, Shafiq U, Hamza A, Mirza AM, Baili J, AlHammadi DA, Cho HC, Chang B. A novel network-level fused deep learning architecture with shallow neural network classifier for gastrointestinal cancer classification from wireless capsule endoscopy images. BMC Med Inform Decis Mak 2025; 25:150. [PMID: 40165262] [PMCID: PMC11956435] [DOI: 10.1186/s12911-025-02966-0]
Abstract
Deep learning has significantly contributed to medical imaging and computer-aided diagnosis (CAD), providing accurate disease classification and diagnosis. However, challenges such as inter- and intra-class similarities, class imbalance, and computational inefficiencies due to numerous hyperparameters persist. This study aims to address these challenges by presenting a novel deep-learning framework for classifying and localizing gastrointestinal (GI) diseases from wireless capsule endoscopy (WCE) images. The proposed framework begins with dataset augmentation to enhance training robustness. Two novel architectures, Sparse Convolutional DenseNet201 with Self-Attention (SC-DSAN) and CNN-GRU, are fused at the network level using a depth concatenation layer, avoiding the computational costs of feature-level fusion. Bayesian Optimization (BO) is employed for dynamic hyperparameter tuning, and an Entropy-controlled Marine Predators Algorithm (EMPA) selects optimal features. These features are classified using a Shallow Wide Neural Network (SWNN) and traditional classifiers. Experimental evaluations on the Kvasir-V1 and Kvasir-V2 datasets demonstrate superior performance, achieving accuracies of 99.60% and 95.10%, respectively. The proposed framework offers improved accuracy, precision, and computational efficiency compared to state-of-the-art models. The proposed framework addresses key challenges in GI disease diagnosis, demonstrating its potential for accurate and efficient clinical applications. Future work will explore its adaptability to additional datasets and optimize its computational complexity for broader deployment.
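The network-level fusion by depth concatenation described above can be illustrated with a small sketch; the branch shapes, pooling step, and the wide hidden layer standing in for a shallow wide classifier head are assumptions, not the SC-DSAN/CNN-GRU architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Feature maps from two hypothetical branches, batch of 4, 7x7 spatial grid.
branch_a = rng.normal(size=(4, 128, 7, 7))
branch_b = rng.normal(size=(4, 64, 7, 7))

# Network-level fusion: concatenate along the channel (depth) axis rather than
# fusing flattened feature vectors.
fused = np.concatenate([branch_a, branch_b], axis=1)    # (4, 192, 7, 7)

# Global average pooling followed by one wide hidden layer as a stand-in for a
# shallow wide classifier head (layer sizes are illustrative).
pooled = fused.mean(axis=(2, 3))                        # (4, 192)
W1 = rng.normal(scale=0.05, size=(192, 512))
W2 = rng.normal(scale=0.05, size=(512, 8))              # e.g. 8 disease classes
logits = np.maximum(pooled @ W1, 0) @ W2
print(fused.shape, logits.shape)
```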
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science and Engineering, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al-Khobar, Kingdom of Saudi Arabia.
- Usama Shafiq
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Ameer Hamza
- Centre of Real Time Computer Systems, Kaunas University of Technology (KTU), Kaunas, Lithuania
- Anwar M Mirza
- Department of Computer Science and Engineering, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al-Khobar, Kingdom of Saudi Arabia
- Jamel Baili
- Department of Computer Engineering, College of Computer Science, King Khalid University, Abha, 61413, Saudi Arabia
- Dina Abdulaziz AlHammadi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Hee-Chan Cho
- HYU Center for Computational Social Science, Hanyang University, Seoul, South Korea
- Byoungchol Chang
- Department of Computer Science, Hanyang University, Seoul, 01000, Republic of Korea (South Korea).
5
Kumar Y, Cardan RA, Chang HH, Heinzman KA, Gultekin K, Goss A, McDonald A, Murdaugh D, McConathy J, Rothenberg S, Smith AD, Fiveash J, Cardenas CE. Demonstrating an Academic Core Facility for Automated Medical Image Processing and Analysis: Workflow Design and Practical Applications. Diagnostics (Basel) 2025; 15:803. [PMID: 40218152] [PMCID: PMC11988328] [DOI: 10.3390/diagnostics15070803]
Abstract
Background/Objectives: Medical research institutions are increasingly leveraging artificial intelligence (AI) to enhance the processing and analysis of medical imaging data. However, scaling AI-driven medical image analysis often requires specialized expertise and infrastructure that individual labs may lack. A centralized solution is to establish a core facility-a shared institutional resource-dedicated to Automated Medical Image Processing and Analysis (AMIPA). Methods: This technical note offers a practical roadmap for institutions to create an AI-based core facility for AMIPA, drawing on our experience in building such a resource. Results: We outline the key components for replicating a successful AMIPA core facility, including high-performance computing resources, robust AI software pipelines, data management strategies, and dedicated support personnel. Emphasis is placed on workflow automation and reproducibility, ensuring researchers can efficiently and consistently process large imaging datasets. Conclusions: By following this roadmap, institutions can accelerate AI adoption in imaging workflows and foster a shared resource that enhances the quality and productivity of medical imaging research.
Affiliation(s)
- Yogesh Kumar
- Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, AL 35233, USA; (R.A.C.); (H.-h.C.); (K.A.H.); (A.M.); (J.F.)
- Rex A. Cardan
- Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, AL 35233, USA; (R.A.C.); (H.-h.C.); (K.A.H.); (A.M.); (J.F.)
- Ho-hsin Chang
- Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, AL 35233, USA; (R.A.C.); (H.-h.C.); (K.A.H.); (A.M.); (J.F.)
- Katherine A. Heinzman
- Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, AL 35233, USA; (R.A.C.); (H.-h.C.); (K.A.H.); (A.M.); (J.F.)
- Department of Biomedical Engineering, University of Alabama at Birmingham, Birmingham, AL 35294, USA;
- Institute for Cancer Outcomes and Survivorship, University of Alabama at Birmingham, Birmingham, AL 35233, USA;
- Kadir Gultekin
- Department of Biomedical Engineering, University of Alabama at Birmingham, Birmingham, AL 35294, USA;
- Amy Goss
- Department of Nutrition Sciences, University of Alabama at Birmingham, Birmingham, AL 35233, USA;
- Andrew McDonald
- Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, AL 35233, USA; (R.A.C.); (H.-h.C.); (K.A.H.); (A.M.); (J.F.)
- Institute for Cancer Outcomes and Survivorship, University of Alabama at Birmingham, Birmingham, AL 35233, USA;
- Donna Murdaugh
- Institute for Cancer Outcomes and Survivorship, University of Alabama at Birmingham, Birmingham, AL 35233, USA;
- Department of Pediatrics, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Jonathan McConathy
- Department of Radiology, University of Alabama at Birmingham, Birmingham, AL 35294, USA; (J.M.); (S.R.); (A.D.S.)
- Steven Rothenberg
- Department of Radiology, University of Alabama at Birmingham, Birmingham, AL 35294, USA; (J.M.); (S.R.); (A.D.S.)
- Andrew D. Smith
- Department of Radiology, University of Alabama at Birmingham, Birmingham, AL 35294, USA; (J.M.); (S.R.); (A.D.S.)
- Department of Radiology, St Jude Children's Research Hospital, Memphis, TN 38105, USA
- John Fiveash
- Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, AL 35233, USA; (R.A.C.); (H.-h.C.); (K.A.H.); (A.M.); (J.F.)
- Carlos E. Cardenas
- Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, AL 35233, USA; (R.A.C.); (H.-h.C.); (K.A.H.); (A.M.); (J.F.)
6
Xie W, Chen P, Li Z, Wang X, Wang C, Zhang L, Wu W, Xiang J, Wang Y, Zhong D. A Two stage deep learning network for automated femoral segmentation in bilateral lower limb CT scans. Sci Rep 2025; 15:9198. [PMID: 40097821] [PMCID: PMC11914536] [DOI: 10.1038/s41598-025-94180-1]
Abstract
This study presents the development of a deep learning-based two-stage network designed for the efficient and precise segmentation of the femur in full lower limb CT images. The proposed network incorporates a dual-phase approach: rapid delineation of regions of interest followed by semantic segmentation of the femur. The experimental dataset comprises 100 samples obtained from a hospital, partitioned into 85 for training, 8 for validation, and 7 for testing. In the first stage, the model achieves an average Intersection over Union of 0.9671 and a mean Average Precision of 0.9656, effectively delineating the femoral region with high accuracy. During the second stage, the network attains an average Dice coefficient of 0.953, sensitivity of 0.965, specificity of 0.998, and pixel accuracy of 0.996, ensuring precise segmentation of the femur. When compared to the single-stage SegResNet architecture, the proposed two-stage model demonstrates faster convergence during training, reduced inference times, higher segmentation accuracy, and overall superior performance. Comparative evaluations against the TransUnet model further highlight the network's notable advantages in accuracy and robustness. In summary, the proposed two-stage network offers an efficient, accurate, and autonomous solution for femur segmentation in large-scale and complex medical imaging datasets. Requiring relatively modest training and computational resources, the model exhibits significant potential for scalability and clinical applicability, making it a valuable tool for advancing femoral image segmentation and supporting diagnostic workflows.
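For reference, the Dice coefficient, sensitivity, and specificity reported above follow the standard definitions sketched below; the toy masks are illustrative stand-ins, not the authors' data or code.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def sensitivity_specificity(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return tp / (tp + fn + eps), tn / (tn + fp + eps)

rng = np.random.default_rng(1)
gt = rng.random((64, 64)) > 0.6          # toy ground-truth femur mask
pr = gt.copy()
pr[:4] = ~pr[:4]                         # perturb a few rows to mimic segmentation errors
print(round(dice(pr, gt), 3), [round(v, 3) for v in sensitivity_specificity(pr, gt)])
```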
Affiliation(s)
- Wenqing Xie
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Peng Chen
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Zhigang Li
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Xiaopeng Wang
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Chenggong Wang
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Lin Zhang
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China
- Wenhao Wu
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China
- Junjie Xiang
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China
- Yiping Wang
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China.
- Da Zhong
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China.
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China.
7
Clement David-Olawade A, Olawade DB, Vanderbloemen L, Rotifa OB, Fidelis SC, Egbon E, Akpan AO, Adeleke S, Ghose A, Boussios S. AI-Driven Advances in Low-Dose Imaging and Enhancement-A Review. Diagnostics (Basel) 2025; 15:689. [PMID: 40150031] [PMCID: PMC11941271] [DOI: 10.3390/diagnostics15060689]
Abstract
The widespread use of medical imaging techniques such as X-rays and computed tomography (CT) has raised significant concerns regarding ionizing radiation exposure, particularly among vulnerable populations requiring frequent imaging. Achieving a balance between high-quality diagnostic imaging and minimizing radiation exposure remains a fundamental challenge in radiology. Artificial intelligence (AI) has emerged as a transformative solution, enabling low-dose imaging protocols that enhance image quality while significantly reducing radiation doses. This review explores the role of AI-assisted low-dose imaging, particularly in CT, X-ray, and magnetic resonance imaging (MRI), highlighting advancements in deep learning models, convolutional neural networks (CNNs), and other AI-based approaches. These technologies have demonstrated substantial improvements in noise reduction, artifact removal, and real-time optimization of imaging parameters, thereby enhancing diagnostic accuracy while mitigating radiation risks. Additionally, AI has contributed to improved radiology workflow efficiency and cost reduction by minimizing the need for repeat scans. The review also discusses emerging directions in AI-driven medical imaging, including hybrid AI systems that integrate post-processing with real-time data acquisition, personalized imaging protocols tailored to patient characteristics, and the expansion of AI applications to fluoroscopy and positron emission tomography (PET). However, challenges such as model generalizability, regulatory constraints, ethical considerations, and computational requirements must be addressed to facilitate broader clinical adoption. AI-driven low-dose imaging has the potential to revolutionize radiology by enhancing patient safety, optimizing imaging quality, and improving healthcare efficiency, paving the way for a more advanced and sustainable future in medical imaging.
Affiliation(s)
- David B. Olawade
- Department of Allied and Public Health, School of Health, Sport and Bioscience, University of East London, London E16 2RD, UK
- Department of Research and Innovation, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK;
- Department of Public Health, York St. John University, London E14 2BA, UK
- Laura Vanderbloemen
- Department of Primary Care and Public Health, Imperial College London, London SW7 2AZ, UK;
- School of Health, Sport and Bioscience, University of East London, London E16 2RD, UK
- Oluwayomi B. Rotifa
- Department of Radiology, Afe Babalola University MultiSystem Hospital, Ado-Ekiti 360102, Ekiti State, Nigeria;
- Sandra Chinaza Fidelis
- School of Nursing and Midwifery, University of Central Lancashire, Preston Campus, Preston PR1 2HE, UK;
- Eghosasere Egbon
- Department of Tissue Engineering and Regenerative Medicine, Faculty of Life Science Engineering, FH Technikum, 1200 Vienna, Austria;
- Sola Adeleke
- Guy's Cancer Centre, Guy's and St. Thomas' NHS Foundation Trust, London SE1 9RT, UK;
- School of Cancer & Pharmaceutical Sciences, King's College London, Strand, London WC2R 2LS, UK
- Aruni Ghose
- Department of Medical Oncology, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK;
- United Kingdom and Ireland Global Cancer Network, Manchester M20 4BX, UK
- Stergios Boussios
- Department of Research and Innovation, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK;
- School of Cancer & Pharmaceutical Sciences, King's College London, Strand, London WC2R 2LS, UK
- Department of Medical Oncology, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK;
- Faculty of Medicine, Health, and Social Care, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- Kent Medway Medical School, University of Kent, Canterbury CT2 7NZ, UK
- AELIA Organization, 57001 Thessaloniki, Greece
8
Xu X, Su J, Zhu R, Li K, Zhao X, Fan J, Mao F. From morphology to single-cell molecules: high-resolution 3D histology in biomedicine. Mol Cancer 2025; 24:63. [PMID: 40033282] [DOI: 10.1186/s12943-025-02240-x]
Abstract
High-resolution three-dimensional (3D) tissue analysis has emerged as a transformative innovation in the life sciences, providing detailed insights into the spatial organization and molecular composition of biological tissues. This review begins by tracing the historical milestones that have shaped the development of high-resolution 3D histology, highlighting key breakthroughs that have facilitated the advancement of current technologies. We then systematically categorize the various families of high-resolution 3D histology techniques, discussing their core principles, capabilities, and inherent limitations. These 3D histology techniques include microscopy imaging, tomographic approaches, single-cell and spatial omics, computational methods and 3D tissue reconstruction (e.g. 3D cultures and spheroids). Additionally, we explore a wide range of applications for single-cell 3D histology, demonstrating how single-cell and spatial technologies are being utilized in fields such as oncology, cardiology, neuroscience, immunology, developmental biology and regenerative medicine. Despite the remarkable progress made in recent years, the field still faces significant challenges, including high barriers to entry, issues with data robustness, ambiguous best practices for experimental design, and a lack of standardization across methodologies. This review offers a thorough analysis of these challenges and presents recommendations to surmount them, with the overarching goal of nurturing ongoing innovation and broader integration of cellular 3D tissue analysis in both biological research and clinical practice.
Affiliation(s)
- Xintian Xu
- Institute of Medical Innovation and Research, Peking University Third Hospital, Beijing, China
- Cancer Center, Peking University Third Hospital, Beijing, China
- Department of Biochemistry and Molecular Biology, Beijing Key Laboratory of Protein Posttranslational Modifications and Cell Function, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Jimeng Su
- Institute of Medical Innovation and Research, Peking University Third Hospital, Beijing, China
- Cancer Center, Peking University Third Hospital, Beijing, China
- College of Animal Science and Technology, Yangzhou University, Yangzhou, Jiangsu, China
- Rongyi Zhu
- Department of Biochemistry and Molecular Biology, Beijing Key Laboratory of Protein Posttranslational Modifications and Cell Function, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Kailong Li
- Department of Biochemistry and Molecular Biology, Beijing Key Laboratory of Protein Posttranslational Modifications and Cell Function, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Xiaolu Zhao
- State Key Laboratory of Female Fertility Promotion, Center for Reproductive Medicine, Department of Obstetrics and Gynecology, National Clinical Research Center for Obstetrics and Gynecology (Peking University Third Hospital), Key Laboratory of Assisted Reproduction (Peking University), Ministry of Education, Beijing Key Laboratory of Reproductive Endocrinology and Assisted Reproductive Technology, Peking University Third Hospital, Beijing, China.
- Jibiao Fan
- College of Animal Science and Technology, Yangzhou University, Yangzhou, Jiangsu, China.
- Fengbiao Mao
- Institute of Medical Innovation and Research, Peking University Third Hospital, Beijing, China.
- Cancer Center, Peking University Third Hospital, Beijing, China.
- Beijing Key Laboratory for Interdisciplinary Research in Gastrointestinal Oncology (BLGO), Beijing, China.
9
Cheng Z, Wang T, Zhu J, He Y, Liu S, Li MY, Lu H, Wen X, Lee J, Liu S, Mao S. All-Inorganic Lead-Free Cs₂AgBiBr₆/ZnO Artificial Retina Synapse Based on Photoelectric Synergistic Dual-Mechanism for Neuromorphic Computing. Small 2025; 21:e2411129. [PMID: 39895204] [DOI: 10.1002/smll.202411129]
Abstract
The adaptive learning capability of optoelectronic synaptic hardware holds promising application prospects in next-generation artificial intelligence, yet the development of biomimetic retinal perception is severely hampered by three crucial issues: balancing excitation and inhibition, non-volatile multi-state storage, and optimal energy consumption. In this work, a novel Cs2AgBiBr6/ZnO non-volatile optoelectronic synapse is proposed and successfully programmed with optical excitation and electrical inhibition based on a dual mechanism: the lead-free perovskite Cs2AgBiBr6 guarantees an abundant photogenerated carrier concentration, while carrier capture and release occur in the ZnO layer, and together these processes modulate various synaptic plasticity behaviors depending on the stimulus. Consequently, multi-bit storage is attained with the dual-mechanism non-volatile memory (DNVM) as a function of consecutive light spikes. The energy consumption of the DNVM is 1.85 nJ for a single light spike, and an ultra-low 13.8 fJ for a single electrical pulse, which approximately meets the energy-consumption requirement of a biological synaptic event. The performance of the DNVM is further evaluated with Pavlov's classical conditioning experiment and a visual hardware system, offering an exciting paradigm for implementing on-chip adaptive visual perception and neuromorphic computing.
Affiliation(s)
- Zhenpeng Cheng
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Tianle Wang
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Junyan Zhu
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Yaqi He
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Shijie Liu
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Ming-Yu Li
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Haifei Lu
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Xiaoyan Wen
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Jihoon Lee
- Department of Electronic Engineering, College of Electronics and Information, Kwangwoon University, Nowon-gu, Seoul, 01897, Republic of Korea
- Sisi Liu
- School of Physics and Mechanics, Wuhan University of Technology, Wuhan, 430070, China
- Sui Mao
- Institute of Hybrid Materials, National Center of International Research for Hybrid Materials Technology, National Base of International Science & Technology Cooperation, College of Materials Science and Engineering, Qingdao University, Qingdao, 266071, China
10
Hossain MM, Ahmed MM, Nafi AAN, Islam MR, Ali MS, Haque J, Miah MS, Rahman MM, Islam MK. A novel hybrid ViT-LSTM model with explainable AI for brain stroke detection and classification in CT images: A case study of Rajshahi region. Comput Biol Med 2025; 186:109711. [PMID: 39847947] [DOI: 10.1016/j.compbiomed.2025.109711]
Abstract
Computed tomography (CT) scans play a key role in the diagnosis of stroke, a leading cause of morbidity and mortality worldwide. However, interpreting these scans is often challenging, necessitating automated solutions for timely and accurate diagnosis. This research proposed a novel hybrid model that integrates a Vision Transformer (ViT) and a Long Short Term Memory (LSTM) to accurately detect and classify stroke characteristics using CT images. The ViT identifies essential features from CT images, while LSTM processes sequential information generated by the ViT, adept at capturing crucial temporal dependencies for understanding patterns and context in sequential data. Moreover, our approach addresses class imbalance issues in stroke datasets by utilizing advanced strategies to improve model robustness. To ensure clinical relevance, Explainable Artificial Intelligence (XAI) methods, including attention maps, SHAP, and LIME, were incorporated to provide reliable and interpretable predictions. The proposed model was evaluated using the primary BrSCTHD-2023 dataset, collected from Rajshahi Medical College Hospital, achieving top accuracies of 73.80%, 91.61%, 93.50%, and 94.55% with the SGD, RMSProp, Adam, and AdamW optimizers, respectively. To further validate and generalize the model, it was also tested on the Kaggle brain stroke dataset, where it achieved an impressive accuracy of 96.61%. The proposed ViT-LSTM model significantly outperformed traditional CNNs and ViT models, demonstrating superior diagnostic performance and generalizability. This study advances automated stroke diagnosis by combining deep learning innovations, domain expertise, and enhanced interpretability to support clinical decision-making, providing reliable diagnostic solutions.
Affiliation(s)
- Md Maruf Hossain
- Department of Biomedical Engineering, Islamic University, Kushtia, 7003, Bangladesh; Bio-Imaging Research Laboratory, Islamic University, Kushtia, 7003, Bangladesh.
- Md Mahfuz Ahmed
- Department of Biomedical Engineering, Islamic University, Kushtia, 7003, Bangladesh; Bio-Imaging Research Laboratory, Islamic University, Kushtia, 7003, Bangladesh.
- Abdullah Al Nomaan Nafi
- Department of Information and Communication Technology, Islamic University, Kushtia, 7003, Bangladesh.
- Md Rakibul Islam
- Department of Information and Communication Technology, Islamic University, Kushtia, 7003, Bangladesh; Bio-Imaging Research Laboratory, Islamic University, Kushtia, 7003, Bangladesh; Department of Computer Science and Engineering, Northern University Bangladesh, Dhaka, 1230, Bangladesh.
- Md Shahin Ali
- Department of Biomedical Engineering, Islamic University, Kushtia, 7003, Bangladesh; Bio-Imaging Research Laboratory, Islamic University, Kushtia, 7003, Bangladesh.
- Jahurul Haque
- Department of Biomedical Engineering, Islamic University, Kushtia, 7003, Bangladesh.
- Md Sipon Miah
- Department of Information and Communication Technology, Islamic University, Kushtia, 7003, Bangladesh.
- Md Mahbubur Rahman
- Department of Information and Communication Technology, Islamic University, Kushtia, 7003, Bangladesh.
- Md Khairul Islam
- Department of Biomedical Engineering, Islamic University, Kushtia, 7003, Bangladesh; Bio-Imaging Research Laboratory, Islamic University, Kushtia, 7003, Bangladesh.
11
Mdletshe S, Wang A. Enhancing medical imaging education: integrating computing technologies, digital image processing and artificial intelligence. J Med Radiat Sci 2025; 72:148-155. [PMID: 39508409] [PMCID: PMC11909706] [DOI: 10.1002/jmrs.837]
Abstract
The rapid advancement of technology has brought significant changes to various fields, including medical imaging (MI). This discussion paper explores the integration of computing technologies (e.g. Python and MATLAB), digital image processing (e.g. image enhancement, segmentation and three-dimensional reconstruction) and artificial intelligence (AI) into the undergraduate MI curriculum. By examining current educational practices, gaps and limitations that hinder the development of future-ready MI professionals are identified. A comprehensive curriculum framework is proposed, incorporating essential computational skills, advanced image processing techniques and state-of-the-art AI tools, such as large language models like ChatGPT. The proposed curriculum framework aims to improve the quality of MI education significantly and better equip students for future professional practice and challenges while enhancing diagnostic accuracy, improving workflow efficiency and preparing students for the evolving demands of the MI field.
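As an example of the kind of Python-based image-enhancement exercise such a curriculum might include, a global histogram-equalisation routine is sketched below; the synthetic low-contrast image is a stand-in, not course material from the paper.

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalisation for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_m = np.ma.masked_equal(cdf, 0)                     # ignore empty bins
    cdf_m = (cdf_m - cdf_m.min()) * 255 / (cdf_m.max() - cdf_m.min())
    lut = np.ma.filled(cdf_m, 0).astype(np.uint8)          # intensity lookup table
    return lut[img]

rng = np.random.default_rng(0)
img = rng.normal(loc=120, scale=10, size=(128, 128)).clip(0, 255).astype(np.uint8)
out = histogram_equalize(img)
print(float(img.std()), float(out.std()))                  # contrast increases
```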
Affiliation(s)
- Sibusiso Mdletshe
- Department of Anatomy and Medical Imaging, Faculty of Medical and Health Sciences, The University of Auckland, Auckland, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Medical Imaging Research Centre, Faculty of Medical and Health Sciences, The University of Auckland, Auckland, New Zealand
- Centre for Co-Created Ageing Research, The University of Auckland, Auckland, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland, New Zealand
12
Alblehai F, El-Latif AAA, Pławiak P, Abd-El-Atty B. Cascading quantum walks with Chebyshev map for designing a robust medical image encryption algorithm. Sci Rep 2025; 15:6685. [PMID: 39994319] [PMCID: PMC11850827] [DOI: 10.1038/s41598-025-90725-6]
Abstract
The secure storage and transmission of healthcare data have become a critical concern due to their increasing use in the diagnosis and treatment of various diseases. Medical images contain confidential patient information, and unauthorized access to or modification of these images can have severe consequences. Chaotic maps are commonly used for constructing medical image cipher systems, but with the growth of quantum technology, these systems may become vulnerable. To address this issue, a new medical image cipher algorithm based on cascading quantum walk with Chebyshev map has been presented in this paper. The proposed system has been tested and found to have high levels of security and efficiency, with UACI, NPCR, Chi-square, and global information entropy values averaging at 33.48095%, 99.62984%, 248.92128, and 7.99923, respectively.
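The NPCR, UACI, and global information entropy figures quoted above follow standard definitions, sketched below on random stand-in images rather than the authors' cipher output.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI (%) between two 8-bit cipher images, standard definitions."""
    c1, c2 = c1.astype(np.int16), c2.astype(np.int16)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

def global_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# Two independent random images approximate the ideal behaviour of cipher images
# whose plaintexts differ in a single pixel (NPCR ~99.6%, UACI ~33.5%, entropy ~8).
rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(npcr_uaci(c1, c2), round(global_entropy(c1), 4))
```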
Affiliation(s)
- Fahad Alblehai
- Computer Science Department, Community College, King Saud University, 11437, Riyadh, Saudi Arabia
- Ahmed A Abd El-Latif
- Jadara University Research Center, Jadara University, Irbid, Jordan.
- Mathematics and Computer Science Department, Faculty of Science, Menoufia University, Shebin El-Koom, 32511, Egypt.
- Paweł Pławiak
- Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Warszawska 24, 31-155, Krakow, Poland
- Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100, Gliwice, Poland
- Bassem Abd-El-Atty
- Department of Computer Science, Faculty of Computers and Information, Luxor University, Luxor, 85957, Egypt.
13
Wang L, Lin N, Chen W, Xiao H, Zhang Y, Sha Y. Deep learning models for differentiating three sinonasal malignancies using multi-sequence MRI. BMC Med Imaging 2025; 25:56. [PMID: 39984860] [PMCID: PMC11846208] [DOI: 10.1186/s12880-024-01517-9]
Abstract
PURPOSE To develop MRI-based deep learning (DL) models for distinguishing sinonasal squamous cell carcinoma (SCC), adenoid cystic carcinoma (ACC) and olfactory neuroblastoma (ONB), and to evaluate whether the DL models could improve the diagnostic performance of a senior radiologist (SR) and a junior radiologist (JR). METHODS This retrospective analysis consisted of 465 patients (229 sinonasal SCCs, 128 ACCs and 108 ONBs). The training and validation cohorts included 325 and 47 patients, respectively, and the independent external testing cohort consisted of 93 patients. MRI sequences included T2-weighted imaging (T2WI), contrast-enhanced T1-weighted imaging (CE-T1WI) and the apparent diffusion coefficient (ADC). We analyzed the conventional MRI features to choose independent predictors and built a conventional MRI model. We then compared the macro- and micro-averaged areas under the curve (AUCs) of different sequences and different DL networks to formulate the best DL model [artificial intelligence (AI) model scheme]. With AI assistance, we observed the diagnostic performances of the SR and JR. The diagnostic efficacies of the SR and JR were assessed by accuracy, recall, precision, F1-score and confusion matrices. RESULTS The independent predictors from conventional MRI included intensity on T2WI and intracranial invasion of sinonasal malignancies. With the ExtraTrees (ET) classifier, the conventional MRI model achieved an AUC of 78.8%. Among the DL models, the ResNet101 network performed better than ResNet50 and DenseNet121, especially for the mean fusion sequence (macro-AUC = 0.892, micro-AUC = 0.875, accuracy = 0.810), and also performed well on the ADC sequence (macro-AUC = 0.872, micro-AUC = 0.874, accuracy = 0.814). Grad-CAM showed that the DL models focused on the solid components of the lesions. With assistance from the best AI scheme (the ResNet101 mean-sequence-based DL model), the diagnostic performances of the SR (accuracy = 0.957, average recall = 0.962, precision = 0.955, F1-score = 0.957) and the JR (accuracy = 0.925, average recall = 0.917, precision = 0.931, F1-score = 0.923) were significantly improved. CONCLUSION The ResNet101 mean-sequence-based DL model could effectively differentiate between sinonasal SCC, ACC and ONB and improved the diagnostic performance of both senior and junior radiologists.
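For illustration, macro- and micro-averaged AUCs of the kind reported above can be computed as in the sketch below; the three-class labels and probabilities are random stand-ins, not the study's predictions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=93)              # 3 toy classes standing in for SCC / ACC / ONB
y_score = rng.dirichlet(np.ones(3), size=93)      # predicted class probabilities

y_bin = label_binarize(y_true, classes=[0, 1, 2])
macro_auc = roc_auc_score(y_bin, y_score, average="macro")  # per-class AUCs, averaged
micro_auc = roc_auc_score(y_bin, y_score, average="micro")  # all class decisions pooled
print(round(macro_auc, 3), round(micro_auc, 3))
```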
Affiliation(s)
- Luxi Wang
- Shanghai Institute of Medical Imaging, Fudan University, Shanghai, China
- Department of Radiology, Eye & ENT Hospital of Fudan University, 83 Fenyang Road, Shanghai, 200031, China
- Naier Lin
- Department of Radiology, Eye & ENT Hospital of Fudan University, 83 Fenyang Road, Shanghai, 200031, China
- Wei Chen
- Shanghai Institute of Medical Imaging, Fudan University, Shanghai, China
- Department of Radiology, Eye & ENT Hospital of Fudan University, 83 Fenyang Road, Shanghai, 200031, China
- Hanyu Xiao
- Shanghai Institute of Medical Imaging, Fudan University, Shanghai, China
- Department of Radiology, Eye & ENT Hospital of Fudan University, 83 Fenyang Road, Shanghai, 200031, China
- Yiyin Zhang
- Department of Radiology, Eye & ENT Hospital of Fudan University, 83 Fenyang Road, Shanghai, 200031, China
- Yan Sha
- Shanghai Institute of Medical Imaging, Fudan University, Shanghai, China.
- Department of Radiology, Eye & ENT Hospital of Fudan University, 83 Fenyang Road, Shanghai, 200031, China.
14
Ma W, Wang Y, Ma N, Ding Y. Diagnosis of major depressive disorder using a novel interpretable GCN model based on resting state fMRI. Neuroscience 2025; 566:124-131. [PMID: 39730018] [DOI: 10.1016/j.neuroscience.2024.12.045]
Abstract
The diagnosis and analysis of major depressive disorder (MDD) face intractable challenges such as dataset limitations and clinical variability. Resting-state functional magnetic resonance imaging (Rs-fMRI) reflects fluctuations in brain activity at rest and can reveal the interrelationships, functional connections, and network characteristics among patients' brain regions. In this paper, a brain functional connectivity matrix is constructed using Pearson correlation based on the characteristics of multi-site Rs-fMRI data and a brain atlas, and an adaptive propagation operator graph convolutional network (APO-GCN) model is designed. The APO-GCN model automatically adjusts the propagation operator in each hidden layer according to the data features to control the expressive power of the model. By adaptively learning effective information in the graph, the model significantly improves its ability to capture complex graph structural patterns. Experimental results on Rs-fMRI data from 1601 participants (830 MDD and 771 HC) across 16 sites of the REST-meta-MDD project show that the APO-GCN achieved a classification accuracy of 91.8%, outperforming state-of-the-art classifier methods. The classification process is driven by multiple significant brain regions, and the method further reveals functional connectivity abnormalities between these regions, which are important biomarkers for classification. Notably, the brain regions identified by the classifier and the networks involved are consistent with existing research, suggesting that the pathogenesis of depression may be related to dysfunction of multiple brain networks.
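A minimal sketch of building a Pearson-correlation functional connectivity matrix (and a thresholded adjacency matrix of the kind a GCN could take as input) is shown below; the atlas size and threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def functional_connectivity(time_series):
    """Pearson-correlation FC matrix from Rs-fMRI time series (timepoints x regions)."""
    fc = np.corrcoef(time_series, rowvar=False)
    np.fill_diagonal(fc, 0.0)                     # drop self-connections
    return fc

def adjacency_from_fc(fc, threshold=0.3):
    """Thresholded binary adjacency matrix usable as graph input."""
    return (np.abs(fc) >= threshold).astype(float)

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 116))                  # 200 timepoints, 116 atlas regions
fc = functional_connectivity(ts)
adj = adjacency_from_fc(fc)
print(fc.shape, int(adj.sum() // 2))              # matrix size, number of undirected edges
```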
Affiliation(s)
- Wenzheng Ma
- School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, 100048, China
- Yu Wang
- School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, 100048, China.
- Ningxin Ma
- School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, 100048, China
- Yankai Ding
- School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, 100048, China
15
Li S, Omer AM, Duan Y, Fang Q, Hamad KO, Fernandez M, Lin R, Wen J, Wang Y, Cai J, Guo G, Wu Y, Yi F, Meng J, Mao Z, Duan Y. Deep-Optimal Leucorrhea Detection Through Fluorescent Benchmark Data Analysis. J Imaging Inform Med 2025. [PMID: 39904942] [DOI: 10.1007/s10278-025-01428-3]
Abstract
Vaginitis, medically described as irritation and/or inflammation of the vagina, is a common condition that poses a significant health risk for women and necessitates precise diagnostic methods. Presently, conventional techniques for examining vaginal discharge involve the use of wet mounts and Gram staining to identify vaginal diseases. In this research, we utilized fluorescent staining, which enables distinct visualization of cellular and pathogenic components, each exhibiting unique color characteristics when exposed to the same light source. We established a large, challenging multi-fluorescence leucorrhea dataset benchmark comprising 8 categories with a total of 343K high-quality labels. We also present a robust lightweight deep-learning network, LRNet. It includes a lightweight feature extraction network that employs Ghost modules, a feature pyramid network that incorporates deformable convolution in the neck, and a single detection head. The evaluation results indicate that this detection network surpasses conventional networks, reducing model parameters by up to 91.4% and floating-point operations (FLOPs) by 74%. The deep-optimal leucorrhea detection capability of LRNet significantly enhances its ability to detect various crucial indicators related to vaginal health.
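The Ghost-module idea behind such lightweight backbones can be sketched as a generic PyTorch-style block, shown below; the channel ratio, kernel sizes, and layer composition are illustrative assumptions and not LRNet's actual configuration.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost-style block: a few 'intrinsic' feature maps from a standard convolution,
    plus cheap depthwise 'ghost' maps, concatenated (illustrative hyperparameters)."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio
        new_ch = out_ch - init_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, new_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),       # depthwise, hence "cheap"
            nn.BatchNorm2d(new_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 16, 64, 64)
print(GhostModule(16, 32)(x).shape)   # torch.Size([1, 32, 64, 64])
```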
Affiliation(s)
- Shuang Li
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Akam M Omer
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Yuping Duan
- Qingdao Central Hospital, University of Health and Rehabilitation Sciences, 127 Siliu South Road, Qingdao, Shandong, China
- Qiang Fang
- School of Marine Engineering Equipment, Zhejiang Ocean University, Zhoushan, 316022, China.
- Kamyar Othman Hamad
- School of Automation, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Mauricio Fernandez
- School of Computer Science and Engineering, Central South University, 932 Lushan South Road, Changsha, 410083, China
- Ruiqing Lin
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Jianghua Wen
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Yanping Wang
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Jingang Cai
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Guangchao Guo
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Yingying Wu
- Shenzhen United Medical Technology Co., LTD, Nanshan District, Block 6, Liuxian Culture Park, Shenzhen, China
- Fang Yi
- Department of Geriatric Neurology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Jianqiao Meng
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China
- Zhiqun Mao
- Department of PET Imaging Center, Hunan Provincial People's Hospital, Changsha, Hunan, China.
- Yuxia Duan
- School of Physics, Central South University, 932 Lushan South Road, Changsha, 410083, Hunan, China.
16
Mosch R, Alevizakos V, Ströbele DA, Schiller M, von See C. Exploring Augmented Reality for Dental Implant Surgery: Feasibility of Using Smartphones as Navigation Tools. Clin Exp Dent Res 2025; 11:e70110. [PMID: 40045547] [PMCID: PMC11882750] [DOI: 10.1002/cre2.70110]
Abstract
OBJECTIVES Dental implant placement requires exceptional precision to ensure functional and esthetic success. Traditional guidance methods, such as static drilling guides and dynamic navigation systems, have improved accuracy but are limited by high costs, rigidity, and reliance on specialized hardware. This study introduces an augmented reality (AR) system using consumer smartphones for real-time navigation in dental implant placement. The system aims to provide a cost-effective, eco-friendly alternative to conventional methods by integrating virtual planning with physical models. MATERIAL AND METHODS A modified dental training model with removable parallel pins served as the physical component. Implant positions were digitally planned and color-coded using 3D scanning and modeling software, then integrated into an AR application built with Unity Engine. A smartphone's camera was calibrated to project virtual overlays onto the physical model. In vitro testing evaluated alignment accuracy, drill guidance, and system performance under controlled lighting conditions. RESULTS The AR system successfully aligned virtual overlays with the physical model, providing effective visual guidance for implant drill positioning. Operators maintained planned trajectories, demonstrating the feasibility of AR as an alternative to static and dynamic guidance systems. Challenges included the system's sensitivity to stable lighting and visual cues. CONCLUSIONS This AR-based approach offers an accessible and sustainable solution for modern dental implantology. Future research will focus on quantitative accuracy assessments, AI integration for enhanced performance, and clinical trials to validate real-world applicability. AR technology has the potential to transform dental practices by improving outcomes while reducing costs and environmental impact.
Affiliation(s)
- Richard Mosch
- Department of Dentistry, Faculty of Medicine and Dentistry, Research Center for Digital Technologies in Dentistry and CAD/CAM, Danube Private University, Krems an der Donau, Austria
- Vasilios Alevizakos
- Department of Dentistry, Faculty of Medicine and Dentistry, Research Center for Digital Technologies in Dentistry and CAD/CAM, Danube Private University, Krems an der Donau, Austria
- Dragan Alexander Ströbele
- Department of Dentistry, Faculty of Medicine and Dentistry, Research Center for Digital Technologies in Dentistry and CAD/CAM, Danube Private University, Krems an der Donau, Austria
- Marcus Schiller
- Department of Oral and Maxillofacial Surgery, Hannover Medical School, Hannover, Germany
- Constantin von See
- Department of Dentistry, Faculty of Medicine and Dentistry, Research Center for Digital Technologies in Dentistry and CAD/CAM, Danube Private University, Krems an der Donau, Austria
17
Kabir MM, Rahman A, Hasan MN, Mridha MF. Computer vision algorithms in healthcare: Recent advancements and future challenges. Comput Biol Med 2025; 185:109531. [PMID: 39675214] [DOI: 10.1016/j.compbiomed.2024.109531]
Abstract
Computer vision has emerged as a promising technology with numerous applications in healthcare. This systematic review provides an overview of advancements and challenges associated with computer vision in healthcare. The review highlights the application areas where computer vision has made significant strides, including medical imaging, surgical assistance, remote patient monitoring, and telehealth. Additionally, it addresses the challenges related to data quality, privacy, model interpretability, and integration with existing healthcare systems. Ethical and legal considerations, such as patient consent and algorithmic bias, are also discussed. The review concludes by identifying future directions and opportunities for research, emphasizing the potential impact of computer vision on healthcare delivery and outcomes. Overall, this systematic review underscores the importance of understanding both the advancements and challenges in computer vision to facilitate its responsible implementation in healthcare.
Collapse
Affiliation(s)
- Md Mohsin Kabir
- School of Innovation, Design and Engineering, Mälardalens University, Västerås, 722 20, Sweden.
| | - Ashifur Rahman
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Mirpur-2, Dhaka, 1216, Bangladesh.
| | - Md Nahid Hasan
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, United States.
| | - M F Mridha
- Department of Computer Science, American International University-Bangladesh, Dhaka, 1229, Dhaka, Bangladesh.
| |
Collapse
|
18
|
Al‐Qudimat AR, Fares ZE, Elaarag M, Osman M, Al‐Zoubi RM, Aboumarzouk OM. Advancing Medical Research Through Artificial Intelligence: Progressive and Transformative Strategies: A Literature Review. Health Sci Rep 2025; 8:e70200. [PMID: 39980823 PMCID: PMC11839394 DOI: 10.1002/hsr2.70200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 07/23/2024] [Accepted: 10/28/2024] [Indexed: 02/22/2025] Open
Abstract
Background and Aims Artificial intelligence (AI) has become integral to medical research, impacting various aspects such as data analysis, writing assistance, and publishing. This paper explores the multifaceted influence of AI on the process of writing medical research papers, encompassing data analysis, ethical considerations, writing assistance, and publishing efficiency. Methods The review was conducted following the PRISMA guidelines; a comprehensive search was performed in Scopus, PubMed, EMBASE, and MEDLINE databases for research publications on artificial intelligence in medical research published up to October 2023. Results AI facilitates the writing process by generating drafts, offering grammar and style suggestions, and enhancing manuscript quality through advanced models like ChatGPT. Ethical concerns regarding content ownership and potential biases in AI-generated content underscore the need for collaborative efforts among researchers, publishers, and AI creators to establish ethical standards. Moreover, AI significantly influences data analysis in healthcare, optimizing outcomes and patient care, particularly in fields such as obstetrics and gynecology and pharmaceutical research. The application of AI in publishing, ranging from peer review to manuscript quality control and journal matching, underscores its potential to streamline and enhance the entire research and publication process. Overall, while AI presents substantial benefits, ongoing research and ethical guidelines are essential for its responsible integration into the evolving landscape of medical research and publishing. Conclusion The integration of AI in medical research has revolutionized efficiency and innovation, impacting data analysis, writing assistance, publishing, and other areas. While AI tools offer significant benefits, ethical considerations such as biases and content ownership must be addressed. Ongoing research and collaborative efforts are crucial to ensure responsible and transparent AI implementation in the dynamic landscape of medical research and publishing.
Collapse
Affiliation(s)
- Ahmad R. Al‐Qudimat
- Department of Surgery, Hamad Medical Corporation, Surgical Research Section, Doha, Qatar
- Department of Public Health, College of Health Sciences, QU‐Health, Qatar University, Doha, Qatar
| | - Zainab E. Fares
- Department of Surgery, Hamad Medical Corporation, Surgical Research Section, Doha, Qatar
| | - Mai Elaarag
- Department of Surgery, Hamad Medical Corporation, Surgical Research Section, Doha, Qatar
| | - Maha Osman
- Department of Public Health, College of Health Sciences, QU‐Health, Qatar University, Doha, Qatar
| | - Raed M. Al‐Zoubi
- Department of Surgery, Hamad Medical Corporation, Surgical Research Section, Doha, Qatar
- Department of Biomedical Sciences, College of Health Sciences, QU‐Health, Qatar University, Doha, Qatar
- Department of Chemistry, College of Science, Jordan University of Science and Technology, Irbid, Jordan
| | - Omar M. Aboumarzouk
- Department of Surgery, Hamad Medical Corporation, Surgical Research Section, Doha, Qatar
- School of Medicine, Dentistry and Nursing, The University of Glasgow, Glasgow, UK
| |
Collapse
|
19
|
Sundaresan V, Lehman JF, Maffei C, Haber SN, Yendiki A. Self-supervised segmentation and characterization of fiber bundles in anatomic tracing data. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2023.09.30.560310. [PMID: 37873366 PMCID: PMC10592842 DOI: 10.1101/2023.09.30.560310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2023]
Abstract
Anatomic tracing is the gold standard tool for delineating brain connections and for validating more recently developed imaging approaches such as diffusion MRI tractography. A key step in the analysis of data from tracer experiments is the careful, manual charting of fiber trajectories on histological sections. This is a very time-consuming process, which limits the amount of annotated tracer data that are available for validation studies. Thus, there is a need to accelerate this process by developing a method for computer-assisted segmentation. Such a method must be robust to the common artifacts in tracer data, including variations in the intensity of stained axons and background, as well as spatial distortions introduced by sectioning and mounting the tissue. The method should also achieve satisfactory performance using limited manually charted data for training. Here we propose the first deep-learning method, with a self-supervised loss function, for segmentation of fiber bundles on histological sections from macaque brains that have received tracer injections. We address the limited availability of manual labels with a semi-supervised training technique that takes advantage of unlabeled data to improve performance. We also introduce anatomic and across-section continuity constraints to improve accuracy. We show that our method can be trained on manually charted sections from a single case and segment unseen sections from different cases, with a true positive rate of ~0.80. We further demonstrate the utility of our method by quantifying the density of fiber bundles as they travel through different white-matter pathways. We show that fiber bundles originating in the same injection site have different levels of density when they travel through different pathways, a finding that can have implications for microstructure-informed tractography methods. The code for our method is available at https://github.com/v-sundaresan/fiberbundle_seg_tracing.
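As a concrete illustration of the reported true positive rate of ~0.80, the snippet below sketches how sensitivity can be computed between a predicted fiber-bundle mask and a manually charted one. The array shapes and toy masks are assumptions, not the authors' evaluation code.

```python
import numpy as np

def true_positive_rate(pred_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Sensitivity of a predicted binary mask against a manually charted one."""
    pred = pred_mask.astype(bool)
    manual = manual_mask.astype(bool)
    tp = np.logical_and(pred, manual).sum()    # charted pixels recovered by the model
    fn = np.logical_and(~pred, manual).sum()   # charted pixels the model missed
    return tp / (tp + fn) if (tp + fn) > 0 else float("nan")

# toy example with hypothetical 2D masks
manual = np.zeros((64, 64), dtype=bool); manual[20:40, 20:40] = True
pred = np.zeros_like(manual);            pred[22:40, 20:38] = True
print(f"TPR = {true_positive_rate(pred, manual):.2f}")
```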
Collapse
Affiliation(s)
- Vaanathi Sundaresan
- Department of Computational and Data Sciences, Indian Institute of Science, Bengaluru, Karnataka 560012, India
| | - Julia F. Lehman
- Department of Pharmacology and Physiology, University of Rochester School of Medicine, Rochester, NY, United States
| | - Chiara Maffei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
| | - Suzanne N. Haber
- Department of Pharmacology and Physiology, University of Rochester School of Medicine, Rochester, NY, United States
- McLean Hospital, Belmont, MA, United States
| | - Anastasia Yendiki
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
| |
Collapse
|
20
|
Pugar JA, Kim J, Khabaz K, Yuan K, Pocivavsek L. Thoracic Aortic Shape: A Data-Driven Scale Space Approach. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2025:2024.08.30.24312310. [PMID: 39974021 PMCID: PMC11838945 DOI: 10.1101/2024.08.30.24312310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 02/21/2025]
Abstract
The scale and resolution of anatomical features extracted from medical CT images are crucial for advancing clinical decision-making tools. While traditional metrics, such as maximum aortic diameter, have long been the standard for classifying aortic diseases, these one-dimensional measures often fall short in capturing the rich geometrical nuances available in progressively advancing imaging modalities. Recent advancements in computational methods and imaging have introduced more sophisticated geometric signatures, in particular scale-invariant measures of aortic shape. Among these, the normalized fluctuation in total integrated Gaussian curvature (δK̃) over a surface mesh model of the aorta has emerged as a particularly promising metric. However, there exists a critical tradeoff between noise reduction and shape signal preservation within the scale space parameters - namely, smoothing intensity, meshing density, and partitioning size. Through a comprehensive analysis of over 1200 unique scale space constructions derived from a cohort of 185 aortic dissection patients, this work pinpoints optimal resolution scales at which shape variations are most strongly correlated with surgical outcomes. Importantly, these findings emphasize the pivotal role of a secondary discretization step, which consistently yields the most robust signal when scaled to approximately 1 cm. This approach enables the development of models that are not only clinically effective but also inherently resilient to biases introduced by patient population heterogeneity. By focusing on the appropriate intermediate scales for analysis, this study paves the way for more precise and reliable tools in medical imaging, ultimately contributing to improved patient outcomes in cardiovascular surgery.
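The quantity δK̃ above is a normalized fluctuation of integrated Gaussian curvature over patches of a surface mesh. The sketch below illustrates one plausible way to compute an angle-defect estimate of integrated curvature per vertex and a patch-wise fluctuation; the patch labeling and the specific normalization are assumptions and do not reproduce the authors' scale-space construction.

```python
import numpy as np

def vertex_gaussian_curvature(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Angle-defect estimate of integrated Gaussian curvature at each mesh vertex."""
    angle_sum = np.zeros(len(vertices))
    for tri in faces:
        pts = vertices[tri]
        for k in range(3):
            a, b, c = pts[k], pts[(k + 1) % 3], pts[(k + 2) % 3]
            u, v = b - a, c - a
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            angle_sum[tri[k]] += np.arccos(np.clip(cosang, -1.0, 1.0))
    return 2.0 * np.pi - angle_sum   # integrated curvature contribution per vertex

def curvature_fluctuation(vertex_K: np.ndarray, patch_labels: np.ndarray) -> float:
    """Normalized fluctuation of patch-wise integrated curvature (one plausible definition)."""
    patch_K = np.array([vertex_K[patch_labels == p].sum()
                        for p in np.unique(patch_labels)])
    return patch_K.std() / np.abs(patch_K).mean()

# Hypothetical usage: `verts` (N, 3) and `tris` (M, 3) from an aortic surface mesh,
# with `labels` (N,) assigning each vertex to a ~1 cm patch from a secondary discretization:
#   K = vertex_gaussian_curvature(verts, tris)
#   dK_tilde = curvature_fluctuation(K, labels)
```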
Collapse
Affiliation(s)
| | - Junsung Kim
- University of Chicago, Chicago, IL 60637 USA
| | | | - Karen Yuan
- University of Chicago, Chicago, IL 60637 USA
| | | |
Collapse
|
21
|
Gan W, Zhao R, Ma Y, Ning X. TSF-MDD: A Deep Learning Approach for Electroencephalography-Based Diagnosis of Major Depressive Disorder with Temporal-Spatial-Frequency Feature Fusion. Bioengineering (Basel) 2025; 12:95. [PMID: 40001616 PMCID: PMC11851794 DOI: 10.3390/bioengineering12020095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2024] [Revised: 01/13/2025] [Accepted: 01/15/2025] [Indexed: 02/27/2025] Open
Abstract
Major depressive disorder (MDD) is a prevalent mental illness characterized by persistent sadness, loss of interest in activities, and significant functional impairment. It poses severe risks to individuals' physical and psychological well-being. The development of automated diagnostic systems for MDD is essential to improve diagnostic accuracy and efficiency. Electroencephalography (EEG) has been extensively utilized in MDD diagnostic research. However, studies employing deep learning methods still face several challenges, such as difficulty in extracting effective information from EEG signals and risks of data leakage due to experimental designs. These issues result in limited generalization capabilities when models are tested on unseen individuals, thereby restricting their practical application. In this study, we propose a novel deep learning approach, termed TSF-MDD, which integrates temporal, spatial, and frequency-domain information. TSF-MDD first applies a data reconstruction scheme to obtain a four-dimensional temporal-spatial-frequency representation of EEG signals. These data are then processed by a model based on 3D-CNN and CapsNet, enabling comprehensive feature extraction across domains. Finally, a subject-independent data partitioning strategy is employed during training and testing to eliminate data leakage. The proposed approach achieves an accuracy of 92.1%, precision of 90.0%, recall of 94.9%, and F1-score of 92.4%, respectively, on the Mumtaz2016 public dataset. The results demonstrate that TSF-MDD exhibits excellent generalization performance.
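The subject-independent partitioning emphasized above can be illustrated with scikit-learn's GroupShuffleSplit, which keeps every segment from a given subject on one side of the split and thereby avoids the data leakage the abstract warns about. The array shapes and subject counts below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# X: EEG segments, y: MDD/control labels, subject_ids: one ID per segment (all hypothetical)
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 19, 256))      # 600 segments, 19 channels, 256 samples each
y = rng.integers(0, 2, size=600)
subject_ids = rng.integers(0, 30, size=600)

# Keep every subject's segments entirely in train OR test, never both
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=subject_ids))

assert set(subject_ids[train_idx]).isdisjoint(subject_ids[test_idx])
print(len(train_idx), "training segments,", len(test_idx), "test segments")
```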
Collapse
Affiliation(s)
- Wei Gan
- School of Instrumentation Science and Optoelectronic Engineering, Beihang University, Beijing 100191, China; (W.G.); (R.Z.); (Y.M.)
| | - Ruochen Zhao
- School of Instrumentation Science and Optoelectronic Engineering, Beihang University, Beijing 100191, China; (W.G.); (R.Z.); (Y.M.)
| | - Yujie Ma
- School of Instrumentation Science and Optoelectronic Engineering, Beihang University, Beijing 100191, China; (W.G.); (R.Z.); (Y.M.)
| | - Xiaolin Ning
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, Hangzhou 310000, China
- Hefei National Laboratory, Gaoxin District, Hefei 230088, China
| |
Collapse
|
22
|
Mohanarajan M, Salunke PP, Arif A, Iglesias Gonzalez PM, Ospina D, Benavides DS, Amudha C, Raman KK, Siddiqui HF. Advancements in Machine Learning and Artificial Intelligence in the Radiological Detection of Pulmonary Embolism. Cureus 2025; 17:e78217. [PMID: 40026993 PMCID: PMC11872007 DOI: 10.7759/cureus.78217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/29/2025] [Indexed: 03/05/2025] Open
Abstract
Pulmonary embolism (PE) is a clinically challenging diagnosis whose presentation varies from silent to life-threatening symptoms. Timely diagnosis of the condition depends on clinical assessment, D-dimer testing, and radiological imaging. Computed tomography pulmonary angiogram (CTPA) is considered the gold standard imaging modality, although some cases can be missed due to reader dependency, resulting in adverse patient outcomes. Hence, it is crucial to implement faster and more precise diagnostic strategies to help clinicians diagnose and treat PE patients promptly and mitigate morbidity and mortality. Machine learning (ML) and artificial intelligence (AI) are newly emerging tools in the medical field, including in radiological imaging, potentially improving diagnostic efficacy. Our review of the studies showed that computer-aided detection (CAD) and AI tools displayed similar or superior sensitivity and specificity in identifying PE on CTPA compared to radiologists. Several tools demonstrated potential in identifying minor PE on radiological scans, showing promising ability to aid clinicians in substantially reducing missed cases. However, it is imperative to design sophisticated tools and conduct large clinical trials to integrate AI into everyday clinical settings and establish guidelines for its ethical applicability. ML and AI can also potentially help physicians formulate individualized management strategies to enhance patient outcomes.
Collapse
Affiliation(s)
| | | | - Ali Arif
- Medicine, Dow University of Health Sciences, Karachi, PAK
| | | | - David Ospina
- Internal Medicine, Universidad de los Andes, Bogotá, COL
| | | | - Chaithanya Amudha
- Medicine and Surgery, Saveetha Medical College and Hospital, Chennai, IND
| | - Kumareson K Raman
- Cardiology, Nottingham University Hospitals National Health Service (NHS) Trust, Nottingham, GBR
| | - Humza F Siddiqui
- Internal Medicine, Jinnah Postgraduate Medical Centre, Karachi, PAK
| |
Collapse
|
23
|
Ahuja S, Zaheer S. Advancements in pathology: Digital transformation, precision medicine, and beyond. J Pathol Inform 2025; 16:100408. [PMID: 40094037 PMCID: PMC11910332 DOI: 10.1016/j.jpi.2024.100408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2024] [Revised: 10/30/2024] [Accepted: 11/12/2024] [Indexed: 01/02/2025] Open
Abstract
Pathology, a cornerstone of medical diagnostics and research, is undergoing a revolutionary transformation fueled by digital technology, molecular biology advancements, and big data analytics. Digital pathology converts conventional glass slides into high-resolution digital images, enhancing collaboration and efficiency among pathologists worldwide. Integrating artificial intelligence (AI) and machine learning (ML) algorithms with digital pathology improves diagnostic accuracy, particularly in complex diseases like cancer. Molecular pathology, facilitated by next-generation sequencing (NGS), provides comprehensive genomic, transcriptomic, and proteomic insights into disease mechanisms, guiding personalized therapies. Immunohistochemistry (IHC) plays a pivotal role in biomarker discovery, refining disease classification and prognostication. Precision medicine integrates pathology's molecular findings with individual genetic, environmental, and lifestyle factors to customize treatment strategies, optimizing patient outcomes. Telepathology extends diagnostic services to underserved areas through remote digital pathology. Pathomics leverages big data analytics to extract meaningful insights from pathology images, advancing our understanding of disease pathology and therapeutic targets. Virtual autopsies employ non-invasive imaging technologies to revolutionize forensic pathology. These innovations promise earlier diagnoses, tailored treatments, and enhanced patient care. Collaboration across disciplines is essential to fully realize the transformative potential of these advancements in medical practice and research.
Collapse
Affiliation(s)
- Sana Ahuja
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
| | - Sufian Zaheer
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
| |
Collapse
|
24
|
Okunlola FO, Adetuyi TG, Olajide PA, Okunlola AR, Adetuyi BO, Adeyemo-Eleyode VO, Akomolafe AA, Yunana N, Baba F, Nwachukwu KC, Oyewole OA, Adetunji CO, Shittu OB, Ginikanwa EG. Biomedical image characterization and radio genomics using machine learning techniques. MINING BIOMEDICAL TEXT, IMAGES AND VISUAL FEATURES FOR INFORMATION RETRIEVAL 2025:397-421. [DOI: 10.1016/b978-0-443-15452-2.00019-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
|
25
|
Galanty M, Luitse D, Noteboom SH, Croon P, Vlaar AP, Poell T, Sanchez CI, Blanke T, Išgum I. Assessing the documentation of publicly available medical image and signal datasets and their impact on bias using the BEAMRAD tool. Sci Rep 2024; 14:31846. [PMID: 39738436 PMCID: PMC11686007 DOI: 10.1038/s41598-024-83218-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2024] [Accepted: 12/12/2024] [Indexed: 01/02/2025] Open
Abstract
Medical datasets are vital for advancing Artificial Intelligence (AI) in healthcare. Yet biases in these datasets on which deep-learning models are trained can compromise reliability. This study investigates biases stemming from dataset-creation practices. Drawing on existing guidelines, we first developed a BEAMRAD tool to assess the documentation of public Magnetic Resonance Imaging (MRI); Color Fundus Photography (CFP), and Electrocardiogram (ECG) datasets. In doing so, we provide an overview of the biases that may emerge due to inadequate dataset documentation. Second, we examine the current state of documentation for public medical images and signal data. Our research reveals that there is substantial variance in the documentation of image and signal datasets, even though guidelines have been developed in medical imaging. This indicates that dataset documentation is subject to individual discretionary decisions. Furthermore, we find that aspects such as hardware and data acquisition details are commonly documented, while information regarding data annotation practices, annotation error quantification, or data limitations are not consistently reported. This risks having considerable implications for the abilities of data users to detect potential sources of bias through these respective aspects and develop reliable and robust models that can be adapted for clinical practice.
Collapse
Affiliation(s)
- Maria Galanty
- Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands.
- Department of Biomedical Engineering and Physics, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands.
| | - Dieuwertje Luitse
- Department of Media Studies, Faculty of Humanities, University of Amsterdam, Amsterdam, The Netherlands
| | - Sijm H Noteboom
- Department of Intensive Care, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
| | - Philip Croon
- Department of Cardiology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Section of Cardiovascular Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, United States
| | - Alexander P Vlaar
- Department of Intensive Care, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
| | - Thomas Poell
- Department of Media Studies, Faculty of Humanities, University of Amsterdam, Amsterdam, The Netherlands
| | - Clara I Sanchez
- Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Department of Biomedical Engineering and Physics, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
| | - Tobias Blanke
- Department of Media Studies, Faculty of Humanities, University of Amsterdam, Amsterdam, The Netherlands
| | - Ivana Išgum
- Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Department of Biomedical Engineering and Physics, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Department of Radiology and Nuclear Medicine, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
26
|
Yan S, Xiong F, Xin Y, Zhou Z, Liu W. Automated assessment of endometrial receptivity for screening recurrent pregnancy loss risk using deep learning-enhanced ultrasound and clinical data. Front Physiol 2024; 15:1404418. [PMID: 39777360 PMCID: PMC11703864 DOI: 10.3389/fphys.2024.1404418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2024] [Accepted: 12/11/2024] [Indexed: 01/11/2025] Open
Abstract
Background Recurrent pregnancy loss (RPL) poses significant challenges in clinical management due to an unclear etiology in over half the cases. Traditional screening methods, including ultrasonographic evaluation of endometrial receptivity (ER), have been debated for their efficacy in identifying high-risk individuals. Despite the potential of artificial intelligence, notably deep learning (DL), to enhance medical imaging analysis, its application in ER assessment for RPL risk stratification remains underexplored. Objective This study aims to leverage DL techniques in the analysis of routine clinical and ultrasound examination data to refine ER assessment within RPL management. Methods Employing a retrospective, controlled design, this study included 346 individuals with unexplained RPL and 369 controls to assess ER. Participants were allocated into training (n = 485) and testing (n = 230) datasets for model construction and performance evaluation, respectively. DL techniques were applied to analyze conventional grayscale ultrasound images and clinical data, utilizing a pre-trained ResNet-50 model for imaging analysis and TabNet for tabular data interpretation. The model outputs were calibrated to generate probabilistic scores, representing the risk of RPL. Both comparative analyses and ablation studies were performed using ResNet-50, TabNet, and a combined fusion model. These were evaluated against other state-of-the-art DL and machine learning (ML) models, with the results validated against the testing dataset. Results The comparative analysis demonstrated that the ResNet-50 model outperformed other DL architectures, achieving the highest accuracy and the lowest Brier score. Similarly, the TabNet model exceeded the performance of traditional ML models. Ablation studies demonstrated that the fusion model, which integrates both data modalities and is presented through a nomogram, provided the most accurate predictions, with an area under the curve of 0.853. The radiological DL model made a more significant contribution to the overall performance of the fusion model, underscoring its superior predictive capability. Conclusion This investigation demonstrates the superiority of a DL-enhanced fusion model that integrates routine ultrasound and clinical data for accurate stratification of RPL risk, offering significant advancements over traditional methods.
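As a rough illustration of the fusion strategy described above, the sketch below combines precomputed image and tabular embeddings into a single risk score with a small PyTorch head. The feature dimensions are assumptions, and plain linear layers stand in for the pretrained ResNet-50 and TabNet branches used in the study.

```python
import torch
import torch.nn as nn

class LateFusionRiskModel(nn.Module):
    """Concatenate image and tabular embeddings and output a probability-like risk score."""
    def __init__(self, img_dim: int = 2048, tab_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.img_head = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.tab_head = nn.Sequential(nn.Linear(tab_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, 1)

    def forward(self, img_feat: torch.Tensor, tab_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.img_head(img_feat), self.tab_head(tab_feat)], dim=1)
        return torch.sigmoid(self.classifier(fused))   # RPL risk score in (0, 1)

# hypothetical usage with embeddings from frozen image/tabular encoders
model = LateFusionRiskModel()
img_feat = torch.randn(8, 2048)   # e.g. pooled ResNet-50 features
tab_feat = torch.randn(8, 32)     # e.g. encoded clinical variables
risk = model(img_feat, tab_feat)  # shape (8, 1)
```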
Collapse
Affiliation(s)
- Shanling Yan
- Department of Ultrasound, Deyang People’s Hospital, Deyang, Sichuan, China
| | - Fei Xiong
- Department of Ultrasound, Deyang People’s Hospital, Deyang, Sichuan, China
| | - Yanfen Xin
- Department of Ultrasound, Deyang People’s Hospital, Deyang, Sichuan, China
| | - Zhuyu Zhou
- Department of Ultrasound, Deyang People’s Hospital, Deyang, Sichuan, China
| | - Wanqing Liu
- Department of Obstetrics and Gynecology, Deyang People’s Hospital, Deyang, Sichuan, China
| |
Collapse
|
27
|
Malik S, Das R, Thongtan T, Thompson K, Dbouk N. AI in Hepatology: Revolutionizing the Diagnosis and Management of Liver Disease. J Clin Med 2024; 13:7833. [PMID: 39768756 PMCID: PMC11678868 DOI: 10.3390/jcm13247833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2024] [Revised: 12/13/2024] [Accepted: 12/19/2024] [Indexed: 01/11/2025] Open
Abstract
The integration of artificial intelligence (AI) into hepatology is revolutionizing the diagnosis and management of liver diseases amidst a rising global burden of conditions like metabolic-associated steatotic liver disease (MASLD). AI harnesses vast datasets and complex algorithms to enhance clinical decision making and patient outcomes. AI's applications in hepatology span a variety of conditions, including autoimmune hepatitis, primary biliary cholangitis, primary sclerosing cholangitis, MASLD, hepatitis B, and hepatocellular carcinoma. It enables early detection, predicts disease progression, and supports more precise treatment strategies. Despite its transformative potential, challenges remain, including data integration, algorithm transparency, and computational demands. This review examines the current state of AI in hepatology, exploring its applications, limitations, and the opportunities it presents to enhance liver health and care delivery.
Collapse
Affiliation(s)
- Sheza Malik
- Department of Internal Medicine, Rochester General Hospital, Rochester, NY 14621, USA;
| | - Rishi Das
- Division of Digestive Diseases, Emory University School of Medicine, Atlanta, GA 30322, USA; (R.D.); (T.T.)
- Department of Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA;
| | - Thanita Thongtan
- Division of Digestive Diseases, Emory University School of Medicine, Atlanta, GA 30322, USA; (R.D.); (T.T.)
- Department of Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA;
| | - Kathryn Thompson
- Department of Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA;
| | - Nader Dbouk
- Division of Digestive Diseases, Emory University School of Medicine, Atlanta, GA 30322, USA; (R.D.); (T.T.)
- Department of Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA;
- Emory Transplant Center, Emory University School of Medicine, Atlanta, GA 30322, USA
| |
Collapse
|
28
|
Ajani SN, Mulla RA, Limkar S, Ashtagi R, Wagh SK, Pawar ME. RETRACTED ARTICLE: DLMBHCO: design of an augmented bioinspired deep learning-based multidomain body parameter analysis via heterogeneous correlative body organ analysis. Soft comput 2024; 28:635. [PMID: 37362266 PMCID: PMC10248994 DOI: 10.1007/s00500-023-08613-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/23/2023] [Indexed: 06/28/2023]
Affiliation(s)
- Samir N. Ajani
- Department of Computer Science & Engineering (Data Science), St. Vincent Pallotti College of Engineering and Technology, Nagpur, Maharashtra, India
| | - Rais Allauddin Mulla
- Department of Computer Engineering, Vasantdada Patil Pratishthan College of Engineering and Visual Arts, Mumbai, Maharashtra, India
| | - Suresh Limkar
- Department of Artificial Intelligence and Data Science, AISSMS Institute of Information Technology, Pune, Maharashtra, India
| | - Rashmi Ashtagi
- Department of Computer Engineering, Vishwakarma Institute of Technology, Bibwewadi, Pune, 411037, Maharashtra, India
| | - Sharmila K. Wagh
- Department of Computer Engineering, Modern Education Society’s College of Engineering, Pune, Maharashtra, India
| | - Mahendra Eknath Pawar
- Department of Computer Engineering, Vasantdada Patil Pratishthan College of Engineering and Visual Arts, Mumbai, Maharashtra, India
| |
Collapse
|
29
|
Fathi M, Eshraghi R, Behzad S, Tavasol A, Bahrami A, Tafazolimoghadam A, Bhatt V, Ghadimi D, Gholamrezanezhad A. Potential strength and weakness of artificial intelligence integration in emergency radiology: a review of diagnostic utilizations and applications in patient care optimization. Emerg Radiol 2024; 31:887-901. [PMID: 39190230 DOI: 10.1007/s10140-024-02278-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2024] [Accepted: 08/08/2024] [Indexed: 08/28/2024]
Abstract
Artificial intelligence (AI) and its increasing integration into healthcare have created both new opportunities and challenges in the practice of radiology and medical imaging. Recent advancements in AI technology have allowed for greater workplace efficiency, higher diagnostic accuracy, and overall improvements in patient care. Limitations of AI, such as data imbalances, the unclear nature of AI algorithms, and the challenges in detecting certain diseases, make its widespread adoption difficult. This review article presents cases involving the use of AI models to diagnose intracranial hemorrhage, spinal fractures, and rib fractures, while discussing how certain factors such as type, location, size, presence of artifacts, calcification, and post-surgical changes affect AI model performance and accuracy. While the use of artificial intelligence has the potential to improve the practice of emergency radiology, it is important to address its limitations to maximize its advantages while ensuring the safety of patients overall.
Collapse
Affiliation(s)
- Mobina Fathi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences, Tehran, Iran
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Reza Eshraghi
- Student Research Committee, Kashan University of Medical Science, Kashan, Iran
| | | | - Arian Tavasol
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Ashkan Bahrami
- Student Research Committee, Kashan University of Medical Science, Kashan, Iran
| | | | - Vivek Bhatt
- School of Medicine, University of California, Riverside, CA, USA
| | - Delaram Ghadimi
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Ali Gholamrezanezhad
- Keck School of Medicine of University of Southern California, Los Angeles, CA, USA.
- Department of Radiology, Division of Emergency Radiology, Keck School of Medicine, Cedars Sinai Hospital, University of Southern California, 1500 San Pablo Street, Los Angeles, CA, 90033, USA.
| |
Collapse
|
30
|
Zhu Z. Advancements in automated classification of chronic obstructive pulmonary disease based on computed tomography imaging features through deep learning approaches. Respir Med 2024; 234:107809. [PMID: 39299523 DOI: 10.1016/j.rmed.2024.107809] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/05/2024] [Revised: 09/16/2024] [Accepted: 09/17/2024] [Indexed: 09/22/2024]
Abstract
Chronic Obstructive Pulmonary Disease (COPD) represents a global public health issue that significantly impairs patients' quality of life and overall health. As one of the primary causes of chronic respiratory diseases and global mortality, effective diagnosis and classification of COPD are crucial for clinical management. Pulmonary function tests (PFTs) are standard for diagnosing COPD, yet their accuracy is influenced by patient compliance and other factors, and they struggle to detect early disease pathologies. Furthermore, the complexity of COPD pathological changes poses additional challenges for clinical diagnosis, increasing the difficulty for physicians in practice. Recently, deep learning (DL) technologies have demonstrated significant potential in medical image analysis, particularly for the diagnosis and classification of COPD. By analyzing key radiological features such as airway alterations, emphysema, and vascular characteristics in Computed Tomography (CT) scan images, DL enhances diagnostic accuracy and efficiency, providing more precise treatment plans for COPD patients. This article reviews the latest research advancements in DL methods based on principal radiological features of COPD for its classification and discusses the advantages, challenges, and future research directions of DL in this field, aiming to provide new perspectives for the personalized management and treatment of COPD.
Collapse
Affiliation(s)
- Zirui Zhu
- School of Medicine, Xiamen University, Xiamen 361102, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, 361102, China.
| |
Collapse
|
31
|
Karnan N, Francis J, Vijayvargiya I, Rubino Tan C. Analyzing the Effectiveness of AI-Generated Patient Education Materials: A Comparative Study of ChatGPT and Google Gemini. Cureus 2024; 16:e74398. [PMID: 39723279 PMCID: PMC11669264 DOI: 10.7759/cureus.74398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/22/2024] [Indexed: 12/28/2024] Open
Abstract
OBJECTIVE The study aims to compare ChatGPT- and Google Gemini-generated patient education guides regarding claustrophobia during MRI, mammography screening, and MR safe and unsafe items and the importance of knowing what items can be carried into an MR room. METHODS The study utilized ChatGPT 3.5 and Google Gemini to create patient education guides concerning claustrophobia during MRI, mammography screening, and MR safe and unsafe items. A Flesch-Kincaid calculator was used to evaluate readability and ease of understanding. QuillBot (QuillBot, Inc., Chicago, USA) was used to generate a similarity score to evaluate possible plagiarism. To assess the scientific reliability of the AI-generated responses, we utilized a modified DISCERN score. R Studio 4.3.2 (The R Foundation for Statistical Computing, Vienna, Austria) was used for statistical analyses, with unpaired t-tests used to determine statistical significance between variables. RESULTS The average word counts for ChatGPT and Google Gemini were 468.7±132.07 and 328.7±163.65, respectively. The mean number of sentences was 35.67±18.15 for ChatGPT and 30.33±12.22 for Google Gemini. Ease of readability for ChatGPT responses was 36.30±7.01 and for Google Gemini 46.77±4.96. The similarity scores for the ChatGPT responses were 0.50±0.62 and for Google Gemini 9.43±6.20. The reliability score was evaluated at 2.67±0.25 for ChatGPT and 2.67±0.58 for Google Gemini. CONCLUSION The guides generated by ChatGPT and Google Gemini showed no statistically significant difference in word count, average words per sentence, average syllables per word, grade-level comprehension score, or scientific reliability. However, the ease score was significantly greater for the ChatGPT response compared to Google Gemini. In addition, the similarity score was much higher in Google Gemini than in ChatGPT responses.
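For readers unfamiliar with the readability metric used above, the sketch below shows a crude Flesch Reading Ease computation (with a rough vowel-group syllable heuristic rather than the dedicated calculator used in the study) and an unpaired t-test with SciPy. The per-guide scores are hypothetical placeholders.

```python
import re
from scipy.stats import ttest_ind

def count_syllables(word: str) -> int:
    """Very rough vowel-group heuristic; real tools use pronunciation dictionaries."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

# hypothetical per-guide ease scores for the two chatbots
chatgpt_scores = [35.1, 38.0, 35.8]
gemini_scores = [45.9, 52.3, 42.1]
t, p = ttest_ind(chatgpt_scores, gemini_scores, equal_var=False)  # unpaired t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```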
Collapse
Affiliation(s)
- Nithin Karnan
- Internal Medicine, K.A.P. Viswanatham Government Medical College, Tiruchirappalli, Tiruchirappalli, IND
| | - Jobin Francis
- Emergency Medicine, Aneurin Bevan University Health Board, Newport, GBR
| | - Ishan Vijayvargiya
- Internal Medicine, Sir Seewoosagar Ramgoolam Medical College, University of Mauritius, Belle Rive, MUS
| | | |
Collapse
|
32
|
Wang G, Shanker S, Nag A, Lian Y, John D. ECG Biometric Authentication Using Self-Supervised Learning for IoT Edge Sensors. IEEE J Biomed Health Inform 2024; 28:6606-6618. [PMID: 39250357 DOI: 10.1109/jbhi.2024.3455803] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/11/2024]
Abstract
Wearable Internet of Things (IoT) devices are gaining ground for continuous physiological data acquisition and health monitoring. These physiological signals can be used for security applications to achieve continuous authentication and user convenience due to passive data acquisition. This paper investigates an electrocardiogram (ECG)-based biometric user authentication system using features derived from a Convolutional Neural Network (CNN) and self-supervised contrastive learning. Contrastive learning enables us to use large unlabeled datasets to train the model and establish its generalizability. We propose approaches enabling the CNN encoder to extract appropriate features that distinguish the user from other subjects. When evaluated using the PTB ECG database with 290 subjects, the proposed technique achieved an authentication accuracy of 99.15%. To test its generalizability, we applied the model to two new datasets, the MIT-BIH Arrhythmia Database and the ECG-ID Database, achieving over 98.5% accuracy without any modifications. Furthermore, we show that repeating the authentication step three times can increase accuracy to nearly 100% for both PTBDB and ECGIDDB. This paper also presents model optimizations for embedded device deployment, which makes the system more relevant to real-world scenarios. To deploy our model in IoT edge sensors, we optimized the model complexity by applying quantization and pruning. The optimized model achieves 98.67% accuracy on PTBDB, with a 0.48% accuracy loss while requiring only 62.6% of the CPU cycles of the unoptimized model. An accuracy-vs-time-complexity tradeoff analysis is performed, and results are presented for different optimization levels.
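The self-supervised contrastive training mentioned above can be illustrated with a generic NT-Xent-style loss over two augmented views of ECG segment embeddings. This is a standard formulation offered as a sketch, not the authors' exact objective, and the embedding sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over two augmented views of a batch of ECG segment embeddings."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)           # (2n, d), unit norm
    sim = z @ z.t() / temperature                                 # scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                    # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                          # positives are the paired views

# hypothetical usage: two augmentations of the same ECG segments passed through a CNN encoder
z_view1 = torch.randn(16, 128)   # embeddings of augmentation 1
z_view2 = torch.randn(16, 128)   # embeddings of augmentation 2
loss = nt_xent_loss(z_view1, z_view2)
```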
Collapse
|
33
|
Xu Y, Quan R, Xu W, Huang Y, Chen X, Liu F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering (Basel) 2024; 11:1034. [PMID: 39451409 PMCID: PMC11505408 DOI: 10.3390/bioengineering11101034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2024] [Revised: 10/08/2024] [Accepted: 10/11/2024] [Indexed: 10/26/2024] Open
Abstract
Medical image segmentation plays a critical role in accurate diagnosis and treatment planning, enabling precise analysis across a wide range of clinical tasks. This review begins by offering a comprehensive overview of traditional segmentation techniques, including thresholding, edge-based methods, region-based approaches, clustering, and graph-based segmentation. While these methods are computationally efficient and interpretable, they often face significant challenges when applied to complex, noisy, or variable medical images. The central focus of this review is the transformative impact of deep learning on medical image segmentation. We delve into prominent deep learning architectures such as Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), U-Net, Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Autoencoders (AEs). Each architecture is analyzed in terms of its structural foundation and specific application to medical image segmentation, illustrating how these models have enhanced segmentation accuracy across various clinical contexts. Finally, the review examines the integration of deep learning with traditional segmentation methods, addressing the limitations of both approaches. These hybrid strategies offer improved segmentation performance, particularly in challenging scenarios involving weak edges, noise, or inconsistent intensities. By synthesizing recent advancements, this review provides a detailed resource for researchers and practitioners, offering valuable insights into the current landscape and future directions of medical image segmentation.
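As one concrete instance of the traditional thresholding family surveyed above, the snippet below implements Otsu's method in NumPy on a toy bimodal intensity distribution. It is a generic illustration, not code from any paper in the review.

```python
import numpy as np

def otsu_threshold(image: np.ndarray, bins: int = 256) -> float:
    """Return the intensity threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0

    w0 = np.cumsum(hist)                    # class-0 probability up to each bin
    w1 = 1.0 - w0
    mu0 = np.cumsum(hist * centers)         # unnormalized cumulative class-0 mean
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# toy bimodal "image" intensities
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 5000)])
t = otsu_threshold(img)
mask = img > t          # binary segmentation of the bright class
```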
Collapse
Affiliation(s)
- Yan Xu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK; (Y.X.); (R.Q.); (W.X.)
| | - Rixiang Quan
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK; (Y.X.); (R.Q.); (W.X.)
| | - Weiting Xu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK; (Y.X.); (R.Q.); (W.X.)
| | - Yi Huang
- Bristol Medical School, University of Bristol, Bristol BS8 1UD, UK;
| | - Xiaolong Chen
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD, UK;
| | - Fengyuan Liu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK; (Y.X.); (R.Q.); (W.X.)
| |
Collapse
|
34
|
Balel Y. ScholarGPT's performance in oral and maxillofacial surgery. JOURNAL OF STOMATOLOGY, ORAL AND MAXILLOFACIAL SURGERY 2024; 126:102114. [PMID: 39389541 DOI: 10.1016/j.jormas.2024.102114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2024] [Revised: 09/23/2024] [Accepted: 10/07/2024] [Indexed: 10/12/2024]
Abstract
OBJECTIVE The purpose of this study is to evaluate the performance of Scholar GPT in answering technical questions in the field of oral and maxillofacial surgery and to conduct a comparative analysis with the results of a previous study that assessed the performance of ChatGPT. MATERIALS AND METHODS Scholar GPT was accessed via ChatGPT (www.chatgpt.com) on March 20, 2024. A total of 60 technical questions (15 each on impacted teeth, dental implants, temporomandibular joint disorders, and orthognathic surgery) from our previous study were used. Scholar GPT's responses were evaluated using a modified Global Quality Scale (GQS). The questions were randomized before scoring using an online randomizer (www.randomizer.org). A single researcher performed the evaluations at three different times, three weeks apart, with each evaluation preceded by a new randomization. In cases of score discrepancies, a fourth evaluation was conducted to determine the final score. RESULTS Scholar GPT performed well across all technical questions, with an average GQS score of 4.48 (SD=0.93). Comparatively, ChatGPT's average GQS score in previous study was 3.1 (SD=1.492). The Wilcoxon Signed-Rank Test indicated a statistically significant higher average score for Scholar GPT compared to ChatGPT (Mean Difference = 2.00, SE = 0.163, p < 0.001). The Kruskal-Wallis Test showed no statistically significant differences among the topic groups (χ² = 0.799, df = 3, p = 0.850, ε² = 0.0135). CONCLUSION Scholar GPT demonstrated a generally high performance in technical questions within oral and maxillofacial surgery and produced more consistent and higher-quality responses compared to ChatGPT. The findings suggest that GPT models based on academic databases can provide more accurate and reliable information. Additionally, developing a specialized GPT model for oral and maxillofacial surgery could ensure higher quality and consistency in artificial intelligence-generated information.
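The paired comparison described above can be reproduced in outline with SciPy's Wilcoxon signed-rank test. The paired GQS scores below are hypothetical placeholders, not the study's data.

```python
from scipy.stats import wilcoxon

# hypothetical paired GQS scores (the same 10 questions answered by both models)
scholar_gpt = [5, 4, 5, 5, 4, 5, 3, 5, 4, 5]
chatgpt     = [3, 2, 4, 3, 3, 4, 2, 4, 3, 3]

stat, p = wilcoxon(scholar_gpt, chatgpt)   # paired, non-parametric comparison
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
```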
Collapse
Affiliation(s)
- Yunus Balel
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Sivas Cumhuriyet University, Sivas 58000, Turkiye.
| |
Collapse
|
35
|
Patra A, Biswas P, Behera SK, Barpanda NK, Sethy PK, Nanthaamornphong A. Transformative insights: Image-based breast cancer detection and severity assessment through advanced AI techniques. JOURNAL OF INTELLIGENT SYSTEMS 2024; 33. [DOI: 10.1515/jisys-2024-0172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2025] Open
Abstract
In the realm of image-based breast cancer detection and severity assessment, this study delves into the revolutionary potential of sophisticated artificial intelligence (AI) techniques. By investigating image processing, machine learning (ML), and deep learning (DL), the research illuminates their combined impact on transforming breast cancer diagnosis. This integration offers insights into early identification and precise characterization of cancers. With a foundation in 125 research articles, this article presents a comprehensive overview of the current state of image-based breast cancer detection. Synthesizing the transformative role of AI, including image processing, ML, and DL, the review explores how these technologies collectively reshape the landscape of breast cancer diagnosis and severity assessment. An essential aspect highlighted is the synergy between advanced image processing methods and ML algorithms. This combination facilitates the automated examination of medical images, which is crucial for detecting minute anomalies indicative of breast cancer. The utilization of complex neural networks for feature extraction and pattern recognition in DL models further enhances diagnostic precision. Beyond diagnostic improvements, the abstract underscores the substantial influence of AI-driven methods on breast cancer treatment. The integration of AI not only increases diagnostic precision but also opens avenues for individualized treatment planning, marking a paradigm shift toward personalized medicine in breast cancer care. However, challenges persist, with issues related to data quality and interpretability requiring continued research efforts. Looking forward, the abstract envisions future directions for breast cancer identification and diagnosis, emphasizing the adoption of explainable AI techniques and global collaboration for data sharing. These initiatives promise to propel the field into a new era characterized by enhanced efficiency and precision in breast cancer care.
Collapse
Affiliation(s)
- Ankita Patra
- Department of Electronics, Sambalpur University , Burla , Odisha, 768019 , India
| | - Preesat Biswas
- Department of Electronics and Telecommunication Engineering, GEC Jagdalpur , C.G., 494001 , India
| | - Santi Kumari Behera
- Department of Computer Science and Engineering, VSSUT , Burla , Odisha, 768018 , India
| | | | - Prabira Kumar Sethy
- Department of Electronics, Sambalpur University , Burla , Odisha, 768019 , India
| | - Aziz Nanthaamornphong
- College of Computing, Prince of Songkla University, Phuket Campus , Phuket 83120 , Thailand
| |
Collapse
|
36
|
Li X, Li Z, Hu T, Long M, Ma X, Huang J, Liu Y, Yalikun Y, Liu S, Wang D, Wu J, Mei L, Lei C. MSGM: An Advanced Deep Multi-Size Guiding Matching Network for Whole Slide Histopathology Images Addressing Staining Variation and Low Visibility Challenges. IEEE J Biomed Health Inform 2024; 28:6019-6030. [PMID: 38913517 DOI: 10.1109/jbhi.2024.3417937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/26/2024]
Abstract
Matching whole slide histopathology images to provide comprehensive information on homologous tissues is beneficial for cancer diagnosis. However, the challenge arises with giga-pixel whole slide images (WSIs) when aiming for high-accuracy matching. Learning-based methods struggle to generalize well to large-size WSIs, necessitating the integration of traditional matching methods to enhance accuracy as the size increases. In this paper, we propose a multi-size guiding matching method applicable to high-accuracy requirements. Specifically, we design a multiscale texture learning network, called TDescNet, that trains 64 × 64 × 256 and 256 × 256 × 128 convolution layers as C64 and C256 descriptors to overcome staining variation and low visibility challenges. Furthermore, we develop the 3D-ring descriptor using sparse keypoints to support the description of large-size WSIs. Finally, we employ C64, C256, and 3D-ring descriptors to progressively guide refined local matching, utilizing geometric consistency to identify correct matching results. Experiments show that when matching WSIs of size 4096 × 4096 pixels, our average matching error is 123.48 μm and the success rate is 93.02% in 43 cases. Notably, our method achieves an average improvement of 65.52 μm in matching accuracy compared to recent state-of-the-art methods, with enhancements ranging from 36.27 μm to 131.66 μm. We thus achieve high-fidelity whole-slide image matching that overcomes staining variation and low visibility challenges, enabling assistance in comprehensive cancer diagnosis through matched WSIs.
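The descriptor-guided matching stage described above can be illustrated, in highly simplified form, by mutual nearest-neighbor matching between two sets of keypoint descriptors. The snippet below is a generic sketch with random descriptors; it omits the multi-size guidance and geometric-consistency verification that the paper relies on.

```python
import numpy as np

def mutual_nearest_neighbors(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    """Return index pairs (i, j) where a_i and b_j are each other's nearest descriptor."""
    # pairwise squared Euclidean distances between the two descriptor sets
    d2 = (np.sum(desc_a**2, axis=1)[:, None]
          + np.sum(desc_b**2, axis=1)[None, :]
          - 2.0 * desc_a @ desc_b.T)
    nn_ab = np.argmin(d2, axis=1)              # best b for each a
    nn_ba = np.argmin(d2, axis=0)              # best a for each b
    keep = nn_ba[nn_ab] == np.arange(len(desc_a))
    return np.stack([np.arange(len(desc_a))[keep], nn_ab[keep]], axis=1)

# hypothetical descriptors extracted at sparse keypoints of two WSI tiles
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(500, 128))
desc_b = rng.normal(size=(450, 128))
matches = mutual_nearest_neighbors(desc_a, desc_b)   # (n_matches, 2) index pairs
```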
Collapse
|
37
|
Muhunzi D, Kitambala L, Mashauri HL. Big data analytics in the healthcare sector: Opportunities and challenges in developing countries. A literature review. Health Informatics J 2024; 30:14604582241294217. [PMID: 39434249 DOI: 10.1177/14604582241294217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2024]
Abstract
Background: Despite ongoing efforts to digitalize the healthcare sector in developing countries, full adoption of big data analytics in healthcare settings is yet to be attained. Exploring the opportunities and challenges encountered is essential for designing and implementing effective interventional strategies. Objective: To explore opportunities and challenges towards integrating big data analytics technologies in the healthcare industry in developing countries. Methodology: This was a narrative review study design. A literature search was conducted on different databases, including PubMed, ScienceDirect, MEDLINE, Scopus, and Google Scholar. Articles with predetermined keywords and written in English were included. Results: Big data analytics finds application in population health management and clinical decision-support systems even in developing countries. The major challenges to integrating big data analytics in the healthcare sector in developing countries include fragmentation of healthcare data and lack of interoperability; data security, privacy, and confidentiality concerns; limited resources; inadequate regulatory and policy frameworks for governing big data analytics technologies; and limited reliable power and internet infrastructure. Conclusion: Digitalization of healthcare delivery in developing countries faces several significant challenges. However, the integration of big data analytics can potentially open new avenues for enhancing healthcare delivery with cost-effective benefits.
Collapse
Affiliation(s)
- David Muhunzi
- Department of Internal Medicine, Muhimbili University of Health and Allied Sciences(MUHAS), Dar es Salaam, Tanzania
| | - Lucy Kitambala
- Department of Internal Medicine, Muhimbili University of Health and Allied Sciences(MUHAS), Dar es Salaam, Tanzania
| | - Harold L Mashauri
- Department of Epidemiology, Institute of Public Health, Kilimanjaro Christian Medical University College, Moshi, Tanzania
- Department of Internal Medicine, Kilimanjaro Christian Medical University College, Moshi, Tanzania
| |
Collapse
|
38
|
Apostolidis G, Kakouri A, Dimaridis I, Vasileiou E, Gerasimou I, Charisis V, Hadjidimitriou S, Lazaridis N, Germanidis G, Hadjileontiadis L. A web-based platform for studying the impact of artificial intelligence in video capsule endoscopy. Health Informatics J 2024; 30:14604582241296072. [PMID: 39441895 DOI: 10.1177/14604582241296072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2024]
Abstract
Objective: Integrating artificial intelligence (AI) solutions into clinical practice, particularly in the field of video capsule endoscopy (VCE), necessitates the execution of rigorous clinical studies. Methods: This work introduces a novel software platform tailored to facilitate the conduct of multi-reader multi-case clinical studies in VCE. The platform, developed as a web application, prioritizes remote accessibility to accommodate multi-center studies. Notably, considerable attention was devoted to user interface and user experience design elements to ensure a seamless and engaging interface. To evaluate the usability of the platform, a pilot study is conducted. Results: The results indicate a high level of usability and acceptance among users, providing valuable insights into the expectations and preferences of gastroenterologists navigating AI-driven VCE solutions. Conclusion: This research lays a foundation for future advancements in AI integration within clinical VCE practice.
Collapse
Affiliation(s)
- Georgios Apostolidis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Antigoni Kakouri
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Ioannis Dimaridis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Eleni Vasileiou
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Ioannis Gerasimou
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Vasileios Charisis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Stelios Hadjidimitriou
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Nikolaos Lazaridis
- Division of Gastroenterology and Hepatology, First Department of Internal Medicine, AHEPA University Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Georgios Germanidis
- Division of Gastroenterology and Hepatology, First Department of Internal Medicine, AHEPA University Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Basic and Translational Research Unit, Special Unit for Biomedical Research and Education, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Leontios Hadjileontiadis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Department of Biomedical Engineering, Khalifa University, Abu Dhabi, UAE
| |
Collapse
|
39
|
Sarangi PK, Datta S, Swarup MS, Panda S, Nayak DSK, Malik A, Datta A, Mondal H. Radiologic Decision-Making for Imaging in Pulmonary Embolism: Accuracy and Reliability of Large Language Models-Bing, Claude, ChatGPT, and Perplexity. Indian J Radiol Imaging 2024; 34:653-660. [PMID: 39318561 PMCID: PMC11419749 DOI: 10.1055/s-0044-1787974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/26/2024] Open
Abstract
Background Artificial intelligence (AI) chatbots have demonstrated potential to enhance clinical decision-making and streamline health care workflows, potentially alleviating administrative burdens. However, the contribution of AI chatbots to radiologic decision-making in clinical scenarios remains insufficiently explored. This study evaluates the accuracy and reliability of four prominent large language models (LLMs), Microsoft Bing, Claude, ChatGPT 3.5, and Perplexity, in offering clinical decision support for initial imaging in suspected pulmonary embolism (PE). Methods Open-ended (OE) and select-all-that-apply (SATA) questions were crafted, covering four variants of PE case scenarios in line with the American College of Radiology Appropriateness Criteria. These questions were presented to the LLMs by three radiologists from diverse geographical regions and practice settings. The responses were evaluated against established scoring criteria, with a maximum achievable score of 2 points for OE responses and 1 point for each correct answer in SATA questions. To enable comparative analysis, scores were normalized (score divided by the maximum achievable score). Results In OE questions, Perplexity achieved the highest accuracy (0.83), while Claude had the lowest (0.58), with Bing and ChatGPT each scoring 0.75. For SATA questions, Bing led with an accuracy of 0.96, Perplexity was the lowest at 0.56, and both Claude and ChatGPT scored 0.6. Overall, OE questions saw higher scores (0.73) than SATA questions (0.68). Agreement among radiologists' scores was poor for OE (intraclass correlation coefficient [ICC] = -0.067, p = 0.54) but strong for SATA (ICC = 0.875, p < 0.001). Conclusion The study revealed variations in accuracy across LLMs for both OE and SATA questions. Perplexity showed superior performance in OE questions, while Bing excelled in SATA questions. OE queries yielded better overall results. The current inconsistencies in LLM accuracy highlight the importance of further refinement before these tools can be reliably integrated into clinical practice, with a need for additional LLM fine-tuning and judicious selection by radiologists to achieve consistent and reliable decision support.
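The scoring scheme lends itself to a simple normalization, sketched below in Python; the model names are real but the awarded points are invented placeholders, not the study's data.

# Illustrative sketch, not the study's code: normalizing OE and SATA scores so models
# can be compared on a common 0-1 scale.
raw_scores = {
    # (model, question type): (awarded points, maximum achievable points)
    ("Perplexity", "OE"):   (1.66, 2.0),   # OE answers scored out of 2
    ("Bing",       "SATA"): (2.88, 3.0),   # SATA: 1 point per correct option selected
}

def normalize(awarded: float, maximum: float) -> float:
    """Normalized accuracy = awarded score / maximum achievable score."""
    return awarded / maximum

for (model, qtype), (got, best) in raw_scores.items():
    print(f"{model:10s} {qtype:4s} normalized accuracy = {normalize(got, best):.2f}")
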
Collapse
Affiliation(s)
- Pradosh Kumar Sarangi
- Department of Radiodiagnosis, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
| | - Suvrankar Datta
- Department of Radiodiagnosis, All India Institute of Medical Sciences New Delhi, New Delhi, India
| | - M. Sarthak Swarup
- Department of Radiodiagnosis, Vardhman Mahavir Medical College and Safdarjung Hospital New Delhi, New Delhi, India
| | - Swaha Panda
- Department of Otorhinolaryngology and Head and Neck Surgery, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
| | - Debasish Swapnesh Kumar Nayak
- Department of Computer Science and Engineering, SOET, Centurion University of Technology and Management, Bhubaneswar, Odisha, India
| | - Archana Malik
- Department of Pulmonary Medicine, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
| | - Ananda Datta
- Department of Pulmonary Medicine, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
| | - Himel Mondal
- Department of Physiology, All India Institute of Medical Sciences Deoghar, Deoghar, Jharkhand, India
| |
Collapse
|
40
|
Yan P, Li M, Zhang J, Li G, Jiang Y, Luo H. Cold SegDiffusion: A novel diffusion model for medical image segmentation. Knowl Based Syst 2024; 301:112350. [DOI: 10.1016/j.knosys.2024.112350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/03/2024]
|
41
|
Almanaa M. Trends and Public Perception of Artificial Intelligence in Medical Imaging: A Social Media Analysis. Cureus 2024; 16:e70008. [PMID: 39445247 PMCID: PMC11498353 DOI: 10.7759/cureus.70008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/23/2024] [Indexed: 10/25/2024] Open
Abstract
The rapid advancement of artificial intelligence (AI) in medical imaging has generated significant interest and debate among healthcare professionals, researchers, and the general public. This study aims to explore trends and public perception of AI in medical imaging by analyzing social media discussions. Using a retrospective content analysis approach, social media posts from X (formerly known as Twitter) and Reddit were collected, covering discussions from 2019 to 2024. A total of 1,022 posts were analyzed after data cleaning, employing both qualitative and quantitative methods to examine sentiment, themes, and keyword frequencies. The sentiment analysis revealed that 55% of the comments expressed positive sentiments towards AI in medical imaging, emphasizing its potential to enhance diagnostic accuracy and efficiency. Neutral sentiments accounted for 35% of the posts, while 10% expressed negative sentiments, primarily focusing on concerns related to job displacement, ethical issues, and data privacy. Thematic analysis identified four primary themes: ethical and privacy concerns, job displacement, trust and reliability, and workflow efficiency. Keyword frequency analysis highlighted significant discussions around AI, imaging, and radiology. The results underscore both the optimism and the concerns associated with AI in medical imaging, emphasizing the need for ongoing dialogue among technology developers, healthcare providers, and the public. Addressing ethical and privacy concerns and integrating AI responsibly into clinical workflows are crucial for maximizing its benefits and minimizing potential risks. These findings provide valuable insights into public perceptions and inform strategies for the effective and ethical implementation of AI technologies in healthcare.
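A minimal sketch of the kind of keyword-frequency count and lexicon-based sentiment tally described above is given below; the posts, keyword list, and sentiment lexicons are invented for illustration and do not reflect the study's actual pipeline or data.

# Toy example only: keyword counting and a crude lexicon-based sentiment tally.
from collections import Counter
import re

posts = [
    "AI in radiology could improve diagnostic accuracy and efficiency",
    "Worried that AI imaging tools threaten jobs and raise privacy concerns",
    "Interesting paper on deep learning for medical imaging",
]
positive = {"improve", "accuracy", "efficiency", "interesting"}
negative = {"worried", "threaten", "concerns"}

keyword_counts = Counter()
sentiments = Counter()
for post in posts:
    tokens = re.findall(r"[a-z]+", post.lower())
    keyword_counts.update(t for t in tokens if t in {"ai", "imaging", "radiology"})
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    sentiments["positive" if score > 0 else "negative" if score < 0 else "neutral"] += 1

print(keyword_counts.most_common())
print(sentiments)   # e.g. Counter({'positive': 2, 'negative': 1})
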
Collapse
Affiliation(s)
- Mansour Almanaa
- Radiological Sciences Department, College of Applied Medical Sciences, King Saud University, Riyadh, SAU
| |
Collapse
|
42
|
Quang-Huy T, Sharma B, Theu LT, Tran DT, Chowdhury S, Karthik C, Gurusamy S. Frequency-hopping along with resolution-turning for fast and enhanced reconstruction in ultrasound tomography. Sci Rep 2024; 14:15483. [PMID: 38969737 PMCID: PMC11226711 DOI: 10.1038/s41598-024-66138-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2024] [Accepted: 06/27/2024] [Indexed: 07/07/2024] Open
Abstract
The distorted Born iterative (DBI) method is considered for obtaining images with high contrast and resolution. Besides satisfying the Born approximation condition, the frequency-hopping (FH) technique is needed to update the sound contrast gradually, starting from the first iteration and progressing toward the true sound contrast of the imaged object in subsequent iterations. The approach is motivated by the fact that higher frequencies yield higher resolution: because low frequencies support only low-resolution imaging, using a low frequency to pursue a high-resolution image from the first iteration is inefficient when high-resolution imaging is required. For effective reconstruction, the object resolution should therefore be small at low frequencies and larger at high frequencies. In this paper, the FH and resolution-turning (RT) techniques are combined to obtain object images with high contrast and resolution. Rapid convergence in the initial iterations is achieved by using a low frequency in the FH technique and a low image resolution in the RT technique, which is crucial for accurate object reconstruction in subsequent iterations. The desired spatial resolution is then attained by employing a high frequency and a large image resolution. The resolution-turning distorted Born iterative (RT-DBI) and frequency-hopping distorted Born iterative (FH-DBI) solutions are first investigated individually to identify their best configurations; if the number of iterations at frequency f1 in FH-DBI or at resolution N1 × N1 in RT-DBI is chosen poorly, these solutions perform even worse than the traditional DBI. The RT-FH-DBI integration was then investigated in two sub-solutions, and using the lower frequency f1 both before and after the RT step gave the best performance. Consequently, compared with the traditional DBI approach, the normalized error and total runtime for the reconstruction process were dramatically decreased, at 83.6% and 18.6%, respectively. Besides fast, high-quality imaging, the proposed RT-FH-DBI solution promises to produce high-contrast, high-resolution object images, aiming at object reconstruction in biological tissue. The development of 3D imaging and experimental verification will be studied further.
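The coarse-to-fine idea can be summarized in a short schedule skeleton, sketched below with NumPy/SciPy; the per-iteration DBI update is left as a stub, and the frequencies, grid sizes, and iteration counts are illustrative assumptions rather than the paper's settings.

# Structural sketch of a frequency-hopping / resolution-turning schedule only.
import numpy as np
from scipy.ndimage import zoom

def dbi_update(contrast, frequency):
    # Stand-in for one distorted Born iterative update at the given frequency;
    # a real implementation would solve the forward problem and a regularized
    # inverse problem here.
    return contrast

def rt_fh_dbi(n1=32, n2=64, f1=1.0e6, f2=2.0e6, iters_low=4, iters_high=4):
    """Coarse-to-fine schedule: low frequency on a coarse grid first, then hop to
    the higher frequency and upsample the estimate to the fine grid."""
    contrast = np.zeros((n1, n1))                 # initial guess on the coarse grid
    for _ in range(iters_low):                    # fast early convergence (f1, N1 x N1)
        contrast = dbi_update(contrast, f1)
    contrast = zoom(contrast, n2 / n1, order=1)   # resolution turning: N1 -> N2
    for _ in range(iters_high):                   # refinement at high frequency (f2, N2 x N2)
        contrast = dbi_update(contrast, f2)
    return contrast

print(rt_fh_dbi().shape)   # (64, 64)
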
Collapse
Affiliation(s)
- Tran Quang-Huy
- Faculty of Physics, Hanoi Pedagogical University 2, Xuan Hoa Ward, Phuc Yen City, Vinh Phuc Province, Vietnam
| | - Bhisham Sharma
- Centre of Research Impact and Outcome, Chitkara University, Rajpura, Punjab, 140401, India
| | | | - Duc-Tan Tran
- Faculty of Electrical and Electronic Engineering, Phenikaa University, Hanoi, 12116, Vietnam
| | - Subrata Chowdhury
- Department of Computer Science and Engineering, Sreenivasa Institute of Technology and Management Studies (SITAMS), Bangalore, India
| | - Chandran Karthik
- Robotics and Automation, Jyothi Engineering College, Thrissur, India
| | - Saravanakumar Gurusamy
- Department of Electrical and Electronics Technology, FDRE Technical and Vocational Training Institute, Addis Ababa, Ethiopia.
| |
Collapse
|
43
|
VanDecker WA. The Integrative Sport of Cardiac Imaging and Clinical Cardiology: Machine Augmentation and an Evolving Odyssey. JACC Cardiovasc Imaging 2024; 17:792-794. [PMID: 38613557 DOI: 10.1016/j.jcmg.2024.02.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Accepted: 02/13/2024] [Indexed: 04/15/2024]
Affiliation(s)
- William A VanDecker
- Lewis Katz School of Medicine at Temple University, Philadelphia, Pennsylvania, USA.
| |
Collapse
|
44
|
Glaudemans AW. Heliyon medical imaging: Shaping the future of health. Heliyon 2024; 10:e32395. [PMID: 39183843 PMCID: PMC11341280 DOI: 10.1016/j.heliyon.2024.e32395] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2024] [Accepted: 06/03/2024] [Indexed: 08/27/2024] Open
Affiliation(s)
- Andor W.J.M. Glaudemans
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
| |
Collapse
|
45
|
Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024; 35:399-419. [PMID: 38291768 DOI: 10.1515/revneuro-2023-0115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 12/10/2023] [Indexed: 02/01/2024]
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Collapse
Affiliation(s)
- Omar S Al-Kadi
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
| | - Roa'a Al-Emaryeen
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
| | - Sara Al-Nahhas
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
| | - Isra'a Almallahi
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
| | - Ruba Braik
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
| | - Waleed Mahafza
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
| |
Collapse
|
46
|
Balasubramanian AA, Al-Heejawi SMA, Singh A, Breggia A, Ahmad B, Christman R, Ryan ST, Amal S. Ensemble Deep Learning-Based Image Classification for Breast Cancer Subtype and Invasiveness Diagnosis from Whole Slide Image Histopathology. Cancers (Basel) 2024; 16:2222. [PMID: 38927927 PMCID: PMC11201924 DOI: 10.3390/cancers16122222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2024] [Revised: 06/07/2024] [Accepted: 06/10/2024] [Indexed: 06/28/2024] Open
Abstract
Cancer diagnosis and classification are pivotal for effective patient management and treatment planning. In this study, a comprehensive approach is presented utilizing ensemble deep learning techniques to analyze breast cancer histopathology images. Two widely employed datasets from different centers were used for two different tasks: BACH and BreakHis. Within the BACH dataset, a proposed ensemble strategy was employed, incorporating VGG16 and ResNet50 architectures to achieve precise classification of breast cancer histopathology images. A novel image patching technique was introduced to preprocess the high-resolution images, facilitating focused analysis of localized regions of interest. The annotated BACH dataset encompassed 400 whole slide images (WSIs) across four distinct classes: Normal, Benign, In Situ Carcinoma, and Invasive Carcinoma. In addition, the proposed ensemble was used on the BreakHis dataset, utilizing VGG16, ResNet34, and ResNet50 models to classify microscopic images into eight distinct categories (four benign and four malignant). For both datasets, a five-fold cross-validation approach was employed for rigorous training and testing. Preliminary experimental results indicated a patch-level classification accuracy of 95.31% on the BACH dataset and a WSI classification accuracy of 98.43% on BreakHis. This research significantly contributes to ongoing endeavors in harnessing artificial intelligence to advance breast cancer diagnosis, potentially fostering improved patient outcomes and alleviating healthcare burdens.
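One common reading of such an ensemble, sketched below as an assumption rather than the authors' exact pipeline, is to average the softmax outputs of VGG16 and ResNet50 branches over the patches of an image and then aggregate the patch probabilities to an image-level decision; the weights, patch tensor, and four-class heads are illustrative.

# Sketch of softmax averaging over two backbones plus patch-to-image aggregation.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4   # Normal, Benign, In Situ Carcinoma, Invasive Carcinoma

vgg = models.vgg16(weights=None)
vgg.classifier[6] = nn.Linear(4096, NUM_CLASSES)
resnet = models.resnet50(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

@torch.no_grad()
def ensemble_predict(patches: torch.Tensor) -> int:
    """patches: (n_patches, 3, 224, 224) from one image; returns the image-level class."""
    vgg.eval(); resnet.eval()
    probs = (torch.softmax(vgg(patches), dim=1) +
             torch.softmax(resnet(patches), dim=1)) / 2      # model-level averaging
    image_probs = probs.mean(dim=0)                          # patch-level aggregation
    return int(image_probs.argmax())

dummy_patches = torch.randn(8, 3, 224, 224)   # stand-in for patches cropped from one image
print(ensemble_predict(dummy_patches))
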
Collapse
Affiliation(s)
| | | | - Akarsh Singh
- College of Engineering, Northeastern University, Boston, MA 02115, USA; (S.M.A.A.-H.); (A.S.)
| | - Anne Breggia
- MaineHealth Institute for Research, Scarborough, ME 04074, USA;
| | - Bilal Ahmad
- Maine Medical Center, Portland, ME 04102, USA; (B.A.); (R.C.); (S.T.R.)
| | - Robert Christman
- Maine Medical Center, Portland, ME 04102, USA; (B.A.); (R.C.); (S.T.R.)
| | - Stephen T. Ryan
- Maine Medical Center, Portland, ME 04102, USA; (B.A.); (R.C.); (S.T.R.)
| | - Saeed Amal
- The Roux Institute, Department of Bioengineering, College of Engineering, Northeastern University, Boston, MA 02115, USA
| |
Collapse
|
47
|
Alsaleh AM, Albalawi E, Algosaibi A, Albakheet SS, Khan SB. Few-Shot Learning for Medical Image Segmentation Using 3D U-Net and Model-Agnostic Meta-Learning (MAML). Diagnostics (Basel) 2024; 14:1213. [PMID: 38928629 PMCID: PMC11202447 DOI: 10.3390/diagnostics14121213] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2024] [Revised: 05/24/2024] [Accepted: 05/30/2024] [Indexed: 06/28/2024] Open
Abstract
Deep learning has attained state-of-the-art results in general image segmentation problems; however, it requires a substantial number of annotated images to achieve the desired outcomes. In the medical field, the availability of annotated images is often limited. To address this challenge, few-shot learning techniques have been successfully adapted to rapidly generalize to new tasks with only a few samples, leveraging prior knowledge. In this paper, we employ a gradient-based method known as Model-Agnostic Meta-Learning (MAML) for medical image segmentation. MAML is a meta-learning algorithm that quickly adapts to new tasks by updating a model's parameters based on a limited set of training samples. Additionally, we use an enhanced 3D U-Net as the foundational network for our models. The enhanced 3D U-Net is a convolutional neural network specifically designed for medical image segmentation. We evaluate our approach on the TotalSegmentator dataset, considering a few annotated images for four tasks: liver, spleen, right kidney, and left kidney. The results demonstrate that our approach facilitates rapid adaptation to new tasks using only a few annotated images. In 10-shot settings, our approach achieved mean Dice coefficients of 93.70%, 85.98%, 81.20%, and 89.58% for liver, spleen, right kidney, and left kidney segmentation, respectively. In five-shot settings, the approach attained mean Dice coefficients of 90.27%, 83.89%, 77.53%, and 87.01% for liver, spleen, right kidney, and left kidney segmentation, respectively. Finally, we assess the effectiveness of our proposed approach on a dataset collected from a local hospital. Employing five-shot settings, we achieve mean Dice coefficients of 90.62%, 79.86%, 79.87%, and 78.21% for liver, spleen, right kidney, and left kidney segmentation, respectively.
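A first-order approximation of the MAML loop with a soft Dice loss is sketched below; the tiny 3D network, synthetic tasks, and hyperparameters are stand-ins for the paper's enhanced 3D U-Net and TotalSegmentator setup, so this is a structural sketch rather than a reproduction.

# First-order MAML sketch for few-shot 3D segmentation with a soft Dice loss.
import copy
import torch
import torch.nn as nn

def soft_dice_loss(logits, target, eps=1e-6):
    """1 - Dice between the predicted foreground probability and a binary mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def tiny_3d_net():
    # Stand-in for the enhanced 3D U-Net used in the paper.
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 1, 3, padding=1),
    )

def make_task(n_support=5, n_query=5, shape=(1, 1, 16, 16, 16)):
    # Synthetic "organ" tasks: random volumes paired with random binary masks.
    sample = lambda n: [(torch.randn(shape), (torch.rand(shape) > 0.5).float())
                        for _ in range(n)]
    return sample(n_support), sample(n_query)

meta_model = tiny_3d_net()
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=1e-3)

for episode in range(3):                      # meta-training episodes
    support, query = make_task()
    learner = copy.deepcopy(meta_model)       # task-specific fast weights
    inner_opt = torch.optim.SGD(learner.parameters(), lr=1e-2)
    for x, y in support:                      # inner-loop adaptation on the support set
        inner_opt.zero_grad()
        soft_dice_loss(learner(x), y).backward()
        inner_opt.step()
    query_loss = sum(soft_dice_loss(learner(x), y) for x, y in query) / len(query)
    learner.zero_grad()
    query_loss.backward()
    meta_opt.zero_grad()                      # first-order update: reuse the adapted grads
    for p_meta, p_fast in zip(meta_model.parameters(), learner.parameters()):
        p_meta.grad = p_fast.grad.detach().clone()
    meta_opt.step()
    print(f"episode {episode}: query Dice loss {query_loss.item():.3f}")
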
Collapse
Affiliation(s)
- Aqilah M. Alsaleh
- College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia; (E.A.); (A.A.)
- Department of Information Technology, AlAhsa Health Cluster, Al Hofuf 3158-36421, AlAhsa, Saudi Arabia
| | - Eid Albalawi
- College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia; (E.A.); (A.A.)
| | - Abdulelah Algosaibi
- College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia; (E.A.); (A.A.)
| | - Salman S. Albakheet
- Department of Radiology, King Faisal General Hospital, Al Hofuf 36361, AlAhsa, Saudi Arabia;
| | - Surbhi Bhatia Khan
- Department of Data Science, School of Science Engineering and Environment, University of Salford, Manchester M5 4WT, UK;
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
| |
Collapse
|
48
|
Zhang X, Chen S, Zhang P, Wang C, Wang Q, Zhou X. Staging of Liver Fibrosis Based on Energy Valley Optimization Multiple Stacking (EVO-MS) Model. Bioengineering (Basel) 2024; 11:485. [PMID: 38790352 PMCID: PMC11117710 DOI: 10.3390/bioengineering11050485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2024] [Revised: 05/09/2024] [Accepted: 05/10/2024] [Indexed: 05/26/2024] Open
Abstract
Currently, staging the degree of liver fibrosis predominantly relies on liver biopsy, a method fraught with potential risks, such as bleeding and infection. With the rapid development of medical imaging devices, quantification of liver fibrosis through image processing technology has become feasible. Stacking is one of the effective ensemble techniques for potential usage, but precise manual tuning to find the optimal configuration is challenging. Therefore, this paper proposes a novel EVO-MS model, a multiple stacking ensemble learning model optimized by the energy valley optimization (EVO) algorithm, to select the most informative features for fibrosis quantification. Liver contours are profiled from 415 biopsy-proven CT cases, from which 10 shape features are calculated and input into a Support Vector Machine (SVM) classifier to generate predictions; the EVO algorithm is then applied to find the optimal parameter combination for fusing six base models: K-Nearest Neighbors (KNN), Decision Tree (DT), Naive Bayes (NB), Extreme Gradient Boosting (XGB), Gradient Boosting Decision Tree (GBDT), and Random Forest (RF), into a well-performing ensemble model. Experimental results indicate that selecting 3-5 feature parameters yields satisfactory classification results, with features such as the contour roundness non-uniformity (Rmax), maximum peak height of contour (Rp), and maximum valley depth of contour (Rm) significantly influencing classification accuracy. The improved EVO algorithm, combined with a multiple stacking model, achieves an accuracy of 0.864, a precision of 0.813, a sensitivity of 0.912, a specificity of 0.824, and an F1-score of 0.860, which demonstrates the effectiveness of our EVO-MS model in staging the degree of liver fibrosis.
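A plain stacking baseline over the six named base learners is sketched below with scikit-learn; the EVO-driven search for the fusion parameters is omitted, XGBoost is replaced by a second gradient-boosting model as a stand-in, and the data are synthetic placeholders for the ten contour shape features.

# Stacking ensemble sketch (no EVO optimization, synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the 10 liver-contour shape features (e.g. Rmax, Rp, Rm).
X, y = make_classification(n_samples=400, n_features=10, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

base_learners = [
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("gbdt", GradientBoostingClassifier(random_state=0)),
    ("xgb_substitute", GradientBoostingClassifier(random_state=1)),  # stand-in for XGB
    ("rf", RandomForestClassifier(random_state=0)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.3f}")
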
Collapse
Affiliation(s)
- Xuejun Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China; (X.Z.); (P.Z.); (C.W.)
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China
| | - Shengxiang Chen
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China; (X.Z.); (P.Z.); (C.W.)
| | - Pengfei Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China; (X.Z.); (P.Z.); (C.W.)
| | - Chun Wang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China; (X.Z.); (P.Z.); (C.W.)
| | - Qibo Wang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China; (X.Z.); (P.Z.); (C.W.)
| | - Xiangrong Zhou
- Department of Electrical, Electronic and Computer Engineering, Gifu University, Gifu 501-1193, Japan;
| |
Collapse
|
49
|
Chen WW, Kuo L, Lin YX, Yu WC, Tseng CC, Lin YJ, Huang CC, Chang SL, Wu JCH, Chen CK, Weng CY, Chan S, Lin WW, Hsieh YC, Lin MC, Fu YC, Chen T, Chen SA, Lu HHS. A Deep Learning Approach to Classify Fabry Cardiomyopathy from Hypertrophic Cardiomyopathy Using Cine Imaging on Cardiac Magnetic Resonance. Int J Biomed Imaging 2024; 2024:6114826. [PMID: 38706878 PMCID: PMC11068448 DOI: 10.1155/2024/6114826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 03/20/2024] [Accepted: 03/23/2024] [Indexed: 05/07/2024] Open
Abstract
A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. The reliance on imaging techniques often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists, and this variability in interpretation and classification of LVH leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI); however, differentiation between HCM and Fabry cardiomyopathy using echocardiography or MRI cine images is challenging for cardiologists. Our proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a high-accuracy, standardized imaging classification model developed using AI and trained on MRI short-axis (SAX) cine images to distinguish HCM from Fabry disease. The model achieved impressive performance, with an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. Additionally, a single-blinding study and external testing using data from the Taichung Veterans General Hospital (TCVGH) further demonstrated the model's reliability and effectiveness, achieving an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918. This AI model holds promise as a valuable tool for assisting specialists in diagnosing LVH diseases.
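The reported metrics can be reproduced in form (not in value) with standard scikit-learn calls, as sketched below on made-up labels and probabilities.

# Illustrative only: F1-score, accuracy, and AUC for a binary HCM-vs-Fabry classifier.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                  # 0 = HCM, 1 = Fabry cardiomyopathy
y_prob = np.array([0.2, 0.4, 0.8, 0.7, 0.3, 0.1, 0.9, 0.6])  # model's predicted P(Fabry)
y_pred = (y_prob >= 0.5).astype(int)                          # threshold at 0.5

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred))
print("AUC:     ", roc_auc_score(y_true, y_prob))
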
Collapse
Affiliation(s)
- Wei-Wen Chen
- Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
| | - Ling Kuo
- Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
| | - Yi-Xun Lin
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Wen-Chung Yu
- Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Chien-Chao Tseng
- Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
| | - Yenn-Jiang Lin
- Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Ching-Chun Huang
- Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
| | - Shih-Lin Chang
- Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Jacky Chung-Hao Wu
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Chun-Ku Chen
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Ching-Yao Weng
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Siwa Chan
- Department of Radiology, Taichung Veterans General Hospital, Taichung, Taiwan
- Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
| | - Wei-Wen Lin
- Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
| | - Yu-Cheng Hsieh
- Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
| | - Ming-Chih Lin
- Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
- Department of Pediatric Cardiology, Taichung Veterans General Hospital, Taichung, Taiwan
- Children's Medical Center, Taichung Veterans General Hospital, Taichung, Taiwan
| | - Yun-Ching Fu
- Department of Pediatric Cardiology, Taichung Veterans General Hospital, Taichung, Taiwan
- Children's Medical Center, Taichung Veterans General Hospital, Taichung, Taiwan
- Department of Pediatrics, School of Medicine, National Chung-Hsing University, Taichung, Taiwan
| | - Tsung Chen
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Shih-Ann Chen
- Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
- College of Medicine, National Chung Hsing University, Taichung, Taiwan
| | - Henry Horng-Shing Lu
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Department of Statistics and Data Science, Cornell University, Ithaca, New York, USA
| |
Collapse
|
50
|
Waheed Z, Gui J. An optimized ensemble model based on cuckoo search with Levy Flight for automated gastrointestinal disease detection. Multimed Tools Appl 2024; 83:89695-89722. [DOI: 10.1007/s11042-024-18937-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2023] [Revised: 01/04/2024] [Accepted: 03/13/2024] [Indexed: 01/15/2025]
|