1. Yue T, Nguyen D, Varshney V, Li Y. Assessing the Effectiveness of Neural Networks and Molecular Dynamics Simulations in Predicting Viscosity of Small Organic Molecules. J Phys Chem B 2025; 129:4501-4513. PMID: 40267179. DOI: 10.1021/acs.jpcb.4c08757.
Abstract
Viscosity is a crucial material property that influences a wide range of applications, including three-dimensional (3D) printing, lubricants, and solvents. However, experimental approaches to measuring viscosity face challenges such as handling multiple samples, high costs, and limited compound availability. To address these limitations, we have developed computational models for viscosity prediction of small organic molecules, utilizing machine learning (ML) and nonequilibrium molecular dynamics (NEMD) simulations. Our ML framework, which includes feed-forward neural networks (FNN) and physics-informed neural networks (PINN), is based on the largest data set of small molecule viscosities compiled from the literature. The PINN model, in particular, incorporates temperature dependence through a four-parameter model, allowing for the direct prediction of continuous temperature-dependent viscosity curves. The ML models demonstrate exceptional prediction accuracy for the viscosity of various organic compounds across a wide range of temperatures. External validation of our models further confirms that the ML prediction models outperform the NEMD approach in predicting viscosity across a diverse range of organic molecules and temperatures. This highlights the potential of ML models to overcome limitations in traditional MD simulations, which often struggle with accuracy for specific molecules or temperature ranges. Our further feature importance analysis revealed a strong correlation between molecular structure and viscosity. We emphasize the key role of substructures in determining viscosity, offering deeper molecular insights for material design with tailored viscosity.
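The abstract does not give the functional form of the four-parameter temperature model, so the sketch below assumes one common four-parameter viscosity-temperature correlation, log10 η = A + B/T + C·T + D·T²; the function names and parameter values are illustrative only, not the paper's. Because this form is linear in its parameters, recovering a continuous viscosity curve from a few (T, log10 η) pairs reduces to ordinary least squares:

```python
import numpy as np

# Assumed four-parameter viscosity-temperature correlation (illustrative):
#   log10(eta) = A + B/T + C*T + D*T**2
# Linear in (A, B, C, D), so ordinary least squares recovers the parameters.
def fit_viscosity_curve(T, log_eta):
    X = np.column_stack([np.ones_like(T), 1.0 / T, T, T**2])
    coef, *_ = np.linalg.lstsq(X, log_eta, rcond=None)
    return coef

def predict_log_eta(coef, T):
    A, B, C, D = coef
    return A + B / T + C * T + D * T**2

# Synthetic data generated from known (invented) parameters
T = np.linspace(280.0, 360.0, 9)          # temperatures in K
true = np.array([-4.0, 1500.0, -0.01, 1.0e-5])
y = predict_log_eta(true, T)
coef = fit_viscosity_curve(T, y)
```

In the paper's PINN, a neural network maps molecular features to such curve parameters rather than fitting them per compound, but the temperature-dependence step is the same idea: predict a few parameters, then evaluate viscosity at any temperature.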
Affiliation(s)
- Tianle Yue
- Department of Mechanical Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States
- Danh Nguyen
- Department of Mechanical Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States
- Vikas Varshney
- Materials and Manufacturing Directorate, Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433, United States
- Ying Li
- Department of Mechanical Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States
2. Mead K, Cross T, Roger G, Sabharwal R, Singh S, Giannotti N. MRI deep learning models for assisted diagnosis of knee pathologies: a systematic review. Eur Radiol 2025; 35:2457-2469. PMID: 39422725. PMCID: PMC12021734. DOI: 10.1007/s00330-024-11105-8.
Abstract
OBJECTIVES Despite showing encouraging outcomes, the precision of deep learning (DL) models using different convolutional neural networks (CNNs) for diagnosis remains under investigation. This systematic review aims to summarise the status of DL MRI models developed for assisting the diagnosis of a variety of knee abnormalities. MATERIALS AND METHODS Five databases were systematically searched, employing predefined terms such as 'Knee AND 3D AND MRI AND DL'. Selected inclusion criteria were used to screen publications by title, abstract, and full text. The synthesis of results was performed by two independent reviewers. RESULTS Fifty-four articles were included. The studies focused on anterior cruciate ligament injuries (n = 19, 36%), osteoarthritis (n = 9, 17%), meniscal injuries (n = 13, 24%), abnormal knee appearance (n = 11, 20%), and other pathologies (n = 2, 4%). The DL models in this review primarily used the following CNNs: ResNet (n = 11, 21%), VGG (n = 6, 11%), DenseNet (n = 4, 8%), and DarkNet (n = 3, 6%). DL models showed high performance metrics compared to ground truth. DL models for the detection of a specific injury outperformed those for general abnormality detection by up to 4.5%. CONCLUSION Despite the varied study designs used among the reviewed articles, DL models showed promising outcomes in the assisted detection of selected knee pathologies by MRI. This review underscores the importance of validating these models with larger MRI datasets to close the existing gap between current DL model performance and clinical requirements. KEY POINTS Question What is the status of DL model availability for knee pathology detection in MRI, and what is their clinical potential? Findings Pathology-specific DL models reported higher accuracy than DL models for the detection of general knee abnormalities. DL model performance was mainly influenced by the quantity and diversity of data available for model training.
Clinical relevance These findings should encourage future developments to improve patient care, support personalised diagnosis and treatment, optimise costs, and advance artificial intelligence-based medical imaging practices.
Affiliation(s)
- Keiley Mead
- The University of Sydney School of Health Sciences, Sydney, NSW, Australia.
- Tom Cross
- The Stadium Sports Medicine Clinic, Sydney, NSW, Australia
- Greg Roger
- Vestech Medical Pty Limited, Sydney, NSW, Australia
- The University of Sydney School of Biomedical Engineering, Sydney, NSW, Australia
- Sahaj Singh
- PRP Diagnostic Imaging, Sydney, NSW, Australia
- Nicola Giannotti
- The University of Sydney School of Health Sciences, Sydney, NSW, Australia
3. Liang X, Wang G, Zhu Z, Zhang W, Li Y, Luo J, Wang H, Wu S, Chen R, Deng M, Wu H, Shen C, Hu G, Zhang K, Sun Q, Wang Z. Using pathology images and artificial intelligence to identify bacterial infections and their types. J Microbiol Methods 2025; 232-234:107131. PMID: 40233851. DOI: 10.1016/j.mimet.2025.107131.
Abstract
Bacterial infections pose a significant biosafety concern, making early and accurate diagnosis essential for effective treatment and prognosis. Traditional diagnostic methods, while reliable, are often slow and fail to meet urgent clinical demands. In contrast, emerging technologies offer greater efficiency but are often costly and inaccessible. In this study, we utilized easily accessible pathology images to diagnose bacterial infections. Our initial findings indicate that, in the absence of postmortem phenomena, microscopic examination of pathological images can confirm the presence of a bacterial infection. However, distinguishing between different types of bacterial infections remains challenging due to similarities in pathological changes. To address this limitation, we applied a computational pathology approach by integrating pathology images with artificial intelligence (AI) algorithms. Our model classified bacterial infections at both the patch-level and whole slide image (WSI)-level. The results demonstrated strong performance, with an overall AUC consistently above 0.950 across training, testing, and external validation datasets, indicating high accuracy, robustness, and generalizability. This study highlights AI's potential in identifying bacterial infection types and provides valuable technical support for clinical diagnostics, paving the way for faster and more precise infection management.
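The abstract reports performance at both the patch level and the whole slide image (WSI) level but does not state how patch predictions are aggregated into a slide-level call. A minimal sketch, assuming a common top-k mean-probability heuristic (the function name, the `top_k` value, and the aggregation rule itself are assumptions, not the paper's method):

```python
import numpy as np

def slide_level_score(patch_probs, top_k=10):
    # Aggregate patch-level infection probabilities into one WSI-level score
    # by averaging the top-k most confident patches. This is a common
    # heuristic for weakly supervised pathology models; the paper's actual
    # aggregation rule is not stated in the abstract.
    probs = np.sort(np.asarray(patch_probs))[::-1]
    return float(probs[:top_k].mean())

# Toy patch probabilities from a hypothetical patch classifier
patches = [0.1, 0.2, 0.95, 0.9, 0.15, 0.88]
score = slide_level_score(patches, top_k=3)
```

A slide-level threshold on `score` then yields the WSI-level classification that the reported AUCs evaluate.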
Affiliation(s)
- Xinggong Liang
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Gongji Wang
- College of Forensic Medicine, NHC Key Laboratory of Drug Addiction Medicine, Kunming Medical University, Kunming, Yunnan 650500, China
- Zhengyang Zhu
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Wanqing Zhang
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Yuqian Li
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Jianliang Luo
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Han Wang
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Shuo Wu
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Run Chen
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Mingyan Deng
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Hao Wu
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Chen Shen
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Gengwang Hu
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Kai Zhang
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Qinru Sun
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
- Zhenyuan Wang
- Department of Forensic Pathology, College of Forensic Medicine, Xi'an Jiaotong University, Xi'an, Shaanxi 710061, China
4. De Wilde D, Zanier O, Da Mutten R, Jin M, Regli L, Serra C, Staartjes VE. Strategies for generating synthetic computed tomography-like imaging from radiographs: A scoping review. Med Image Anal 2025; 101:103454. PMID: 39793215. DOI: 10.1016/j.media.2025.103454.
Abstract
BACKGROUND Advancements in tomographic medical imaging have revolutionized diagnostics and treatment monitoring by offering detailed 3D visualization of internal structures. Despite the significant value of computed tomography (CT), challenges such as high radiation dosage and cost barriers limit its accessibility, especially in low- and middle-income countries. Recognizing the potential of radiographic imaging in reconstructing CT images, this scoping review aims to explore the emerging field of synthesizing 3D CT-like images from 2D radiographs by examining the current methodologies. METHODS A scoping review was carried out following PRISMA-ScR guidelines. Eligibility criteria included full-text articles published up to September 9, 2024, studying methodologies for the synthesis of 3D CT images from 2D biplanar or four-projection x-ray images. Eligible articles were sourced from PubMed MEDLINE, Embase, and arXiv. RESULTS 76 studies were included. Most were published between 2010 and 2020 (38.2 %, n = 29) or from 2020 onwards (36.8 %, n = 28), with European (40.8 %, n = 31), North American (26.3 %, n = 20), and Asian (32.9 %, n = 25) institutions being the primary contributors. Anatomical regions varied, with 17.1 % (n = 13) of studies not using clinical data. Studies focused on the chest (25 %, n = 19), spine and vertebrae (17.1 %, n = 13), coronary arteries (10.5 %, n = 8), and cranial structures (10.5 %, n = 8), among other anatomical regions. Convolutional neural networks (CNNs) (19.7 %, n = 15), generative adversarial networks (21.1 %, n = 16), and statistical shape models (15.8 %, n = 12) emerged as the most applied methodologies. A limited number of the included studies explored the use of conditional diffusion models, iterative reconstruction algorithms, and digital tomosynthesis. CONCLUSION This scoping review summarizes current strategies and challenges in synthetic imaging generation. The development of 3D CT-like imaging from 2D radiographs could reduce radiation risk while simultaneously addressing the financial and logistical obstacles that impede global access to CT imaging. Despite initial promising results, the field faces challenges from varied methodologies and a frequent lack of proper validation, requiring further research to define synthetic imaging's clinical role.
Affiliation(s)
- Daniel De Wilde
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Olivier Zanier
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Raffaele Da Mutten
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Michael Jin
- Department of Neurosurgery, Stanford University, Stanford, California, USA
- Luca Regli
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Carlo Serra
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Victor E Staartjes
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
5. Yang H, Ko K, Yang C. Evaluating auto-contouring accuracy in reduced CT dose images for radiopharmaceutical therapies: Denoising and evaluation of 177Lu DOTATATE therapy dataset. J Appl Clin Med Phys 2025; 26:e70066. PMID: 40025651. PMCID: PMC11969114. DOI: 10.1002/acm2.70066.
Abstract
PURPOSE Reducing the radiation dose attributed to computed tomography (CT) may compromise the accuracy of organ segmentation, an important step in 177Lu DOTATATE therapy that affects both activity and mass estimates. This study aimed to facilitate CT dose reduction using deep learning methods for patients undergoing serial single photon emission computed tomography (SPECT)/CT imaging during 177Lu DOTATATE therapy. METHODS The 177Lu DOTATATE patient dataset hosted in Deep Blue Data was used in this study. A noise insertion method incorporating the effects of the bowtie filter, automatic exposure control, and electronic noise was applied to simulate images at four reduced dose levels. Organ segmentation was carried out using the TotalSegmentator model, while image denoising was performed with the DenseNet model. The impact of segmentation performance on the dosimetry accuracy of 177Lu DOTATATE therapy was quantified by calculating the percent difference between a dose rate map segmented with a reference mask and the same dose rate map segmented with a test mask (PDdose) for the spleen, right kidney, left kidney, and liver. RESULTS Before denoising, the mean ± standard deviation of PDdose across all critical organs was 2.31 ± 2.94%, 4.86 ± 9.42%, 8.39 ± 14.76%, and 12.95 ± 19.99% in CT images at dose levels reduced to 20%, 10%, 5%, and 2.5% of the normal dose, respectively. After denoising, the corresponding results were 1.69 ± 2.25%, 2.84 ± 4.46%, 3.72 ± 4.22%, and 7.98 ± 15.05%. CONCLUSION As dose reduction increased, CT image segmentation gradually deteriorated, which in turn degraded the dosimetry accuracy of 177Lu DOTATATE therapy. Improving CT image quality through denoising could enhance 177Lu DOTATATE dosimetry, making it a valuable tool to support CT dose reduction for patients undergoing serial SPECT/CT imaging during treatment.
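As a rough illustration of the PDdose metric described above, the sketch below interprets it as the percent difference in mean organ dose rate between a reference contour and a test (auto-generated) contour; the paper's exact definition (e.g., mean vs. integrated dose rate) may differ, and the function name and toy arrays are hypothetical:

```python
import numpy as np

def pd_dose(dose_rate_map, ref_mask, test_mask):
    # Percent difference between the mean dose rate inside a reference organ
    # mask and inside a test (auto-contoured) mask of the same dose rate map.
    # This mirrors the abstract's PDdose idea; the published definition
    # may use a different summary statistic.
    ref = dose_rate_map[ref_mask].mean()
    test = dose_rate_map[test_mask].mean()
    return 100.0 * abs(test - ref) / ref

dose = np.array([[1.0, 2.0], [3.0, 4.0]])       # toy dose rate map
ref = np.array([[True, True], [True, False]])    # reference organ contour
test = np.array([[True, True], [False, False]])  # degraded test contour
pd = pd_dose(dose, ref, test)
```

A larger contour mismatch at lower CT dose would drive this number up, which is the trend the RESULTS section reports.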
Affiliation(s)
- Hung-Te Yang
- Department of Radiation Oncology, Kaohsiung Municipal Siaogang Hospital, Kaohsiung, Taiwan
- Kuan-Yin Ko
- Department of Nuclear Medicine, National Taiwan University Cancer Center, Taipei, Taiwan
- Ching-Ching Yang
- Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
6. Islam SR, Xie Z, He W, Zhi D. Vision Transformer Autoencoders for Unsupervised Representation Learning: Capturing Local and Non-Local Features in Brain Imaging to Reveal Genetic Associations. medRxiv [Preprint] 2025:2025.03.24.25324549. PMID: 40196251. PMCID: PMC11974795. DOI: 10.1101/2025.03.24.25324549.
Abstract
The discovery of genetic loci associated with brain architecture can provide deeper insights into neuroscience and improve personalized medicine outcomes. Previously, we designed the Unsupervised Deep learning-derived Imaging Phenotypes (UDIPs) approach to extract endophenotypes from brain imaging using a convolutional neural network (CNN) autoencoder, and conducted brain imaging GWAS on the UK Biobank (UKBB). In this work, we leverage a vision transformer (ViT) model because of its different inductive bias and its potential to capture unique patterns through its pairwise attention mechanism. Our approach, based on 128 endophenotypes derived from average pooling, discovered 10 loci not reported by the CNN-based UDIP model, 3 of which have no previously recorded associations with brain structure in the GWAS catalog. Our interpretation results demonstrate the ViT's capability to capture non-local patterns, such as left-right hemisphere symmetry within brain MRI data, by leveraging its attention mechanism and positional embeddings. Our results highlight the advantages of transformer-based architectures in feature extraction and representation for genetic discovery.
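A minimal sketch of the "derived from average pooling" step, assuming the endophenotype vector is the mean of the ViT encoder's patch-token embeddings with a 128-dimensional embedding space (the token count, embedding dimension, and function name are assumptions based on the abstract, not the paper's exact pipeline):

```python
import numpy as np

def average_pool_endophenotypes(token_embeddings):
    # token_embeddings: (num_patch_tokens, embed_dim) array from a ViT
    # encoder. Average pooling over the token axis yields one embed_dim
    # vector per scan; with embed_dim = 128 this would give the 128
    # endophenotypes that the GWAS is run on.
    return token_embeddings.mean(axis=0)

# Toy stand-in for encoder output: 196 patch tokens, 128-dim embeddings
tokens = np.random.default_rng(1).normal(size=(196, 128))
pheno = average_pool_endophenotypes(tokens)
```

Each of the 128 pooled dimensions would then be treated as a quantitative phenotype in the association analysis.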
Affiliation(s)
- Samia R Islam
- The University of Texas Health Science Center at Houston, D. Bradley McWilliams School of Biomedical Informatics
- Ziqian Xie
- The University of Texas Health Science Center at Houston, D. Bradley McWilliams School of Biomedical Informatics
- Wei He
- The University of Texas Health Science Center at Houston, D. Bradley McWilliams School of Biomedical Informatics
- Degui Zhi
- The University of Texas Health Science Center at Houston, D. Bradley McWilliams School of Biomedical Informatics
7. Chow JCL. Nanomaterial-Based Molecular Imaging in Cancer: Advances in Simulation and AI Integration. Biomolecules 2025; 15:444. PMID: 40149980. PMCID: PMC11940464. DOI: 10.3390/biom15030444.
Abstract
Nanomaterials represent an innovation in cancer imaging by offering enhanced contrast, improved targeting capabilities, and multifunctional imaging modalities. Recent advancements in material engineering have enabled the development of nanoparticles tailored for various imaging techniques, including magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US). These nanoscale agents improve sensitivity and specificity, enabling early cancer detection and precise tumor characterization. Monte Carlo (MC) simulations play a pivotal role in optimizing nanomaterial-based imaging by modeling their interactions with biological tissues, predicting contrast enhancement, and refining dosimetry for radiation-based imaging techniques. These computational methods provide valuable insights into nanoparticle behavior, aiding in the design of more effective imaging agents. Moreover, artificial intelligence (AI) and machine learning (ML) approaches are transforming cancer imaging by enhancing image reconstruction, automating segmentation, and improving diagnostic accuracy. AI-driven models can also optimize MC-based simulations by accelerating data analysis and refining nanoparticle design through predictive modeling. This review explores the latest advancements in nanomaterial-based cancer imaging, highlighting the synergy between nanotechnology, MC simulations, and AI-driven innovations. By integrating these interdisciplinary approaches, future cancer imaging technologies can achieve unprecedented precision, paving the way for more effective diagnostics and personalized treatment strategies.
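To make the role of MC simulation concrete, here is a toy photon-transmission estimate showing how a higher effective attenuation coefficient (as for tissue loaded with high-Z nanoparticles) changes transmitted intensity, which is the contrast-enhancement effect the review describes. The coefficients and slab geometry are invented for illustration and are far simpler than the full transport physics such simulations model:

```python
import numpy as np

def mc_transmission(mu, thickness_cm, n_photons=100_000, seed=0):
    # Minimal Monte Carlo toy: sample exponential free path lengths with
    # mean 1/mu and count photons that traverse a slab without interacting.
    # mu is the linear attenuation coefficient in 1/cm (invented values).
    rng = np.random.default_rng(seed)
    path = rng.exponential(1.0 / mu, size=n_photons)
    return float(np.mean(path > thickness_cm))

plain = mc_transmission(mu=0.2, thickness_cm=5.0)   # approaches exp(-1)
loaded = mc_transmission(mu=0.4, thickness_cm=5.0)  # approaches exp(-2)
```

The lower transmission of the "loaded" slab is what appears as enhanced contrast in a radiation-based image; real MC codes additionally model scatter, energy spectra, and geometry.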
Affiliation(s)
- James C. L. Chow
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON M5G 1X6, Canada; Tel.: +1-416-946-4501
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Department of Materials Science and Engineering, University of Toronto, Toronto, ON M5S 3E4, Canada
8. Li X, Tang Z, Liu Y, Du Y, Xing Y, Zhang Z, Xie R. Value of enhanced CT machine learning models combined with clinicoradiological characteristics in predicting lymphatic tissue metastasis in colon cancer. Radiologie (Heidelberg) 2025. PMID: 39903282. DOI: 10.1007/s00117-024-01412-y.
Abstract
This study aimed to assess the effectiveness of various machine learning models in identifying lymph node metastasis in colon cancer patients and to explore the potential benefits of combining clinicoradiological and radiomics features for improved diagnosis. A total of 260 patients with pathologically confirmed colon cancer from study center 1 and study center 2 were retrospectively included from January 2015 to August 2024. The 198 patients with colon cancer in center 1 were randomly divided into a training set (n = 138) and an internal testing set (n = 60) at a ratio of 7:3; the patients in center 2 formed the external testing set (n = 62). Five clinicoradiological features were used to establish a clinical model. Radiomics features were extracted from the computed tomography venous-phase images, and four classifiers, logistic regression, support vector machine (SVM), decision tree, and k-nearest neighbor, were used to build machine learning models. In addition, a combined model was constructed by joining the clinicoradiological and radiomics features. The performance of these models was evaluated in terms of accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), receiver operating characteristic (ROC) curves, and calibration curves in the training, internal testing, and external testing sets, to identify the model with the highest predictive efficiency and to evaluate its stability. Among the four machine learning models, the SVM model had the best predictive performance, with an area under the ROC curve (AUC) of 0.813, 0.724, and 0.721 in the training, internal testing, and external testing sets, respectively. In the same three sets, the AUC was 0.823, 0.508, and 0.582 for the clinical model; 0.813, 0.724, and 0.721 for the radiomics model; and 0.817, 0.751, and 0.744 for the combined model.
In conclusion, the combined model performed significantly better than the clinical model (p = 0.017, 0.038), but there was no significant difference between the radiomics model and the combined model (p = 0.556, 0.614).
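All of the models above are compared by AUC. As a self-contained reference, the AUC of any scored classifier can be computed directly from the rank-sum (Mann-Whitney) identity, AUC = P(score of a random positive > score of a random negative); the labels and scores below are invented toy data:

```python
import numpy as np

def auc_score(y_true, scores):
    # Rank-sum identity: count positive/negative score pairs where the
    # positive outranks the negative, with ties counted as half.
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([1, 1, 1, 0, 0, 0])            # toy ground-truth labels
s = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])  # toy model scores
auc = auc_score(y, s)
```

This pairwise definition also makes clear why an AUC near 0.5 (like the clinical model's 0.508 in internal testing) indicates discrimination no better than chance.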
Affiliation(s)
- Xinyi Li
- Department of Radiology, Beijing Ditan Hospital, Capital Medical University, No. 8 Jingshun East Street, 100015, Beijing, Chaoyang District, China
- Ziwei Tang
- Department of Radiology, Changde Hospital, Xiangya School of Medicine, Central South University, 415000, Changde, China
- Yong Liu
- Department of Forensic Medicine, Tongji Medical College, Hua Zhong University of Science and Technology, 430030, Wuhan, China
- Yanni Du
- Department of Radiology, Beijing Ditan Hospital, Capital Medical University, No. 8 Jingshun East Street, 100015, Beijing, Chaoyang District, China
- Yuxue Xing
- Department of Radiology, Beijing Ditan Hospital, Capital Medical University, No. 8 Jingshun East Street, 100015, Beijing, Chaoyang District, China
- Zixin Zhang
- Department of Radiology, Beijing Ditan Hospital, Capital Medical University, No. 8 Jingshun East Street, 100015, Beijing, Chaoyang District, China
- Ruming Xie
- Department of Radiology, Beijing Ditan Hospital, Capital Medical University, No. 8 Jingshun East Street, 100015, Beijing, Chaoyang District, China
Collapse
|
9
|
Wu Y, Ramai D, Smith ER, Mega PF, Qatomah A, Spadaccini M, Maida M, Papaefthymiou A. Applications of Artificial Intelligence in Gastrointestinal Endoscopic Ultrasound: Current Developments, Limitations and Future Directions. Cancers (Basel) 2024; 16:4196. [PMID: 39766095 PMCID: PMC11674484 DOI: 10.3390/cancers16244196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2024] [Revised: 12/09/2024] [Accepted: 12/14/2024] [Indexed: 01/09/2025] Open
Abstract
Endoscopic ultrasound (EUS) effectively diagnoses malignant and pre-malignant gastrointestinal lesions. In the past few years, artificial intelligence (AI) has shown promising results in enhancing EUS sensitivity and accuracy, particularly for subepithelial lesions (SELs) such as gastrointestinal stromal tumors (GISTs). Furthermore, AI models have shown high accuracy in predicting malignancy in gastric GISTs and in distinguishing between benign and malignant intraductal papillary mucinous neoplasms (IPMNs). The utility of AI has also been applied to existing and emerging technologies involved in the performance and evaluation of EUS-guided biopsies. These advancements may improve training in EUS, allowing trainees to focus on technical skills and image interpretation. This review evaluates the current state of AI in EUS, covering imaging diagnosis, EUS-guided biopsies, and training advancements, and discusses early feasibility studies, recent developments, and applications in clinical practice while addressing pitfalls, limitations, and challenges.
Affiliation(s)
- Yizhong Wu
- Department of Internal Medicine, Baylor Scott & White Round Rock Hospital, Round Rock, TX 78665, USA
- Daryl Ramai
- Division of Gastroenterology, Hepatology and Endoscopy, Brigham and Women’s Hospital, Boston, MA 02115, USA
- Eric R. Smith
- Department of Internal Medicine, Baylor Scott & White Round Rock Hospital, Round Rock, TX 78665, USA
- Paulo F. Mega
- Gastrointestinal Endoscopy Unit, Universidade de Sao Paulo Hospital das Clinicas, São Paulo 05403-010, Brazil
- Abdulrahman Qatomah
- Division of Gastroenterology, Hepatology and Endoscopy, Brigham and Women’s Hospital, Boston, MA 02115, USA
- Marco Spadaccini
- Department of Endoscopy, Humanitas Research Hospital, 20089 Rozzano, Italy
- Marcello Maida
- Department of Medicine and Surgery, School of Medicine and Surgery, University of Enna ‘Kore’, 94100 Enna, Italy
10. Gu P, Mendonca O, Carter D, Dube S, Wang P, Huang X, Li D, Moore JH, McGovern DPB. AI-luminating Artificial Intelligence in Inflammatory Bowel Diseases: A Narrative Review on the Role of AI in Endoscopy, Histology, and Imaging for IBD. Inflamm Bowel Dis 2024; 30:2467-2485. PMID: 38452040. DOI: 10.1093/ibd/izae030.
Abstract
Endoscopy, histology, and cross-sectional imaging serve as fundamental pillars in the detection, monitoring, and prognostication of inflammatory bowel disease (IBD). However, interpretation of these studies often relies on subjective human judgment, which can lead to delays, intra- and interobserver variability, and potential diagnostic discrepancies. With the rising incidence of IBD globally coupled with the exponential digitization of these data, there is a growing demand for innovative approaches to streamline diagnosis and elevate clinical decision-making. In this context, artificial intelligence (AI) technologies emerge as a timely solution to address the evolving challenges in IBD. Early studies using deep learning and radiomics approaches for endoscopy, histology, and imaging in IBD have demonstrated promising results for using AI to detect, diagnose, characterize, phenotype, and prognosticate IBD. Nonetheless, the available literature has inherent limitations and knowledge gaps that need to be addressed before AI can transition into a mainstream clinical tool for IBD. To better understand the potential value of integrating AI in IBD, we review the available literature to summarize our current understanding and identify gaps in knowledge to inform future investigations.
Affiliation(s)
- Phillip Gu
- F. Widjaja Inflammatory Bowel Disease Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Dan Carter
- Department of Gastroenterology, Sheba Medical Center, Tel Aviv, Israel
- Shishir Dube
- F. Widjaja Inflammatory Bowel Disease Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Paul Wang
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Xiuzhen Huang
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Debiao Li
- Biomedical Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Jason H Moore
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Dermot P B McGovern
- F. Widjaja Inflammatory Bowel Disease Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
| |
11
Liu CK, Huang HM. A Novel Self-Supervised Learning-Based Method for Dynamic CT Brain Perfusion Imaging. J Imaging Inform Med 2024:10.1007/s10278-024-01341-1. [PMID: 39633209 DOI: 10.1007/s10278-024-01341-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/29/2024] [Revised: 11/13/2024] [Accepted: 11/13/2024] [Indexed: 12/07/2024]
Abstract
Dynamic computed tomography (CT)-based brain perfusion imaging is a non-invasive technique that provides quantitative measurements of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT). However, to limit radiation dose, dynamic CT is commonly acquired with a low tube voltage and current protocol; the resulting increase in noise degrades the quality and reliability of the perfusion maps. In this study, we propose and investigate the feasibility of using a convolutional neural network and a bi-directional long short-term memory model with an attention mechanism to estimate, in a self-supervised manner, the impulse residue function (IRF) from dynamic CT images. The predicted IRF can then be used to compute the perfusion parameters. We evaluated the performance of the proposed method on both simulated and real brain perfusion data and compared the results with those of two existing methods: singular value decomposition and tensor total-variation. The simulation results showed that the overall parameter-estimation performance of the proposed method was superior to that of the other two methods. The experimental results showed that the perfusion maps calculated by the three methods were visually similar, but small yet significant differences in perfusion parameters were found between the proposed method and the other two. We also observed several low-CBF and low-CBV lesions (i.e., suspected infarct core) identified by all compared methods, but only the proposed method revealed a longer MTT. The proposed method has the potential to yield reliable perfusion maps from dynamic CT images in a self-supervised manner.
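The final step described in this abstract (computing perfusion parameters from the predicted IRF) follows standard indicator-dilution relations; below is a minimal sketch, assuming a flow-scaled impulse residue function k(t) = CBF·R(t) sampled at interval dt, with hypothetical values rather than the authors' implementation:

```python
def perfusion_parameters(k, dt):
    """Compute (CBF, CBV, MTT) from samples of k(t) = CBF * R(t).

    With R(0) = 1 and R non-increasing, CBF is the peak of k(t),
    CBV is the area under k(t), and MTT = CBV / CBF (central volume theorem).
    Units are left abstract; real CT perfusion maps apply additional scaling
    (e.g., hematocrit and tissue-density corrections).
    """
    cbf = max(k)           # peak of the flow-scaled residue function
    cbv = sum(k) * dt      # rectangle-rule approximation of the integral
    mtt = cbv / cbf
    return cbf, cbv, mtt

# Idealized box-shaped residue function: tracer stays 4 s, then washes out.
cbf, cbv, mtt = perfusion_parameters([0.5, 0.5, 0.5, 0.5, 0.0, 0.0], dt=1.0)
print(cbf, cbv, mtt)  # 0.5 2.0 4.0
```

In practice the IRF is estimated per voxel via deconvolution of the tissue curve with the arterial input function, which is exactly where the noise sensitivity addressed by this paper enters.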
Affiliation(s)
- Chi-Kuang Liu
- Department of Medical Imaging, Changhua Christian Hospital, 135 Nanxiao St., Changhua County 500, Taiwan
- Hsuan-Ming Huang
- Institute of Medical Device and Imaging, College of Medicine, Zhongzheng Dist, National Taiwan University, No.1, Sec. 1, Jen Ai Rd, Taipei City, 100, Taiwan.
- Program for Precision Health and Intelligent Medicine, Graduate School of Advanced Technology, Zhongzheng Dist., National Taiwan University, No.1, Sec. 1, Jen Ai Rd., Taipei City, 100, Taiwan.
12
Al-baker B, Ayoub A, Ju X, Mossey P. Patch-based convolutional neural networks for automatic landmark detection of 3D facial images in clinical settings. Eur J Orthod 2024; 46:cjae056. [PMID: 39607679 PMCID: PMC11602742 DOI: 10.1093/ejo/cjae056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/29/2024]
Abstract
BACKGROUND The facial landmark annotation of 3D facial images is crucial in clinical orthodontics and orthognathic surgery for accurate diagnosis and treatment planning. While manual landmarking has traditionally been the gold standard, it is labour-intensive and prone to variability. OBJECTIVE This study presents a framework for automated landmark detection in 3D facial images within a clinical context, using convolutional neural networks (CNNs), and assesses its accuracy against ground-truth data. MATERIAL AND METHODS Initially, an in-house dataset of 408 3D facial images, each annotated with 37 landmarks by an expert, was constructed. Subsequently, a 2.5D patch-based CNN architecture was trained on this dataset to detect the same set of landmarks automatically. RESULTS The developed CNN model demonstrated high accuracy, with an overall mean localization error of 0.83 ± 0.49 mm. The majority of the landmarks had low localization errors, with 95% exhibiting a mean error of less than 1 mm across all axes. Moreover, the method achieved a high detection success rate, with 88% of detections having an error below 1.5 mm and 94% below 2 mm. CONCLUSION The automated method demonstrated accuracy comparable to that of manual annotation in clinical settings, and the proposed framework exhibited improved accuracy over existing models in the literature. Despite these advancements, it is important to acknowledge the limitations of this research, such as its single-centre design and reliance on a single annotator. Future work should address computational-time challenges to achieve further enhancements. This approach has significant potential to improve the efficiency and accuracy of orthodontic and orthognathic procedures.
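For context, a mean localization error such as the 0.83 mm figure reported here is conventionally the mean Euclidean distance between predicted and ground-truth landmarks; a small sketch with hypothetical coordinates, not the study's data or code:

```python
import math

def mean_localization_error(predicted, ground_truth):
    """Mean 3D Euclidean distance (e.g., in mm) over landmark pairs."""
    distances = [math.dist(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(distances) / len(distances)

# Two hypothetical landmarks: one off by 1 mm along z, one predicted exactly.
predicted    = [(0.0, 0.0, 0.0), (1.0, 2.0, 2.0)]
ground_truth = [(0.0, 0.0, 1.0), (1.0, 2.0, 2.0)]
print(mean_localization_error(predicted, ground_truth))  # 0.5
```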
Affiliation(s)
- Bodore Al-baker
- Orthodontic Department, Hamad Dental Center, Hamad Medical Corporation, Doha, Qatar
- Ashraf Ayoub
- Scottish Craniofacial Research Group, Glasgow University Dental Hospital & School, School of Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, United Kingdom
- Xiangyang Ju
- Medical Devices Unit, Department of Clinical Physics and Bioengineering, National Health Service of Greater Glasgow and Clyde, Glasgow, United Kingdom
- Peter Mossey
- Dental Hospital and School, University of Dundee, Dundee, United Kingdom
13
Yıldız Potter İ, Yeritsyan D, Rodriguez EK, Wu JS, Nazarian A, Vaziri A. Detection and Localization of Spine Disorders from Plain Radiography. J Imaging Inform Med 2024; 37:2967-2982. [PMID: 38937344 PMCID: PMC11612062 DOI: 10.1007/s10278-024-01175-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/08/2024] [Revised: 05/16/2024] [Accepted: 06/09/2024] [Indexed: 06/29/2024]
Abstract
Spine disorders can cause severe functional limitations, including back pain, decreased pulmonary function, and increased mortality risk. Plain radiography is the first-line imaging modality for diagnosing suspected spine disorders. Nevertheless, radiographic appearance is not always sufficient due to highly variable patient and imaging parameters, which can lead to misdiagnosis or delayed diagnosis. Employing an accurate automated detection model can alleviate the workload of clinical experts, thereby reducing human error, facilitating earlier detection, and improving diagnostic accuracy. To this end, deep learning-based computer-aided diagnosis (CAD) tools have significantly outperformed the accuracy of traditional CAD software. Motivated by these observations, we proposed a deep learning-based approach for end-to-end detection and localization of spine disorders from plain radiographs. In doing so, we took the first steps in employing state-of-the-art transformer networks to differentiate images of multiple spine disorders from healthy counterparts and to localize the identified disorders, focusing on vertebral compression fractures (VCF) and spondylolisthesis due to their high prevalence and potential severity. The VCF dataset comprised 337 VCF images collected from 138 subjects and 624 normal images collected from 337 subjects. The spondylolisthesis dataset comprised 413 spondylolisthesis images collected from 336 subjects and 782 normal images collected from 413 subjects. Transformer-based models exhibited an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.97 in VCF detection and 0.95 in spondylolisthesis detection. Further, transformers demonstrated significant performance improvements over existing end-to-end approaches, by 4-14% AUC (p-values < 10⁻¹³) for VCF detection and by 14-20% AUC (p-values < 10⁻⁹) for spondylolisthesis detection.
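The AUC values reported here admit a simple rank-based reading: the AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (the Mann-Whitney statistic). A sketch with hypothetical model scores, not the authors' evaluation code:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of AUC: P(score_pos > score_neg), ties count 1/2."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.6]  # scores for images with a disorder (hypothetical)
neg = [0.7, 0.3, 0.1]  # scores for normal images (hypothetical)
print(auc(pos, neg))   # 0.888... (8 of 9 pairs correctly ordered)
```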
Affiliation(s)
- Diana Yeritsyan
- Beth Israel Deaconess Medical Center (BIDMC), Carl J. Shapiro Department of Orthopedic Surgery, Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, RN123, USA
- Edward K Rodriguez
- Beth Israel Deaconess Medical Center (BIDMC), Carl J. Shapiro Department of Orthopedic Surgery, Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, RN123, USA
- Jim S Wu
- Department of Radiology, Massachusetts General Brigham (MGB), Harvard Medical School, 75 Francis Street, Boston, MA, 02215, USA
- Ara Nazarian
- Beth Israel Deaconess Medical Center (BIDMC), Carl J. Shapiro Department of Orthopedic Surgery, Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, RN123, USA
- Department of Orthopaedics Surgery, Yerevan State University, 0025, Yerevan, Armenia
- Ashkan Vaziri
- BioSensics, LLC, 57 Chapel Street, Newton, MA, 02458, USA
14
Singh R, Singh N, Kaur L. Deep learning methods for 3D magnetic resonance image denoising, bias field and motion artifact correction: a comprehensive review. Phys Med Biol 2024; 69:23TR01. [PMID: 39569887 DOI: 10.1088/1361-6560/ad94c7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/16/2024] [Accepted: 11/19/2024] [Indexed: 11/22/2024]
Abstract
Magnetic resonance imaging (MRI) provides detailed structural information about the internal organs and soft-tissue regions of a patient in clinical diagnosis for disease detection, localization, and progress monitoring. MRI scanner manufacturers incorporate various post-acquisition image-processing techniques into the scanner's software tools for different post-processing tasks. These tools provide a final image of adequate quality with the essential features required for accurate clinical reporting and predictive interpretation for better treatment planning. Post-acquisition image-processing tasks for MRI quality enhancement include noise removal, motion artifact reduction, magnetic bias field correction, and eddy-current effect removal. Recently, deep learning (DL) methods have shown great success in many research fields, including image and video applications. DL-based data-driven feature-learning approaches have great potential for MR image denoising and the correction of image-quality-degrading artifacts. Recent studies have demonstrated significant improvements in image-analysis tasks using DL-based convolutional neural network techniques. The promising capabilities and performance of DL techniques in various problem-solving domains have motivated researchers to adapt DL methods to medical image analysis and quality enhancement. This paper presents a comprehensive review of state-of-the-art DL-based MRI quality enhancement and artifact removal methods that regenerate high-quality images while preserving essential anatomical and physiological features. Existing research gaps and future directions are also provided, highlighting potential research areas for future development, along with their importance and advantages in medical imaging.
Affiliation(s)
- Ram Singh
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
- Navdeep Singh
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
- Lakhwinder Kaur
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
15
Li Q, Geng S, Luo H, Wang W, Mo YQ, Luo Q, Wang L, Song GB, Sheng JP, Xu B. Signaling pathways involved in colorectal cancer: pathogenesis and targeted therapy. Signal Transduct Target Ther 2024; 9:266. [PMID: 39370455 PMCID: PMC11456611 DOI: 10.1038/s41392-024-01953-7] [Citation(s) in RCA: 29] [Impact Index Per Article: 29.0] [Received: 03/07/2024] [Revised: 07/25/2024] [Accepted: 08/16/2024] [Indexed: 10/08/2024] Open
Abstract
Colorectal cancer (CRC) remains one of the leading causes of cancer-related mortality worldwide. Its complexity is influenced by various signal transduction networks that govern cellular proliferation, survival, differentiation, and apoptosis. The pathogenesis of CRC is a testament to the dysregulation of these signaling cascades, which culminates in the malignant transformation of colonic epithelium. This review aims to dissect the foundational signaling mechanisms implicated in CRC, to elucidate the generalized principles underpinning neoplastic evolution and progression. We discuss the molecular hallmarks of CRC, including the genomic, epigenomic and microbial features of CRC to highlight the role of signal transduction in the orchestration of the tumorigenic process. Concurrently, we review the advent of targeted and immune therapies in CRC, assessing their impact on the current clinical landscape. The development of these therapies has been informed by a deepening understanding of oncogenic signaling, leading to the identification of key nodes within these networks that can be exploited pharmacologically. Furthermore, we explore the potential of integrating AI to enhance the precision of therapeutic targeting and patient stratification, emphasizing their role in personalized medicine. In summary, our review captures the dynamic interplay between aberrant signaling in CRC pathogenesis and the concerted efforts to counteract these changes through targeted therapeutic strategies, ultimately aiming to pave the way for improved prognosis and personalized treatment modalities in colorectal cancer.
Affiliation(s)
- Qing Li
- The Shapingba Hospital, Chongqing University, Chongqing, China
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Shan Geng
- Central Laboratory, The Affiliated Dazu Hospital of Chongqing Medical University, Chongqing, China
- Hao Luo
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Cancer Center, Daping Hospital, Army Medical University, Chongqing, China
- Wei Wang
- Chongqing Municipal Health and Health Committee, Chongqing, China
- Ya-Qi Mo
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Qing Luo
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Lu Wang
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Guan-Bin Song
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Jian-Peng Sheng
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Bo Xu
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
16
Zhong Y, Chen L, Ding F, Ou W, Zhang X, Weng S. Assessing microvascular invasion in HBV-related hepatocellular carcinoma: an online interactive nomogram integrating inflammatory markers, radiomics, and convolutional neural networks. Front Oncol 2024; 14:1401095. [PMID: 39351352 PMCID: PMC11439624 DOI: 10.3389/fonc.2024.1401095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/14/2024] [Accepted: 08/22/2024] [Indexed: 10/04/2024] Open
Abstract
Objective The early recurrence of hepatocellular carcinoma (HCC) correlates with decreased overall survival. Microvascular invasion (MVI) stands out as a prominent hazard influencing post-resection survival status and metastasis in patients with HBV-related HCC. The study focused on developing a web-based nomogram for the preoperative prediction of MVI in HBV-HCC. Materials and methods 173 HBV-HCC patients from 2017 to 2022 with complete preoperative clinical data and gadopentetate dimeglumine-enhanced magnetic resonance images were randomly divided into training and validation groups at a ratio of 7:3. MRI signatures were extracted by pyradiomics and a deep neural network, 3D ResNet. Clinical factors, blood-cell inflammation markers, and MRI signatures selected by LASSO were incorporated into the predictive nomogram. Predictive accuracy was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) and the concordance index (C-index), along with calibration and decision curve analyses. Results The inflammation marker neutrophil-to-lymphocyte ratio (NLR) was positively correlated with independent MRI radiomics risk factors for MVI. The prediction model combining serum AFP, AST, NLR, 15 radiomics features, and 7 deep features performed better than the clinical and radiomics models, achieving C-index values of 0.926 and 0.917, with AUCs of 0.911 and 0.907, respectively. Conclusion NLR showed a positive correlation with MRI radiomics and deep learning features. The nomogram, incorporating NLR and MRI features, accurately predicted individualized MVI risk preoperatively.
Affiliation(s)
- Yun Zhong
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Abdominal Surgery Research Institute, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Department of Hepatobiliary and Pancreatic Surgery, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Provincial Key Laboratory of Precision Medicine for Cancer, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Lingfeng Chen
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Abdominal Surgery Research Institute, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Department of Hepatobiliary and Pancreatic Surgery, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Provincial Key Laboratory of Precision Medicine for Cancer, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fadian Ding
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Abdominal Surgery Research Institute, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Department of Hepatobiliary and Pancreatic Surgery, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Provincial Key Laboratory of Precision Medicine for Cancer, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Wenshi Ou
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Abdominal Surgery Research Institute, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Department of Hepatobiliary and Pancreatic Surgery, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Provincial Key Laboratory of Precision Medicine for Cancer, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Xiang Zhang
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Abdominal Surgery Research Institute, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Department of Hepatobiliary and Pancreatic Surgery, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Provincial Key Laboratory of Precision Medicine for Cancer, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Shangeng Weng
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Abdominal Surgery Research Institute, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Department of Hepatobiliary and Pancreatic Surgery, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Fujian Provincial Key Laboratory of Precision Medicine for Cancer, The First Affiliated Hospital, Fujian Medical University, Fuzhou, China
17
Szabó V, Szabó BT, Orhan K, Veres DS, Manulis D, Ezhov M, Sanders A. Validation of artificial intelligence application for dental caries diagnosis on intraoral bitewing and periapical radiographs. J Dent 2024; 147:105105. [PMID: 38821394 DOI: 10.1016/j.jdent.2024.105105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/10/2023] [Revised: 05/21/2024] [Accepted: 05/28/2024] [Indexed: 06/02/2024] Open
Abstract
OBJECTIVES This study aimed to assess the reliability of an AI-based system that assists healthcare processes in the diagnosis of caries on intraoral radiographs. METHODS The proximal surfaces of 323 selected teeth on intraoral radiographs were evaluated by two independent observers using an AI-based system (Diagnocat). The presence or absence of carious lesions was recorded during Phase 1. After 4 months, the AI-aided human observers evaluated the same radiographs (Phase 2), and the advanced convolutional neural network (CNN) reassessed the radiographic data (Phase 3). Subsequently, data reflecting human disagreements were excluded (Phase 4). For each phase, the Cohen and Fleiss kappa values, as well as the sensitivity, specificity, positive and negative predictive values, and diagnostic accuracy of Diagnocat, were calculated. RESULTS Across the four phases, the ranges of Cohen kappa values between the human observers and Diagnocat were κ=0.66-1, κ=0.58-0.7, and κ=0.49-0.7, and the Fleiss kappa values were κ=0.57-0.8. The sensitivity, specificity, and diagnostic accuracy values ranged between 0.51-0.76, 0.88-0.97, and 0.76-0.86, respectively. CONCLUSIONS The Diagnocat CNN supports the evaluation of intraoral radiographs for caries diagnosis, as determined by consensus between human and AI system observers. CLINICAL SIGNIFICANCE Our study may aid dentists' understanding of deep learning-based systems developed for dental imaging modalities and contribute to expanding the body of results in the field of AI-supported dental radiology.
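For reference, the agreement and diagnostic metrics quoted in this abstract (Cohen's kappa, sensitivity, specificity, accuracy) can all be derived from a 2×2 contingency table; a minimal sketch with hypothetical counts, not the study's data:

```python
def cohen_kappa(tp, fp, fn, tn):
    """Cohen's kappa: agreement between two binary ratings beyond chance."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    # Expected chance agreement from each rater's marginal totals.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy against a reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical caries calls on 200 proximal surfaces.
tp, fp, fn, tn = 40, 6, 14, 140
print(round(cohen_kappa(tp, fp, fn, tn), 3))  # 0.734
print(diagnostics(tp, fp, fn, tn))
```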
Affiliation(s)
- Viktor Szabó
- Department of Oral Diagnostics, Faculty of Dentistry, Semmelweis University, Budapest, Hungary
- Bence Tamás Szabó
- Department of Oral Diagnostics, Faculty of Dentistry, Semmelweis University, Budapest, Hungary
- Kaan Orhan
- Department of Oral Diagnostics, Faculty of Dentistry, Semmelweis University, Budapest, Hungary; Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application, and Research Center (MEDITAM), Ankara University, Ankara, Turkey
- Dániel Sándor Veres
- Department of Biophysics and Radiation Biology, Semmelweis University, Budapest, Hungary
18
Chang J, Lee KJ, Wang TH, Chen CM. Utilizing ChatGPT for Curriculum Learning in Developing a Clinical Grade Pneumothorax Detection Model: A Multisite Validation Study. J Clin Med 2024; 13:4042. [PMID: 39064082 PMCID: PMC11277936 DOI: 10.3390/jcm13144042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/31/2024] [Revised: 07/05/2024] [Accepted: 07/07/2024] [Indexed: 07/28/2024] Open
Abstract
Background: Pneumothorax detection is often challenging, particularly when radiographic features are subtle. This study introduces a deep learning model that integrates curriculum learning and ChatGPT to enhance the detection of pneumothorax in chest X-rays. Methods: The model training began with large, easily detectable pneumothoraces, gradually incorporating smaller, more complex cases to prevent performance plateauing. The training dataset comprised 6445 anonymized radiographs, validated across multiple sites, and further tested for generalizability in diverse clinical subgroups. Performance metrics were analyzed using descriptive statistics. Results: The model achieved a sensitivity of 0.97 and a specificity of 0.97, with an area under the curve (AUC) of 0.98, demonstrating a performance comparable to that of many FDA-approved devices. Conclusions: This study suggests that a structured approach to training deep learning models, through curriculum learning and enhanced data extraction via natural language processing, can facilitate and improve the training of AI models for pneumothorax detection.
Affiliation(s)
- Joseph Chang
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
- EverFortune.AI Co., Ltd., Taichung 403, Taiwan
- Ti-Hao Wang
- EverFortune.AI Co., Ltd., Taichung 403, Taiwan
- Department of Medicine, China Medical University, Taichung 404, Taiwan
- Department of Radiation Oncology, China Medical University Hospital, Taichung 404, Taiwan
- Chung-Ming Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
19
Zhang H, Cai Z. ConvNextUNet: A small-region attentioned model for cardiac MRI segmentation. Comput Biol Med 2024; 177:108592. [PMID: 38781642 DOI: 10.1016/j.compbiomed.2024.108592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/06/2023] [Revised: 04/08/2024] [Accepted: 05/09/2024] [Indexed: 05/25/2024]
Abstract
Cardiac MRI segmentation is a significant research area in medical image processing, holding immense clinical and scientific importance in assisting the diagnosis and treatment of heart diseases. Currently, existing cardiac MRI segmentation algorithms are often constrained by specific datasets and conditions, leading to a notable decrease in segmentation performance when applied to diverse datasets. These limitations affect the algorithm's overall performance and generalization capabilities. Inspired by ConvNext, we introduce a two-dimensional cardiac MRI segmentation U-shaped network called ConvNextUNet. It is the first application of a combination of ConvNext and the U-shaped architecture in the field of cardiac MRI segmentation. Firstly, we incorporate up-sampling modules into the original ConvNext architecture and combine it with the U-shaped framework to achieve accurate reconstruction. Secondly, we integrate Input Stem into ConvNext, and introduce attention mechanisms along the bridging path. By merging features extracted from both the encoder and decoder, a probability distribution is obtained through linear and nonlinear transformations, serving as attention weights, thereby enhancing the signal of the same region of interest. The resulting attention weights are applied to the decoder features, highlighting the region of interest. This allows the model to simultaneously consider local context and global details during the learning phase, fully leveraging the advantages of both global and local perception for a more comprehensive understanding of cardiac anatomical structures. Consequently, the model demonstrates a clear advantage and robust generalization capability, especially in small-region segmentation. Experimental results on the ACDC, LVQuan19, and RVSC datasets confirm that the ConvNextUNet model outperforms the current state-of-the-art models, particularly in small-region segmentation tasks. 
Furthermore, we conducted cross-dataset training and testing experiments, which revealed that the pre-trained model can accurately segment diverse cardiac datasets, showcasing its powerful generalization capabilities. The source code of this project is available at https://github.com/Zemin-Cai/ConvNextUNet.
Affiliation(s)
- Huiyi Zhang
- The Department of Electronic Engineering, Shantou University, Shantou, Guangdong 515063, PR China; Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Shantou, Guangdong 515063, PR China
- Zemin Cai
- The Department of Electronic Engineering, Shantou University, Shantou, Guangdong 515063, PR China; Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Shantou, Guangdong 515063, PR China
20
Aden D, Zaheer S, Khan S. Possible benefits, challenges, pitfalls, and future perspective of using ChatGPT in pathology. Rev Esp Patol 2024; 57:198-210. [PMID: 38971620 DOI: 10.1016/j.patol.2024.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/29/2024] [Revised: 02/22/2024] [Accepted: 04/16/2024] [Indexed: 07/08/2024]
Abstract
The much-hyped artificial intelligence (AI) model ChatGPT, developed by OpenAI, can benefit physicians, especially pathologists, by saving time that can then be devoted to more significant work. Generative AI is a special class of AI model that uses patterns and structures learned from existing data to create new data. Utilizing ChatGPT in pathology offers a multitude of benefits, encompassing the summarization of patient records, promising prospects in digital pathology, and valuable contributions to education and research in the field. However, certain roadblocks remain, such as integrating ChatGPT with image analysis, which could revolutionize the field of pathology by increasing diagnostic accuracy and precision. The challenges of using ChatGPT include biases from its training data, the need for ample input data, potential risks related to bias and transparency, and the potential adverse outcomes arising from inaccurate content generation. Generating meaningful insights from textual information will also require efficient processing of different types of image data, such as medical images and pathology slides. Due consideration should be given to ethical and legal issues, including bias.
Affiliation(s)
- Durre Aden
- Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India
- Sufian Zaheer
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
- Sabina Khan
- Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India

21
Mastropietro A, Casali N, Taccogna MG, D’Angelo MG, Rizzo G, Peruzzo D. Classification of Muscular Dystrophies from MR Images Improves Using the Swin Transformer Deep Learning Model. Bioengineering (Basel) 2024; 11:580. [PMID: 38927816] [PMCID: PMC11200745] [DOI: 10.3390/bioengineering11060580]
Abstract
Muscular dystrophies present diagnostic challenges, requiring accurate classification for effective diagnosis and treatment. This study investigates the efficacy of deep learning methodologies in classifying these disorders using skeletal muscle MRI scans. Specifically, we assess the performance of the Swin Transformer (SwinT) architecture against traditional convolutional neural networks (CNNs) in distinguishing between healthy individuals, Becker muscular dystrophy (BMD) patients, and limb-girdle muscular dystrophy type 2 (LGMD2) patients. 3T MRI scans from a retrospective dataset of 75 scans (from 54 subjects) were utilized, with multiparametric protocols capturing various MRI contrasts, including T1-weighted and Dixon sequences. The dataset included 17 scans from healthy volunteers, 27 from BMD patients, and 31 from LGMD2 patients. SwinT and the CNNs were trained and validated using a subset of the dataset, with performance evaluated based on accuracy and F-score. Results indicate the superior accuracy of SwinT (0.96), particularly when employing fat fraction (FF) images as input, which proved a valuable contrast for enhancing classification accuracy. Despite limitations, including a modest cohort size, this study provides valuable insights into the application of AI-driven approaches for precise neuromuscular disorder classification, with potential implications for improving patient care.
Affiliation(s)
- Alfonso Mastropietro
- Istituto di Sistemi e Tecnologie Industriali Intelligenti per il Manifatturiero Avanzato, Consiglio Nazionale delle Ricerche, 20133 Milan, Italy
- Nicola Casali
- Istituto di Sistemi e Tecnologie Industriali Intelligenti per il Manifatturiero Avanzato, Consiglio Nazionale delle Ricerche, 20133 Milan, Italy
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Maria Giovanna Taccogna
- Istituto di Tecnologie Biomediche, Consiglio Nazionale delle Ricerche, 20054 Segrate, Milan, Italy
- Maria Grazia D’Angelo
- Unit of Rehabilitation of Rare Diseases of the Central and Peripheral Nervous System, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Lecco, Italy
- Giovanna Rizzo
- Istituto di Sistemi e Tecnologie Industriali Intelligenti per il Manifatturiero Avanzato, Consiglio Nazionale delle Ricerche, 20133 Milan, Italy
- Denis Peruzzo
- Neuroimaging Unit, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Lecco, Italy

22
Li C, Lai D, Jiang X, Zhang K. FERI: A Multitask-based Fairness Achieving Algorithm with Applications to Fair Organ Transplantation. AMIA Jt Summits Transl Sci Proc 2024; 2024:593-602. [PMID: 38827050] [PMCID: PMC11141863]
Abstract
Liver transplantation often faces fairness challenges across subgroups defined by sensitive attributes such as age group, gender, and race/ethnicity. Machine learning models for outcome prediction can introduce additional biases. Therefore, we introduce the Fairness through the Equitable Rate of Improvement in Multitask Learning (FERI) algorithm for fair prediction of graft-failure risk in liver transplant patients. FERI constrains subgroup loss by balancing learning rates and preventing subgroup dominance in the training process. Our results show that FERI maintained high predictive accuracy, with AUROC and AUPRC comparable to baseline models. More importantly, FERI demonstrated an ability to improve fairness without sacrificing accuracy. Specifically, for gender, FERI reduced the demographic parity disparity by 71.74%, and for age group, it decreased the equalized odds disparity by 40.46%. The FERI algorithm thus advances fairness-aware predictive modeling in healthcare and provides a valuable tool for equitable healthcare systems.
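The two disparity measures quoted above can be made concrete with a short sketch. The following pure-Python functions (illustrative names and definitions, not the authors' FERI code) compute the demographic parity disparity and the equalized odds disparity for binary predictions and a binary sensitive attribute:

```python
# Minimal sketch of the group-fairness gaps reported in the abstract.
# Assumes binary labels/predictions (0/1) and a binary group attribute (0/1);
# function names are ours, not from the paper.

def _rate(preds, cond):
    # P(pred = 1) among samples where cond is True
    sel = [p for p, c in zip(preds, cond) if c]
    return sum(sel) / len(sel) if sel else 0.0

def demographic_parity_disparity(y_pred, group):
    # |P(pred=1 | group=0) - P(pred=1 | group=1)|
    r0 = _rate(y_pred, [g == 0 for g in group])
    r1 = _rate(y_pred, [g == 1 for g in group])
    return abs(r0 - r1)

def equalized_odds_disparity(y_true, y_pred, group):
    # max over y in {0,1} of |P(pred=1 | Y=y, group=0) - P(pred=1 | Y=y, group=1)|
    gaps = []
    for y in (0, 1):
        r0 = _rate(y_pred, [g == 0 and t == y for g, t in zip(group, y_true)])
        r1 = _rate(y_pred, [g == 1 and t == y for g, t in zip(group, y_true)])
        gaps.append(abs(r0 - r1))
    return max(gaps)
```

A "71.74% reduction" in the paper's terms would then be a relative drop in `demographic_parity_disparity` between the baseline and FERI models.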
Affiliation(s)
- Can Li
- Department of Biostatistics and Data Science, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Dejian Lai
- Department of Biostatistics and Data Science, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Xiaoqian Jiang
- Department of Health Data Science and Artificial Intelligence, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Kai Zhang
- Department of Health Data Science and Artificial Intelligence, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA

23
S V A, G DB, Raman R. Automatic Identification and Severity Classification of Retinal Biomarkers in SD-OCT Using Dilated Depthwise Separable Convolution ResNet with SVM Classifier. Curr Eye Res 2024; 49:513-523. [PMID: 38251704] [DOI: 10.1080/02713683.2024.2303713]
Abstract
PURPOSE Diagnosis of uveitic macular edema (UME) using spectral-domain OCT (SD-OCT) is a promising method for early detection and monitoring of sight-threatening visual impairment. Viewing multiple B-scans and identifying biomarkers is challenging and time-consuming for clinical practitioners. To overcome these challenges, this paper proposes a hybrid image classification framework for predicting the presence of biomarkers such as intraretinal cysts (IRC), hyperreflective foci (HRF), hard exudates (HE), and neurosensory detachment (NSD) in OCT B-scans, along with their severity. METHODS A dataset of 10,880 B-scans from 85 uveitic patients was collected and graded by two board-certified ophthalmologists for the presence of biomarkers. A novel image classification framework, Dilated Depthwise Separable Convolution ResNet (DDSC-RN) with an SVM classifier, was developed to achieve network compression with a larger receptive field that captures both low- and high-level features of the biomarkers without loss of classification accuracy. The severity level of each biomarker was predicted from the feature map extracted by the proposed DDSC-RN network. RESULTS The proposed hybrid model was evaluated using ground-truth labels from the hospital. The deep learning model first identified the presence of biomarkers in B-scans, achieving an overall accuracy of 98.64%, comparable to the performance of other state-of-the-art models such as DRN-C-42 and ResNet-34. The SVM classifier then predicted the severity of each biomarker, achieving an overall accuracy of 89.3%. CONCLUSIONS A new hybrid model accurately identifies four retinal biomarkers on a tissue map and predicts their severity. The model outperforms other methods for identifying multiple biomarkers in complex OCT B-scans. This helps clinicians screen multiple B-scans of UME more effectively, leading to better treatment outcomes.
Affiliation(s)
- Adithiya S V
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Dharani Bai G
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India

24
Storelli L, Pagani E, Rubin M, Margoni M, Filippi M, Rocca MA. A Fully Automatic Method to Segment Choroid Plexuses in Multiple Sclerosis Using Conventional MRI Sequences. J Magn Reson Imaging 2024; 59:1643-1652. [PMID: 37530734] [DOI: 10.1002/jmri.28937]
Abstract
BACKGROUND Choroid plexus (CP) volume has been recently proposed as a proxy for brain neuroinflammation in multiple sclerosis (MS). PURPOSE To develop and validate a fast automatic method to segment CP using routinely acquired brain T1-weighted and FLAIR MRI. STUDY TYPE Retrospective. POPULATION Fifty-five MS patients (33 relapsing-remitting, 22 progressive; mean age = 46.8 ± 10.2 years; 31 women) and 60 healthy controls (HC; mean age = 36.1 ± 12.6 years, 33 women). FIELD STRENGTH/SEQUENCE 3D T2-weighted FLAIR and 3D T1-weighted gradient echo sequences at 3.0 T. ASSESSMENT Brain tissues were segmented on T1-weighted sequences, and a Gaussian mixture model (GMM) was fitted to FLAIR image intensities obtained from the SIENAX ventricle masks. A second GMM was then applied to the thresholded and filtered ventricle mask. CP volumes were automatically determined and compared with those from manual segmentation by two raters (with 3 and 10 years' experience; reference standard). CP volumes from previously published automatic segmentation methods (the freely available FreeSurfer [FS] and FS-GMM) were also compared with the reference standard. Expanded Disability Status Scale (EDSS) score was assessed within 3 days of MRI. Computational time was assessed for each automatic technique and for manual segmentation. STATISTICAL TESTS Comparisons of CP volumes with the reference standard were evaluated with Bland-Altman analysis. Dice similarity coefficients (DSC) were computed to assess automatic CP segmentations. Volume differences between MS and HC for each method were assessed with t-tests, and correlations of CP volumes with EDSS were assessed with Pearson's correlation coefficients (R). A P value <0.05 was considered statistically significant. RESULTS The proposed method had the highest segmentation accuracy against manual segmentation (mean DSC = 0.65 ± 0.06), compared with FS (mean DSC = 0.37 ± 0.08) and FS-GMM (0.58 ± 0.06). The percentage CP volume differences relative to manual segmentation were -0.1% ± 0.23, 4.6% ± 2.5, and -0.48% ± 2 for the proposed method, FS, and FS-GMM, respectively. The Pearson's correlations between automatically and manually obtained CP volumes were 0.70, 0.54, and 0.56 for the proposed method, FS, and FS-GMM, respectively. A significant correlation between CP volume and EDSS was found for the proposed automatic pipeline (R = 0.2), for FS-GMM (R = 0.3), and for manual segmentation (R = 0.4). Computational time for the proposed method (32 ± 2 minutes) was similar to manual segmentation (20 ± 5 minutes) but <25% of that of the FS (120 ± 15 minutes) and FS-GMM (125 ± 15 minutes) methods. DATA CONCLUSION This study developed an accurate and easily implementable method for automatic CP segmentation in MS using T1-weighted and FLAIR MRI. EVIDENCE LEVEL 1 TECHNICAL EFFICACY: Stage 4.
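The Dice similarity coefficient used to score the segmentations above has a compact definition worth spelling out: twice the overlap of the two masks divided by their total size. A minimal sketch (ours, not the study's pipeline) for flat binary masks:

```python
# Illustrative Dice similarity coefficient (DSC) between an automatic and a
# manual binary segmentation, represented as flat 0/1 sequences of equal length.

def dice_coefficient(mask_a, mask_b):
    # DSC = 2 * |A intersect B| / (|A| + |B|); two empty masks agree perfectly
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

A mean DSC of 0.65, as reported for the proposed method, thus means that on average roughly two-thirds of the combined mask area was shared with the manual reference.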
Affiliation(s)
- Loredana Storelli
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Elisabetta Pagani
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Martina Rubin
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurology Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Monica Margoni
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurology Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Massimo Filippi
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurology Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurorehabilitation Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurophysiology Service, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Vita-Salute San Raffaele University, Milan, Italy
- Maria A Rocca
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurology Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Vita-Salute San Raffaele University, Milan, Italy

25
Gu C, Lee M. Deep Transfer Learning Using Real-World Image Features for Medical Image Classification, with a Case Study on Pneumonia X-ray Images. Bioengineering (Basel) 2024; 11:406. [PMID: 38671827] [PMCID: PMC11048359] [DOI: 10.3390/bioengineering11040406]
Abstract
Deep learning has profoundly influenced various domains, particularly medical image analysis. Traditional transfer learning approaches in this field rely on models pretrained on domain-specific medical datasets, which limits their generalizability and accessibility. In this study, we propose a novel framework called real-world feature transfer learning, which utilizes backbone models initially trained on large-scale general-purpose datasets such as ImageNet. We evaluate the effectiveness and robustness of this approach compared to models trained from scratch, focusing on the task of classifying pneumonia in X-ray images. Our experiments, which included converting grayscale images to RGB format, demonstrate that real-world feature transfer learning consistently outperforms conventional training approaches across various performance metrics. This advancement has the potential to accelerate deep learning applications in medical imaging by leveraging the rich feature representations learned from general-purpose pretrained models. The proposed methodology overcomes the limitations of domain-specific pretrained models, thereby enabling accelerated innovation in medical diagnostics and healthcare. From a mathematical perspective, we formalize the concept of real-world feature transfer learning and provide a rigorous mathematical formulation of the problem. Our experimental results provide empirical evidence supporting the effectiveness of this approach, laying the foundation for further theoretical analysis and exploration. This work contributes to the broader understanding of feature transferability across domains and has significant implications for the development of accurate and efficient models for medical image analysis, even in resource-constrained settings.
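The grayscale-to-RGB conversion mentioned in the experiments is, in its simplest form, channel replication: ImageNet-pretrained backbones expect three-channel input, so a single-channel X-ray intensity is repeated across the channel axis. A minimal pure-Python illustration on a nested-list "image" (our assumption about this preprocessing detail, not the authors' code):

```python
# Sketch of single-channel -> 3-channel conversion for ImageNet-style backbones.
# image: H x W list of intensities; result: H x W x 3 with each value repeated.

def grayscale_to_rgb(image):
    return [[[v, v, v] for v in row] for row in image]
```

In practice the same step is typically done on arrays/tensors by repeating along the channel dimension before normalization.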
Affiliation(s)
- Chanhoe Gu
- Department of Intelligent Semiconductor Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
- Minhyeok Lee
- Department of Intelligent Semiconductor Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
- School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea

26
Mohsin ASM, Choudhury SH. Label-free quantification of gold nanoparticles at the single-cell level using a multi-column convolutional neural network (MC-CNN). Analyst 2024; 149:2412-2419. [PMID: 38487894] [DOI: 10.1039/d3an01982a]
Abstract
Gold nanoparticles (AuNPs) are extensively used in cellular imaging, single-particle tracking, disease diagnosis, studies of membrane protein interaction, and drug delivery. Understanding the dynamics of AuNP uptake in live cells is crucial for optimizing their efficacy and safety. Traditional manual methods for quantifying AuNP uptake are time-consuming and subjective, limiting their scalability and accuracy. Available fluorescence-based techniques are limited by photobleaching and photoblinking, optical microscopy techniques by the diffraction limit, and electron-microscopy-based imaging techniques are destructive and unsuitable for live-cell imaging. Furthermore, the resulting images may contain hundreds of particles with varied intensities, blurring, and substantial occlusion, making it difficult to quantify AuNP uptake manually. To overcome these issues and measure AuNP uptake by live cells, we annotated a dataset of dark-field images of 50 nm-radius AuNPs at different incubation durations. Then, to count the number of particles present in a cell, we created a customized multi-column convolutional neural network (MC-CNN). The customized MC-CNN outperformed typical particle-counting architectures when compared against spectroscopy-based counting. This will allow researchers to gain a better understanding of AuNP behavior and interactions with cells, paving the way for advancements in nanomedicine, drug delivery, and biomedical research. The code for this paper is available at the following link: https://github.com/Namerlight/LabelFree_AuNP_Quantification.
Affiliation(s)
- Abu S M Mohsin
- Nanotechnology, IoT and Applied Machine Learning Research Group, Brac University, Dhaka, Bangladesh
- Shadab H Choudhury
- Nanotechnology, IoT and Applied Machine Learning Research Group, Brac University, Dhaka, Bangladesh

27
Yuh WT, Khil EK, Yoon YS, Kim B, Yoon H, Lim J, Lee KY, Yoo YS, An KD. Deep Learning-Assisted Quantitative Measurement of Thoracolumbar Fracture Features on Lateral Radiographs. Neurospine 2024; 21:30-43. [PMID: 38569629] [PMCID: PMC10992637] [DOI: 10.14245/ns.2347366.683]
Abstract
OBJECTIVE This study aimed to develop and validate a deep learning (DL) algorithm for the quantitative measurement of thoracolumbar (TL) fracture features, and to evaluate its efficacy across varying levels of clinical expertise. METHODS Using a pretrained Mask Region-Based Convolutional Neural Network (Mask R-CNN) model, originally developed for vertebral body segmentation and fracture detection, we fine-tuned the model and added a new module for measuring fracture metrics (compression rate [CR], Cobb angle [CA], Gardner angle [GA], and sagittal index [SI]) from lumbar spine lateral radiographs. These metrics were derived from six-point labeling by 3 radiologists, forming the ground truth (GT). Training utilized 1,000 nonfractured and 318 fractured radiographs, while validations employed 213 internal and 200 external fractured radiographs. The accuracy of the DL algorithm in quantifying fracture features was evaluated against GT using the intraclass correlation coefficient. Additionally, 4 readers with varying expertise levels, including trainees and an attending spine surgeon, performed measurements with and without DL assistance, and their results were compared to GT and the DL model. RESULTS The DL algorithm demonstrated good to excellent agreement with GT for CR, CA, GA, and SI in both internal (0.860, 0.944, 0.932, and 0.779, respectively) and external (0.836, 0.940, 0.916, and 0.815, respectively) validations. DL-assisted measurements significantly improved most measurement values, particularly for trainees. CONCLUSION The DL algorithm was validated as an accurate tool for quantifying TL fracture features using radiographs. DL-assisted measurement is expected to expedite the diagnostic process and enhance reliability, particularly benefiting less experienced clinicians.
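Two of the fracture metrics above reduce to simple geometry on labeled landmark points. The following is a hedged sketch with our own simplified definitions (not the study's validated algorithm): a Cobb-style angle as the angle between two endplate lines, and one common form of the compression rate based on anterior versus posterior vertebral body height:

```python
# Illustrative geometry for radiograph-based fracture metrics.
# Endplates are given as two (x, y) landmark points each; definitions are
# simplified assumptions, not the paper's exact measurement protocol.
import math

def line_angle(p, q):
    # orientation of the line through p and q, in degrees
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def cobb_angle(upper_endplate, lower_endplate):
    # angle between the two endplate lines, folded into [0, 90] degrees
    a = abs(line_angle(*upper_endplate) - line_angle(*lower_endplate)) % 180.0
    return min(a, 180.0 - a)

def compression_rate(anterior_height, posterior_height):
    # (1 - anterior/posterior) * 100: one common definition of CR, in percent
    return (1.0 - anterior_height / posterior_height) * 100.0
```

The six-point labeling mentioned in the abstract would supply the corner landmarks from which such heights and endplate lines are derived.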
Affiliation(s)
- Woon Tak Yuh
- Department of Neurosurgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Eun Kyung Khil
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Department of Radiology, Fastbone Orthopedic Hospital, Hwaseong, Korea
- Yu Sung Yoon
- Department of Radiology, Kyungpook National University Hospital, School of Medicine, Kyungpook National University, Daegu, Korea
- Jihe Lim
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Kyoung Yeon Lee
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Yeong Seo Yoo
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Kyeong Deuk An
- Department of Neurosurgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea

28
Tian R, Lu G, Tang S, Sang L, Ma H, Qian W, Yang W. Benign and malignant classification of breast tumor ultrasound images using conventional radiomics and transfer learning features: A multicenter retrospective study. Med Eng Phys 2024; 125:104117. [PMID: 38508797] [DOI: 10.1016/j.medengphy.2024.104117]
Abstract
This study aims to establish an effective benign and malignant classification model for breast tumor ultrasound images by using conventional radiomics and transfer learning features. We collaborated with a local hospital and collected a base dataset (Dataset A) consisting of 1050 cases of single lesion 2D ultrasound images from patients, with a total of 593 benign and 357 malignant tumor cases. The experimental approach comprises three main parts: conventional radiomics, transfer learning, and feature fusion. Furthermore, we assessed the model's generalizability by utilizing multicenter data obtained from Datasets B and C. The results from conventional radiomics indicated that the SVM classifier achieved the highest balanced accuracy of 0.791, while XGBoost obtained the highest AUC of 0.854. For transfer learning, we extracted deep features from ResNet50, Inception-v3, DenseNet121, MNASNet, and MobileNet. Among these models, MNASNet, with 640-dimensional deep features, yielded the optimal performance, with a balanced accuracy of 0.866, AUC of 0.937, sensitivity of 0.819, and specificity of 0.913. In the feature fusion phase, we trained SVM, ExtraTrees, XGBoost, and LightGBM with early fusion features and evaluated them with weighted voting. This approach achieved the highest balanced accuracy of 0.964 and AUC of 0.981. Combining conventional radiomics and transfer learning features demonstrated clear advantages over using individual features for breast tumor ultrasound image classification. This automated diagnostic model can ease patient burden and provide additional diagnostic support to radiologists. The performance of this model encourages future prospective research in this domain.
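The weighted-voting fusion step described above can be sketched in a few lines: each classifier contributes a malignancy probability, and the ensemble decision is a weighted average thresholded at 0.5. The weights and names below are illustrative assumptions, not the study's tuned values:

```python
# Minimal sketch of weighted soft voting over per-classifier probabilities.
# probs: one predicted probability per classifier (e.g. SVM, ExtraTrees,
# XGBoost, LightGBM); weights: illustrative per-classifier weights.

def weighted_vote(probs, weights, threshold=0.5):
    total = sum(weights)
    score = sum(p * w for p, w in zip(probs, weights)) / total
    return score, int(score >= threshold)  # (ensemble probability, class label)
```

With early fusion, the same classifiers would first be trained on the concatenated radiomics and deep-feature vectors, and voting would then combine their outputs.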
Affiliation(s)
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Department of Nuclear Medicine, General Hospital of Northern Theatre Command, Shenyang, China
- Shiting Tang
- Department of Orthopedics, Joint Surgery and Sports Medicine, The First Hospital of China Medical University, Shenyang, China
- Liang Sang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, China
- He Ma
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China

29
Rolfe M, Hayes S, Smith M, Owen M, Spruth M, McCarthy C, Forkan A, Banerjee A, Hocking RK. An AI based smart-phone system for asbestos identification. J Hazard Mater 2024; 463:132853. [PMID: 37918071] [DOI: 10.1016/j.jhazmat.2023.132853]
Abstract
Asbestos identification is a complex environmental and economic challenge. Commercial identification of asbestos typically involves sending samples to a laboratory, where a trained analyst uses light microscopy and specialized mounting to identify the morphologically distinct signatures of asbestos. In this work, we investigate the use of a portable (30x) microscope that works with a smartphone camera to develop an image recognition system. 7328 images from over 1000 distinct samples of cement sheet from Melbourne, Australia were used to train a phone-based image recognition system for asbestos identification. Three common CNNs were tested: ResNet101, InceptionV3, and VGG_16, with ResNet101 achieving the best result. Asbestos was identified correctly 90% of the time using the phone-based system with no specialized mounting. The image recognition system was trained with ResNet101, a convolutional neural network deep learning model that weights layers with a residual function. Achieving an accuracy of 98.46% and a loss of 3.8%, ResNet101 was found to produce a more accurate model for this use case than the other deep learning neural networks.
Affiliation(s)
- Michael Rolfe
- Department of Chemistry and Biotechnology and Department of Computing Technologies, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC 3122, Australia
- Samantha Hayes
- Agon Environmental Pty Ltd, 63-85 Turner Street, Port Melbourne, VIC 3207, Australia
- Meaghan Smith
- Department of Chemistry and Biotechnology and Department of Computing Technologies, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC 3122, Australia
- Matthew Owen
- Identifibre Pty Ltd, 67 Atherton Road, Oakleigh, VIC 3166, Australia
- Michael Spruth
- Agon Environmental Pty Ltd, 63-85 Turner Street, Port Melbourne, VIC 3207, Australia
- Chris McCarthy
- Department of Chemistry and Biotechnology and Department of Computing Technologies, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC 3122, Australia
- Abdur Forkan
- Department of Chemistry and Biotechnology and Department of Computing Technologies, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC 3122, Australia
- Abhik Banerjee
- Department of Chemistry and Biotechnology and Department of Computing Technologies, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC 3122, Australia
- Rosalie K Hocking
- Department of Chemistry and Biotechnology and Department of Computing Technologies, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC 3122, Australia

30
Waldner MJ, Strobel D. Ultrasound Diagnosis of Hepatocellular Carcinoma: Is the Future Defined by Artificial Intelligence? Ultraschall Med 2024; 45:8-12. [PMID: 38301631] [DOI: 10.1055/a-2171-2674]
Affiliation(s)
- Deike Strobel
- Medical Clinic 1, Erlangen University Hospital, Erlangen, Germany

31
Mese I, Altintas Taslicay C, Sivrioglu AK. Synergizing photon-counting CT with deep learning: potential enhancements in medical imaging. Acta Radiol 2024; 65:159-166. [PMID: 38146126] [DOI: 10.1177/02841851231217995]
Abstract
This review article highlights the potential of integrating photon-counting computed tomography (CT) and deep learning algorithms in medical imaging to enhance diagnostic accuracy, improve image quality, and reduce radiation exposure. The use of photon-counting CT provides superior image quality, reduced radiation dose, and material decomposition capabilities, while deep learning algorithms excel in automating image analysis and improving diagnostic accuracy. The integration of these technologies can lead to enhanced material decomposition and classification, spectral image analysis, predictive modeling for individualized medicine, workflow optimization, and radiation dose management. However, data requirements, computational resources, and regulatory and ethical concerns remain challenges that need to be addressed to fully realize the potential of this technology. The fusion of photon-counting CT and deep learning algorithms is poised to revolutionize medical imaging and transform patient care.
Affiliation(s)
- Ismail Mese
- Department of Radiology, Health Sciences University, Erenkoy Mental Health and Neurology Training and Research Hospital, Istanbul, Turkey

32
Park S, Kim JH, Ahn Y, Lee CH, Kim YG, Yuh WT, Hyun SJ, Kim CH, Kim KJ, Chung CK. Multi-pose-based convolutional neural network model for diagnosis of patients with central lumbar spinal stenosis. Sci Rep 2024; 14:203. [PMID: 38168665] [PMCID: PMC10761871] [DOI: 10.1038/s41598-023-50885-9]
Abstract
Although the role of plain radiographs in diagnosing lumbar spinal stenosis (LSS) has declined in importance since the advent of magnetic resonance imaging (MRI), the diagnostic ability of plain radiographs improves dramatically when combined with deep learning. Previously, we developed a convolutional neural network (CNN) model using a radiograph for diagnosing LSS. In this study, we aimed to improve and generalize the performance of CNN models and overcome the limitation of the single-pose-based CNN (SP-CNN) model by using multi-pose radiographs. Individuals with severe or no LSS, confirmed using MRI, were enrolled. Lateral radiographs of patients in three postures were collected. We developed a multi-pose-based CNN (MP-CNN) model using the encoders of the three SP-CNN models (extension, flexion, and neutral postures). We compared the validation results of the MP-CNN model using four algorithms pretrained with ImageNet. The MP-CNN model underwent additional internal and external validations to measure generalization performance. The ResNet50-based MP-CNN model achieved the largest area under the receiver operating characteristic curve (AUROC) of 91.4% (95% confidence interval [CI] 90.9-91.8%) for internal validation. The AUROCs of the MP-CNN model were 91.3% (95% CI 90.7-91.9%) and 79.5% (95% CI 78.2-80.8%) for the extra-internal and external validation, respectively. The MP-CNN-based heatmap offered a logical decision-making direction through optimized visualization. This model holds potential as a screening tool for LSS diagnosis, offering an explainable rationale for its prediction.
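The AUROC figures reported above can be computed without plotting a curve, via the rank-sum identity AUROC = P(score of a random positive > score of a random negative), with ties counted as 1/2. A small sketch (ours, not the authors' evaluation code):

```python
# Illustrative AUROC via the Mann-Whitney identity: the fraction of
# positive/negative pairs where the positive sample scores higher (ties = 1/2).

def auroc(y_true, scores):
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 91.4% thus means the model ranked a randomly chosen severe-LSS radiograph above a randomly chosen non-LSS radiograph about 91% of the time.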
Affiliation(s)
- Seyeon Park: Transdisciplinary Department of Medicine & Advanced Technology, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Jun-Hoe Kim: Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Youngbin Ahn: Transdisciplinary Department of Medicine & Advanced Technology, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Chang-Hyun Lee: Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea; Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Young-Gon Kim: Transdisciplinary Department of Medicine & Advanced Technology, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Woon Tak Yuh: Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Seung-Jae Hyun: Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Chi Heon Kim: Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea; Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Ki-Jeong Kim: Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Chun Kee Chung: Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea; Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Brain and Cognitive Sciences, Seoul National University College of Natural Sciences, Seoul, Republic of Korea
33
Singh K, Kaur N, Prabhu A. Combating COVID-19 Crisis using Artificial Intelligence (AI) Based Approach: Systematic Review. Curr Top Med Chem 2024; 24:737-753. [PMID: 38318824 DOI: 10.2174/0115680266282179240124072121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Revised: 12/19/2023] [Accepted: 12/27/2023] [Indexed: 02/07/2024]
Abstract
BACKGROUND SARS-CoV-2, the novel coronavirus that causes COVID-19, has wreaked havoc around the globe, with victims displaying a wide range of complications that have encouraged medical professionals to look for innovative technical solutions and therapeutic approaches. Artificial intelligence-based methods have played a significant part in tackling complicated issues, and some institutions were quick to adopt and tailor these solutions in response to the obstacles of the COVID-19 pandemic. In this review article, we cover several DL techniques for COVID-19 detection and diagnosis, as well as ML techniques for COVID-19 identification, severity classification, vaccine and drug development, mortality rate prediction, contact tracing, risk assessment, and public distancing. This review illustrates the overall impact of AI/ML tools on tackling and managing the outbreak. PURPOSE The focus of this research was to undertake a thorough evaluation of the literature on the role of Artificial Intelligence (AI) as a complete and efficient solution in the battle against the COVID-19 epidemic in the domains of disease detection and diagnostics, mortality prediction, and vaccine and drug development. METHODS A comprehensive search of PubMed, Web of Science, and Science Direct was conducted following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to find all potentially suitable papers published and made publicly available between December 1, 2019, and August 2023. COVID-19, along with AI-specific terms, was used to create the query syntax. RESULTS During the period covered by the search strategy, 961 articles were published and released online. Of these, a total of 135 papers were chosen for further investigation. The four main topics of AI applications used to tackle the COVID-19 crisis were mortality rate prediction; early detection and diagnosis; vaccine and drug development; and the incorporation of AI for supervising and controlling the pandemic. Of the 135 papers, 60 focused on the detection and diagnosis of COVID-19. Next, 19 of the 135 studies applied a machine-learning approach to mortality rate prediction. Another 22 publications emphasized vaccine and drug development. Finally, the remaining studies concentrated on controlling the COVID-19 pandemic with AI-based approaches. CONCLUSION In this comprehensive study, we compiled papers from the available COVID-19 literature that used AI-based methodologies to impart insights into various COVID-19 topics. Our results highlight crucial characteristics, data types, and COVID-19 tools that can aid in facilitating medical and translational research.
Affiliation(s)
- Kavya Singh: Department of Biotechnology, Banasthali University, Banasthali Vidyapith, Banasthali, 304022, Rajasthan, India
- Navjeet Kaur: Department of Chemistry & Division of Research and Development, Lovely Professional University, Phagwara, 144411, Punjab, India
- Ashish Prabhu: Biotechnology Department, NIT Warangal, Warangal, 506004, Telangana, India
34
Ghods K, Azizi A, Jafari A, Ghods K. Application of Artificial Intelligence in Clinical Dentistry, a Comprehensive Review of Literature. JOURNAL OF DENTISTRY (SHIRAZ, IRAN) 2023; 24:356-371. [PMID: 38149231 PMCID: PMC10749440 DOI: 10.30476/dentjods.2023.96835.1969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Revised: 01/04/2023] [Accepted: 03/05/2023] [Indexed: 12/28/2023]
Abstract
Statement of the Problem In recent years, the use of artificial intelligence (AI) has become increasingly popular in dentistry because it facilitates the process of diagnosis and clinical decision-making. However, AI holds several prominent drawbacks, which restrict its wide application today. It is necessary for dentists to be aware of AI's pros and cons before its implementation. Purpose Therefore, the present study was conducted to comprehensively review the various applications of AI in all dental branches, along with its advantages and disadvantages. Materials and Method For this review article, a complete query was carried out on the PubMed and Google Scholar databases, and the studies published during 2010-2022 were collected using the keywords "Artificial Intelligence," "Dentistry," "Machine learning," "Deep learning," and "Diagnostic System." Ultimately, 116 relevant articles focused on artificial intelligence in dentistry were selected and evaluated. Results Recent research has reported AI applications in detecting dental abnormalities and oral malignancies based on radiographic views and histopathological features, designing dental implants and crowns, determining the tooth preparation finishing line, analyzing growth patterns, estimating biological age, predicting the viability of dental pulp stem cells, analyzing the gene expression of periapical lesions, forensic dentistry, and predicting the success rate of treatments. Despite AI's benefits in clinical dentistry, three controversial challenges, namely ease of use, financial return on investment, and evidence of performance, exist and need to be managed. Conclusion As evidenced by the obtained results, the most crucial progress of AI is in diagnostic systems for oral malignancies. However, AI's newest advancements in various branches of dentistry require further scientific work before being applied to clinical practice. Moreover, the widespread use of AI in clinical dentistry is only achievable when its challenges are appropriately managed.
Affiliation(s)
- Kimia Ghods: Student of Dentistry, Membership of Dental Material Research Center, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Arash Azizi: Dept. Oral Medicine, Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Aryan Jafari: Student of Dentistry, Membership of Dental Material Research Center, Tehran
- Kian Ghods: Dept. of Mathematics and Industrial Engineering, Polytechnique Montreal, Montreal, Canada
35
Fum WKS, Md Shah MN, Raja Aman RRA, Abd Kadir KA, Wen DW, Leong S, Tan LK. Generation of fluoroscopy-alike radiographs as alternative datasets for deep learning in interventional radiology. Phys Eng Sci Med 2023; 46:1535-1552. [PMID: 37695509 DOI: 10.1007/s13246-023-01317-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 08/03/2023] [Indexed: 09/12/2023]
Abstract
In fluoroscopy-guided interventions (FGIs), obtaining large quantities of labelled data for deep learning (DL) can be difficult. Synthetic labelled data can serve as an alternative, generated via pseudo-2D projections of CT volumetric data. However, contrasted vessels have low visibility in simple 2D projections of contrasted CT data. To overcome this, we propose an alternative method to generate fluoroscopy-like radiographs from contrasted head CT angiography (CTA) volumetric data. The technique involves segmentation of brain tissue, bone, and contrasted vessels from the CTA volumetric data, followed by an algorithm to adjust HU values and, finally, a standard ray-based projection to generate the 2D image. The resulting synthetic images were compared to clinical fluoroscopy images for perceptual similarity and subject-contrast measurements. Good perceptual similarity was demonstrated for vessel-enhanced synthetic images as compared to the clinical fluoroscopic images. Statistical tests of equivalence showed that enhanced synthetic and clinical images have statistically equivalent mean subject contrast within 25% bounds. Furthermore, validation experiments confirmed that the proposed method for generating synthetic images improved the performance of DL models in certain regression tasks, such as localizing anatomical landmarks in clinical fluoroscopy images. Through enhanced pseudo-2D projection of CTA volume data, synthetic images with features similar to real clinical fluoroscopic images can be generated. The use of synthetic images as an alternative source for DL datasets represents a potential solution for the application of DL in FGI procedures.
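The final step described in the abstract above, a ray-based projection of a CT volume into a 2D radiograph-like image, can be illustrated with a minimal parallel-ray sum in pure Python. This is an illustrative sketch only, not the authors' implementation; the function name and nested-list volume format are assumptions for the example.

```python
def ray_sum_projection(volume):
    """Project a 3D volume (list of depth x rows x cols voxel values)
    to a 2D image by summing along the depth axis, i.e. a simple
    parallel-ray line integral per pixel."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [
        [sum(volume[z][r][c] for z in range(depth)) for c in range(cols)]
        for r in range(rows)
    ]

# A 2x2x2 volume of ones projects to a 2x2 image of twos.
image = ray_sum_projection([[[1, 1], [1, 1]], [[1, 1], [1, 1]]])
```

Real projection pipelines additionally model divergent-beam geometry and attenuation weighting; the sum above only captures the core idea of collapsing the volume along the ray direction.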
Affiliation(s)
- Wilbur K S Fum: Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia; Division of Radiological Sciences, Singapore General Hospital, Outram Road, Singapore, 169608, Singapore
- Mohammad Nazri Md Shah: Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- Khairul Azmi Abd Kadir: Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- David Wei Wen: Department of Vascular and Interventional Radiology, Singapore General Hospital, Outram Road, Singapore, 169608, Singapore
- Sum Leong: Department of Vascular and Interventional Radiology, Singapore General Hospital, Outram Road, Singapore, 169608, Singapore
- Li Kuo Tan: Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
36
Han IH. Commentary on "Development and Validation of an Online Calculator to Predict Proximal Junctional Kyphosis After Adult Spinal Deformity Surgery Using Machine Learning". Neurospine 2023; 20:1281-1283. [PMID: 38171295 PMCID: PMC10762415 DOI: 10.14245/ns.2347302.651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2024] Open
Affiliation(s)
- In Ho Han: Department of Neurosurgery, Pusan National University Hospital, Pusan National University School of Medicine, Busan, Korea
37
Salari E, Elsamaloty H, Ray A, Hadziahmetovic M, Parsai EI. Differentiating Radiation Necrosis and Metastatic Progression in Brain Tumors Using Radiomics and Machine Learning. Am J Clin Oncol 2023; 46:486-495. [PMID: 37580873 PMCID: PMC10589425 DOI: 10.1097/coc.0000000000001036] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
OBJECTIVES Distinguishing between radiation necrosis (RN) and metastatic progression is extremely challenging due to their similarity on conventional imaging. This distinction is crucial from a therapeutic point of view, as it determines the outcome of treatment. This study aims to establish an automated technique to differentiate RN from brain metastasis progression using radiomics with machine learning. METHODS Eighty-six patients with brain metastasis who underwent stereotactic radiosurgery as primary treatment were selected. Discrete wavelet transform, Laplacian-of-Gaussian, gradient, and square filters were applied to magnetic resonance post-contrast T1-weighted images to extract radiomics features. After feature selection, the dataset was randomly split into training/test (80%/20%) sets. Random forest classification, logistic regression, and support vector classification models were trained and subsequently validated on the test set. Classification performance was measured by the area under the curve (AUC) of the receiver operating characteristic curve, accuracy, sensitivity, and specificity. RESULTS The best performance was achieved using random forest classification with a gradient filter (AUC=0.910±0.047, accuracy=0.8±0.071, sensitivity=0.796±0.055, specificity=0.922±0.059). For support vector classification, the best result was obtained using wavelet_HHH, with a high AUC of 0.890±0.89, accuracy of 0.777±0.062, sensitivity of 0.701±0.084, and specificity of 0.85±0.112. Logistic regression using wavelet_HHH gave the poorest result, with AUC=0.882±0.051, accuracy of 0.753±0.08, sensitivity of 0.717±0.208, and specificity of 0.816±0.123. CONCLUSION This type of machine-learning approach can help accurately distinguish RN from recurrence on magnetic resonance imaging without the need for biopsy. This has the potential to improve therapeutic outcomes.
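The AUC values reported in this abstract are the probability that a randomly chosen positive case is ranked above a randomly chosen negative one (the Mann-Whitney interpretation of the ROC curve). A minimal pure-Python sketch of that definition, illustrative only and not the cited study's code, is:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0; an uninformative
# classifier that ties everything gives 0.5.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```

Library implementations (e.g. scikit-learn's `roc_auc_score`) compute the same quantity from sorted ranks rather than this O(n²) pairwise loop.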
Affiliation(s)
- Aniruddha Ray: Department of Physics and Astronomy, Adjunct Faculty; Department of Radiation Oncology, University of Toledo, Toledo, OH
38
Murad N, Pan MC, Hsu YF. Optimizing diffuse optical imaging for breast tissues with a dual-encoder neural network to preserve small structural information and fine features. J Med Imaging (Bellingham) 2023; 10:066003. [PMID: 38074624 PMCID: PMC10704257 DOI: 10.1117/1.jmi.10.6.066003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 11/15/2023] [Accepted: 11/17/2023] [Indexed: 12/08/2024] Open
Abstract
Purpose Various laboratories have recently achieved progress in applying deep learning models to biomedical optical imaging of soft biological tissues. The highly scattering nature of tissue at specific optical wavelengths results in poor spatial resolution. This opens up opportunities for diffuse optical imaging to improve the spatial resolution of the obtained optical-property images, which suffer from artifacts. This study investigates a dual-encoder deep learning model for detecting tumors of different sizes in different phantoms with diffuse optical imaging. Approach Our proposed dual-encoder network extends U-net by adding a parallel branch of signal data to obtain information directly from the base source. This allows the trained network to localize the inclusions without degrading them or merging them with the background. The signals from the forward model and the images from the inverse problem are combined in a single decoder, filling the gap between existing direct-processing and post-processing approaches. Results Absorption and reduced scattering coefficients are well reconstructed in both the simulation and phantom test datasets. The proposed dual-encoder networks characterize optical-property images better than the signal-encoder and image-encoder networks, and their contrast-and-size detail resolution outperforms the other two approaches. Among the performance evaluation measures, the structural similarity and peak signal-to-noise ratio of the images reconstructed by the dual-encoder networks remain the highest. Conclusions In this study, we synthesized the advantages of direct reconstruction from boundary data (the extracted signals) and of iterative methods (the obtained images) into a unified network architecture.
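One of the image-quality measures used in this abstract, peak signal-to-noise ratio (PSNR), has a standard closed-form definition that can be sketched in a few lines. This is the generic textbook formula, not the study's evaluation code; the flat-list image representation and peak value are assumptions for the example.

```python
import math

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    flattened images: 10 * log10(peak^2 / MSE). Higher is better;
    identical images give infinity."""
    if len(reference) != len(reconstruction):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - t) ** 2 for r, t in zip(reference, reconstruction)) / len(reference)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)

# A reconstruction off by the full dynamic range at every pixel
# has MSE = peak^2, hence PSNR = 0 dB.
worst_case = psnr([1.0, 1.0], [0.0, 0.0])
```

Structural similarity (SSIM), the other measure mentioned, compares local luminance, contrast, and structure windows and is more involved; PSNR alone illustrates why pixelwise error maps to a single decibel score.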
Affiliation(s)
- Nazish Murad: National Central University, Department of Mechanical Engineering, Taoyuan City, Taiwan
- Min-Chun Pan: National Central University, Department of Mechanical Engineering, Taoyuan City, Taiwan
- Ya-Fen Hsu: Landseed Hospital International, Department of Surgery, Taoyuan City, Taiwan
39
Garcia-Mendez JP, Lal A, Herasevich S, Tekin A, Pinevich Y, Lipatov K, Wang HY, Qamar S, Ayala IN, Khapov I, Gerberi DJ, Diedrich D, Pickering BW, Herasevich V. Machine Learning for Automated Classification of Abnormal Lung Sounds Obtained from Public Databases: A Systematic Review. Bioengineering (Basel) 2023; 10:1155. [PMID: 37892885 PMCID: PMC10604310 DOI: 10.3390/bioengineering10101155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Revised: 09/15/2023] [Accepted: 09/26/2023] [Indexed: 10/29/2023] Open
Abstract
Pulmonary auscultation is essential for detecting abnormal lung sounds during physical assessments, but its reliability depends on the operator. Machine learning (ML) models offer an alternative by automatically classifying lung sounds. ML models require substantial data, and public databases aim to address this limitation. This systematic review compares the characteristics, diagnostic accuracy, concerns, and data sources of existing models in the literature. Papers published between 1990 and 2022, retrieved from five major databases, were assessed. Quality assessment was accomplished with a modified QUADAS-2 tool. The review encompassed 62 studies utilizing ML models and public-access databases for lung sound classification. Artificial neural networks (ANN) and support vector machines (SVM) were frequently employed as the ML classifiers. The accuracy ranged from 49.43% to 100% for discriminating abnormal sound types and from 69.40% to 99.62% for disease class classification. Seventeen public databases were identified, with the ICBHI 2017 database being the most used (66%). The majority of studies exhibited a high risk of bias and concerns related to patient selection and reference standards. In summary, ML models can effectively classify abnormal lung sounds using publicly available data sources. Nevertheless, inconsistent reporting and methodologies pose limitations to advancing the field, and therefore public databases should adhere to standardized recording and labeling procedures.
Affiliation(s)
- Juan P. Garcia-Mendez: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Amos Lal: Department of Medicine, Division of Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Svetlana Herasevich: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Aysun Tekin: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Yuliya Pinevich: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA; Department of Cardiac Anesthesiology and Intensive Care, Republican Clinical Medical Center, 223052 Minsk, Belarus
- Kirill Lipatov: Division of Pulmonary Medicine, Mayo Clinic Health Systems, Essentia Health, Duluth, MN 55805, USA
- Hsin-Yi Wang: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA; Department of Anesthesiology, Taipei Veterans General Hospital, National Yang Ming Chiao Tung University, Taipei 11217, Taiwan; Department of Biomedical Sciences and Engineering, National Central University, Taoyuan 320317, Taiwan
- Shahraz Qamar: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Ivan Khapov: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Daniel Diedrich: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich: Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
40
Cano C, Mohammadian Rad N, Gholampour A, van Sambeek M, Pluim J, Lopata R, Wu M. Deep learning assisted classification of spectral photoacoustic imaging of carotid plaques. PHOTOACOUSTICS 2023; 33:100544. [PMID: 37671317 PMCID: PMC10475504 DOI: 10.1016/j.pacs.2023.100544] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2023] [Revised: 07/31/2023] [Accepted: 08/11/2023] [Indexed: 09/07/2023]
Abstract
Spectral photoacoustic imaging (sPAI) is an emerging modality that allows real-time, non-invasive, and radiation-free assessment of tissue, benefiting from its optical contrast. sPAI is ideal for morphology assessment in arterial plaques, where plaque composition provides relevant information on plaque progression and its vulnerability. However, since sPAI is affected by spectral coloring, general spectroscopic unmixing techniques cannot provide reliable identification of such complicated sample compositions. In this study, we employ a convolutional neural network (CNN) for the classification of plaque composition using sPAI. For this study, nine carotid endarterectomy plaques were imaged and then annotated and validated using multiple histological stains. Our results show that a CNN can effectively differentiate constituent regions within plaques without requiring fluence or spectral correction, with the potential to eventually support vulnerability assessment in plaques.
Affiliation(s)
- Camilo Cano: Department of Biomedical Engineering, Eindhoven University of Technology, De Rondom 70, Eindhoven, the Netherlands
- Nastaran Mohammadian Rad: Department of Biomedical Engineering, Eindhoven University of Technology, De Rondom 70, Eindhoven, the Netherlands; Department of Precision Medicine, Maastricht University, Minderbroedersberg 4-6, Maastricht, the Netherlands
- Amir Gholampour: Department of Biomedical Engineering, Eindhoven University of Technology, De Rondom 70, Eindhoven, the Netherlands
- Marc van Sambeek: Department of Biomedical Engineering, Eindhoven University of Technology, De Rondom 70, Eindhoven, the Netherlands; Department of Vascular Surgery, Catharina Ziekenhuis Eindhoven, Michelangelolaan 2, State Two, the Netherlands
- Josien Pluim: Department of Biomedical Engineering, Eindhoven University of Technology, De Rondom 70, Eindhoven, the Netherlands
- Richard Lopata: Department of Biomedical Engineering, Eindhoven University of Technology, De Rondom 70, Eindhoven, the Netherlands
- Min Wu: Department of Biomedical Engineering, Eindhoven University of Technology, De Rondom 70, Eindhoven, the Netherlands
41
Mohammadi S, Ghaderi S, Ghaderi K, Mohammadi M, Pourasl MH. Automated segmentation of meningioma from contrast-enhanced T1-weighted MRI images in a case series using a marker-controlled watershed segmentation and fuzzy C-means clustering machine learning algorithm. Int J Surg Case Rep 2023; 111:108818. [PMID: 37716060 PMCID: PMC10514425 DOI: 10.1016/j.ijscr.2023.108818] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Revised: 09/07/2023] [Accepted: 09/09/2023] [Indexed: 09/18/2023] Open
Abstract
INTRODUCTION AND IMPORTANCE Accurate segmentation of meningiomas from contrast-enhanced T1-weighted (CE T1-w) magnetic resonance imaging (MRI) is crucial for diagnosis and treatment planning. Manual segmentation is time-consuming and prone to variability. This study aimed to evaluate an automated segmentation approach for meningiomas using marker-controlled watershed segmentation (MCWS) and fuzzy c-means (FCM) algorithms. CASE PRESENTATION AND METHODS CE T1-w MRI scans of 3 female patients (aged 59, 44, and 67 years) with right frontal meningiomas were analyzed. Images were converted to grayscale and preprocessed with Otsu's thresholding and FCM clustering. MCWS segmentation was then performed. Segmentation accuracy was assessed by comparing the automated segmentations to manual delineations. CLINICAL DISCUSSION The approach successfully segmented the meningiomas in all cases. Mean sensitivity was 0.8822, indicating accurate identification of tumors. The mean Dice similarity coefficient between Otsu's thresholding and FCM1 was 0.6599, suggesting good overlap between the segmentation methods. CONCLUSION The MCWS and FCM approach enables accurate automated segmentation of meningiomas from CE T1-w MRI. With further validation on larger datasets, it could provide an efficient tool to assist in delineating meningioma boundaries for clinical management.
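The Dice similarity coefficient used in this abstract to compare segmentations measures the overlap of two binary masks as twice the intersection over the sum of their sizes. A minimal pure-Python sketch of the metric (illustrative only, not the cited study's code; the flattened 0/1 mask format is an assumption):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    equal-length flattened sequences of 0/1: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of pixels")
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * intersection / total

# Two masks sharing 2 of their 3 foreground pixels each:
# 2*2 / (3+3) = 0.666..., i.e. "good overlap" in the abstract's terms.
score = dice([0, 1, 1, 1, 0], [0, 0, 1, 1, 1])
```

Dice is closely related to intersection-over-union (Jaccard); the two are monotonic transforms of each other, so they rank segmentations identically.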
Affiliation(s)
- Sana Mohammadi: Department of Medical Sciences, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Sadegh Ghaderi: Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Kayvan Ghaderi: Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 66177-15175, Iran
- Mahdi Mohammadi: Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
42
Cevik J, Seth I, Rozen WM. Transforming breast reconstruction: the pioneering role of artificial intelligence in preoperative planning. Gland Surg 2023; 12:1271-1275. [PMID: 37842522 PMCID: PMC10570966 DOI: 10.21037/gs-23-265] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Accepted: 09/02/2023] [Indexed: 10/17/2023]
Abstract
Autologous breast reconstruction surgery is a vital part of the recovery process for patients with breast cancer. While various reconstructive options exist, the deep inferior epigastric artery perforator (DIEP) flap is often favoured for its ability to closely mimic natural breast tissue. However, the complex vascular anatomy associated with the deep inferior epigastric artery (DIEA) presents challenges for surgeons during DIEP flap execution. Preoperative imaging, such as computed tomography angiography (CTA), is commonly used to understand vascular architecture and aid in selecting appropriate perforators. Conventional reporting of CTA scans is a labour-intensive process that can be challenging and requires specific expertise. The integration of artificial intelligence (AI) and machine learning (ML) algorithms in medical imaging has the potential to address these challenges. AI can enhance CTA through improved data acquisition, image post-processing, and potentially interpretation. By automating the perforator selection process, AI applications can significantly reduce the time spent on preoperative imaging analysis and potentially improve accuracy and reliability. While AI shows promise in optimizing efficiency, accuracy, and reliability in breast reconstruction planning, challenges and ethical considerations need to be addressed. This article explores the challenges, opportunities, and future directions of using AI in the preoperative planning of autologous breast reconstruction.
Affiliation(s)
- Jevan Cevik: Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, Victoria, Australia; Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, Victoria, Australia
- Ishith Seth: Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, Victoria, Australia; Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, Victoria, Australia
- Warren M. Rozen: Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, Victoria, Australia; Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, Victoria, Australia
43
Patro KK, Allam JP, Neelapu BC, Tadeusiewicz R, Acharya UR, Hammad M, Yildirim O, Pławiak P. Application of Kronecker convolutions in deep learning technique for automated detection of kidney stones with coronal CT images. Inf Sci (N Y) 2023; 640:119005. [DOI: 10.1016/j.ins.2023.119005] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/23/2024]
44
Azam H, Tariq H, Shehzad D, Akbar S, Shah H, Khan ZA. Fully Automated Skull Stripping from Brain Magnetic Resonance Images Using Mask RCNN-Based Deep Learning Neural Networks. Brain Sci 2023; 13:1255. [PMID: 37759856 PMCID: PMC10526767 DOI: 10.3390/brainsci13091255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Revised: 08/09/2023] [Accepted: 08/21/2023] [Indexed: 09/29/2023] Open
Abstract
This research comprises experiments with a deep learning framework for fully automating skull stripping from brain magnetic resonance (MR) images. Conventional segmentation techniques have progressed to the extent of Convolutional Neural Networks (CNN). We proposed and experimented with a contemporary variant of the deep learning framework based on the mask region convolutional neural network (Mask-RCNN) for all anatomical orientations of brain MR images. We trained the system from scratch to build a model for classification, detection, and segmentation. It was validated on images taken from three different datasets: BrainWeb, NAMIC, and a local hospital. We opted for purposive sampling to select 2000 images of T1 modality from the data volumes, followed by a multi-stage random sampling technique to segregate the dataset into three batches for training (75%), validation (15%), and testing (10%), respectively. We utilized a robust backbone architecture, namely ResNet-101 with a Feature Pyramid Network (FPN), to achieve optimal performance with higher accuracy. We subjected the same data to two traditional methods, namely the Brain Extraction Tool (BET) and Brain Surface Extractor (BSE), to compare their performance. Our proposed method achieved a higher mean average precision (mAP) of 93% and a content validity index (CVI) of 0.95, which were better than those of comparable methods. We contributed by training Mask-RCNN from scratch to generate reusable learning weights, known as transfer learning. We contributed methodological novelty by applying a pragmatic research lens, and used a mixed-method triangulation technique to validate results on all anatomical modalities of brain MR images. Our proposed method improved the accuracy and precision of skull stripping by fully automating it, reducing its processing time, operational cost, and reliance on technicians. This research also provides grounds for extending the work toward explainable artificial intelligence (XAI).
Affiliation(s)
- Humera Azam
- Department of Computer Science, University of Karachi, Karachi 75270, Pakistan
- Humera Tariq
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Danish Shehzad
- Department of Computer Science, The Superior University, Lahore 54590, Pakistan
- Saad Akbar
- College of Computing and Information Sciences, Karachi Institute of Economics and Technology, Karachi 75190, Pakistan
- Habib Shah
- Department of Computer Science, College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
- Zamin Ali Khan
- Department of Computer Science, IQRA University, Karachi 71500, Pakistan
45
Jha AK, Mithun S, Sherkhane UB, Dwivedi P, Puts S, Osong B, Traverso A, Purandare N, Wee L, Rangarajan V, Dekker A. Emerging role of quantitative imaging (radiomics) and artificial intelligence in precision oncology. Explor Target Antitumor Ther 2023; 4:569-582. [PMID: 37720353 PMCID: PMC10501896 DOI: 10.37349/etat.2023.00153] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 04/20/2023] [Indexed: 09/19/2023] Open
Abstract
Cancer is a fatal disease and the second leading cause of death worldwide. Treatment of cancer is a complex process and requires a multi-modality-based approach. Cancer management begins with screening/diagnosis and continues throughout the patient's life, proceeding through staging of the disease, planning and delivery of treatment, treatment monitoring, and ongoing follow-up. Imaging plays an important role in all stages of cancer management. Conventional oncology practice treats all patients with a given disease type as similar, whereas biomarkers subgroup patients within a disease type, which has led to the development of precision oncology. The radiomic process has facilitated the development of diverse imaging biomarkers that find application in precision oncology. The role of imaging biomarkers and artificial intelligence (AI) in oncology has been investigated by many researchers, and the existing literature suggests an increasing role for both. However, the stability of radiomic features has been questioned: the radiomic community has recognized that the instability of radiomic features poses a danger to the global generalization of radiomic-based prediction models, which frequently perform poorly in institutions other than the one where they were developed. To establish radiomic-based imaging biomarkers in oncology, the robustness of radiomic features therefore needs to be established as a priority. To generalize radiomic-based prediction models in oncology, a number of initiatives, including the Quantitative Imaging Network (QIN), the Quantitative Imaging Biomarkers Alliance (QIBA), and the Image Biomarker Standardisation Initiative (IBSI), have been launched to stabilize radiomic features.
Affiliation(s)
- Ashish Kumar Jha
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
- Department of Nuclear Medicine, Tata Memorial Hospital, Mumbai 400012, Maharashtra, India
- Homi Bhabha National Institute, BARC Training School Complex, Anushaktinagar, Mumbai 400094, Maharashtra, India
- Sneha Mithun
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
- Department of Nuclear Medicine, Tata Memorial Hospital, Mumbai 400012, Maharashtra, India
- Homi Bhabha National Institute, BARC Training School Complex, Anushaktinagar, Mumbai 400094, Maharashtra, India
- Umeshkumar B. Sherkhane
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
- Department of Nuclear Medicine, Tata Memorial Hospital, Mumbai 400012, Maharashtra, India
- Pooj Dwivedi
- Homi Bhabha National Institute, BARC Training School Complex, Anushaktinagar, Mumbai 400094, Maharashtra, India
- Department of Nuclear Medicine, Advance Center for Treatment, Research, Education in Cancer, Kharghar, Navi-Mumbai 410210, Maharashtra, India
- Senders Puts
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
- Biche Osong
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
- Alberto Traverso
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
- Nilendu Purandare
- Department of Nuclear Medicine, Tata Memorial Hospital, Mumbai 400012, Maharashtra, India
- Homi Bhabha National Institute, BARC Training School Complex, Anushaktinagar, Mumbai 400094, Maharashtra, India
- Leonard Wee
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
- Venkatesh Rangarajan
- Department of Nuclear Medicine, Tata Memorial Hospital, Mumbai 400012, Maharashtra, India
- Homi Bhabha National Institute, BARC Training School Complex, Anushaktinagar, Mumbai 400094, Maharashtra, India
- Andre Dekker
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, 6200 Maastricht, The Netherlands
46
Cevik J, Seth I, Hunter-Smith DJ, Rozen WM. A History of Innovation: Tracing the Evolution of Imaging Modalities for the Preoperative Planning of Microsurgical Breast Reconstruction. J Clin Med 2023; 12:5246. [PMID: 37629288 PMCID: PMC10455834 DOI: 10.3390/jcm12165246] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 08/09/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023] Open
Abstract
Breast reconstruction is an essential component in the multidisciplinary management of breast cancer patients. Over the years, preoperative planning has played a pivotal role in assisting surgeons in planning operative decisions prior to the day of surgery. The evolution of preoperative planning can be traced back to the introduction of modalities such as ultrasound and colour duplex ultrasonography, enabling surgeons to evaluate the donor site's vasculature and thereby plan operations more accurately. However, the limitations of these techniques paved the way for the implementation of modern three-dimensional imaging technologies. With the advancements in 3D imaging, including computed tomography and magnetic resonance imaging, surgeons gained the ability to obtain detailed anatomical information. Moreover, numerous adjuncts have been developed to aid in the planning process. The integration of 3D-printing technologies has made significant contributions, enabling surgeons to create complex haptic models of the underlying anatomy. Direct infrared thermography provides a non-invasive, visual assessment of abdominal wall vascular physiology. Additionally, augmented reality technologies are poised to reshape surgical planning by providing an immersive and interactive environment for surgeons to visualize and manipulate 3D reconstructions. Still, the future of preoperative planning in breast reconstruction holds immense promise. Most recently, artificial intelligence algorithms, utilising machine learning and deep learning techniques, have the potential to automate and enhance preoperative planning processes. This review provides a comprehensive assessment of the history of innovation in preoperative planning for breast reconstruction, while also outlining key future directions, and the impact of artificial intelligence in this field.
Affiliation(s)
- Jevan Cevik
- Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, VIC 3199, Australia
- Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, VIC 3199, Australia
- Ishith Seth
- Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, VIC 3199, Australia
- Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, VIC 3199, Australia
- David J. Hunter-Smith
- Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, VIC 3199, Australia
- Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, VIC 3199, Australia
- Warren M. Rozen
- Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, VIC 3199, Australia
- Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, VIC 3199, Australia
47
Shin K, Lee JS, Lee JY, Lee H, Kim J, Byeon JS, Jung HY, Kim DH, Kim N. An Image Turing Test on Realistic Gastroscopy Images Generated by Using the Progressive Growing of Generative Adversarial Networks. J Digit Imaging 2023; 36:1760-1769. [PMID: 36914855 PMCID: PMC10406771 DOI: 10.1007/s10278-023-00803-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Revised: 02/21/2023] [Accepted: 02/23/2023] [Indexed: 03/16/2023] Open
Abstract
Generative adversarial networks (GANs) in medicine are valuable techniques for augmenting unbalanced rare data, detecting anomalies, and avoiding patient privacy issues. However, there have been limits to generating high-quality endoscopic images with varied characteristics, such as peristalsis, viewpoints, light sources, and mucous patterns. This study used the progressive growing of GAN (PGGAN) on a normal-distribution dataset to confirm its ability to generate high-quality gastrointestinal images and investigated what barriers PGGAN faces in generating endoscopic images. We trained the PGGAN with 107,060 gastroscopy images from 4165 normal patients to generate highly realistic 512 × 512-pixel images. For the evaluation, visual Turing tests were conducted in which 19 endoscopists judged the authenticity of 100 real and 100 synthetic images. The endoscopists were divided into three groups based on their years of clinical experience for subgroup analysis. The overall accuracy, sensitivity, and specificity of the 19 endoscopists were 61.3%, 70.3%, and 52.4%, respectively. The mean accuracy of the three endoscopist groups was 62.4% [Group I], 59.8% [Group II], and 59.1% [Group III], which was not a significant difference. There were no statistically significant differences across locations in the stomach, although real images containing the anatomical landmark of the pylorus were detected with higher sensitivity. The images generated by PGGAN were highly realistic and difficult to distinguish from real images, regardless of the endoscopists' expertise. However, GANs that better represent rugal folds and mucous-membrane texture still need to be established.
Affiliation(s)
- Keewon Shin
- Biomedical Engineering Research Center, Asan Medical Center, Seoul, Republic of Korea
- Jung Su Lee
- Department of Gastroenterology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Seoul Samsung Internal Medicine Clinic, Seoul, Republic of Korea
- Ji Young Lee
- Department of Health Screening and Promotion Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Hyunsu Lee
- Department of Medical Informatics, Keimyung University School of Medicine, Daegu, Republic of Korea
- Jeongseok Kim
- Department of Internal Medicine, Keimyung University School of Medicine, Daegu, Republic of Korea
- Jeong-Sik Byeon
- Department of Gastroenterology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Hwoon-Yong Jung
- Department of Gastroenterology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Do Hoon Kim
- Department of Gastroenterology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Namkug Kim
- Biomedical Engineering Research Center, Asan Medical Center, Seoul, Republic of Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine & Asan Medical Center, Seoul, Republic of Korea
48
Pu J, Leme AS, de Lima e Silva C, Beeche C, Nyunoya T, Königshoff M, Chandra D. Deep-Masker: A Deep Learning-based Tool to Assess Chord Length from Murine Lung Images. Am J Respir Cell Mol Biol 2023; 69:126-134. [PMID: 37236629 PMCID: PMC10399147 DOI: 10.1165/rcmb.2023-0051ma] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 05/22/2023] [Indexed: 05/28/2023] Open
Abstract
Chord length is an indirect measure of alveolar size and a critical endpoint in animal models of chronic obstructive pulmonary disease (COPD). In assessing chord length, the lumens of nonalveolar structures are eliminated from measurement by various methods, including manual masking. However, manual masking is resource intensive and can introduce variability and bias. We created Deep-Masker (available at http://47.93.0.75:8110/login), a fully automated deep learning-based tool that masks murine lung images and assesses chord length, to facilitate mechanistic and therapeutic discovery in COPD. We trained the deep learning algorithm for Deep-Masker using 1,217 images from 137 mice from 12 strains exposed to room air or cigarette smoke for 6 months, and validated this algorithm against manual masking. Deep-Masker demonstrated high accuracy, with an average difference in chord length relative to manual masking of -0.3 ± 1.4% (rs = 0.99) for room-air-exposed mice and 0.7 ± 1.9% (rs = 0.99) for cigarette-smoke-exposed mice. The difference between Deep-Masker and manually masked images for the change in chord length due to cigarette smoke exposure was 6.0 ± 9.2% (rs = 0.95). These values exceed published estimates of interobserver variability for manual masking (rs = 0.65) and the accuracy of published algorithms by a significant margin. We validated the performance of Deep-Masker using an independent set of images. Deep-Masker can serve as an accurate, precise, fully automated method to standardize chord length measurement in murine models of lung disease.
Affiliation(s)
- Jiantao Pu
- Department of Radiology
- Department of Bioengineering
- Adriana S. Leme
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Camilla de Lima e Silva
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Toru Nyunoya
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Melanie Königshoff
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Divay Chandra
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
49
Lasala A, Fiorentino MC, Micera S, Bandini A, Moccia S. Exploiting class activation mappings as prior to generate fetal brain ultrasound images with GANs. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083494 DOI: 10.1109/embc40787.2023.10340469] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
The identification of fetal-head standard planes (FHSPs) from ultrasound (US) images is of fundamental importance to visualize cerebral structures and diagnose neural anomalies during gestation in a standardized way. To support the activity of healthcare operators, deep-learning algorithms have been proposed to classify these planes. To date, the translation of such algorithms into clinical practice is hampered by several factors, including the lack of large annotated datasets to train robust and generalizable algorithms. This paper proposes an approach to generate synthetic FHSP images with a conditional generative adversarial network (cGAN), using class activation maps (CAMs) obtained from FHSP classification algorithms as the cGAN's conditional prior. Using the largest publicly available FHSP dataset, we generated realistic images of the three common FHSPs: trans-cerebellum, trans-thalamic, and trans-ventricular. Evaluation through t-SNE shows the potential of the proposed approach to attenuate the problem of limited availability of annotated FHSP images.
50
Jafari M, Shoeibi A, Khodatars M, Ghassemi N, Moridian P, Alizadehsani R, Khosravi A, Ling SH, Delfan N, Zhang YD, Wang SH, Gorriz JM, Alinejad-Rokny H, Acharya UR. Automated diagnosis of cardiovascular diseases from cardiac magnetic resonance imaging using deep learning models: A review. Comput Biol Med 2023; 160:106998. [PMID: 37182422 DOI: 10.1016/j.compbiomed.2023.106998] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 03/01/2023] [Accepted: 04/28/2023] [Indexed: 05/16/2023]
Abstract
In recent years, cardiovascular diseases (CVDs) have become one of the leading causes of mortality globally. At early stages, CVDs appear with minor symptoms and progressively worsen; most people experience symptoms such as exhaustion, shortness of breath, ankle swelling, and fluid retention at the onset of CVD. Coronary artery disease (CAD), arrhythmia, cardiomyopathy, congenital heart defect (CHD), mitral regurgitation, and angina are the most common CVDs. Clinical methods such as blood tests, electrocardiography (ECG) signals, and medical imaging are the most effective means of detecting CVDs. Among these diagnostic methods, cardiac magnetic resonance imaging (CMRI) is increasingly used to diagnose and monitor disease, plan treatment, and predict CVDs. Despite the advantages of CMR data, CVD diagnosis remains challenging for physicians, as each scan contains many slices and their contrast can be low. To address these issues, deep learning (DL) techniques have been employed in the diagnosis of CVDs from CMR data, and much research is currently being conducted in this field. This review provides an overview of studies on CVD detection using CMR images and DL techniques. The introduction examines CVD types, diagnostic methods, and the most important medical imaging techniques. The following section presents research on detecting CVDs using CMR images and the most significant DL methods. A further section discusses the challenges of diagnosing CVDs from CMRI data. The discussion then summarizes the results of this review and outlines future work on CVD diagnosis from CMR images with DL techniques. Finally, the most important findings of this study are presented in the conclusion.
Affiliation(s)
- Mahboobeh Jafari
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia
- Afshin Shoeibi
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia; Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Navid Ghassemi
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia
- Parisa Moridian
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Australia
- Abbas Khosravi
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Australia
- Sai Ho Ling
- Faculty of Engineering and IT, University of Technology Sydney (UTS), Australia
- Niloufar Delfan
- Faculty of Computer Engineering, Dept. of Artificial Intelligence Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- Hamid Alinejad-Rokny
- BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia; UNSW Data Science Hub, The University of New South Wales, Sydney, NSW, 2052, Australia; Health Data Analytics Program, Centre for Applied Artificial Intelligence, Macquarie University, Sydney, 2109, Australia
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Dept. of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan