1
Hakim A, Zubak I, Marx C, Rhomberg T, Maragkou T, Slotboom J, Murek M. Feasibility of using Gramian angular field for preprocessing MR spectroscopy data in AI classification tasks: Differentiating glioblastoma from lymphoma. Eur J Radiol 2025; 184:111957. [PMID: 39892374] [DOI: 10.1016/j.ejrad.2025.111957]
Abstract
OBJECTIVES To convert 1D spectra into 2D images using the Gramian angular field, to be used as input to a convolutional neural network for classification tasks such as glioblastoma versus lymphoma. MATERIALS AND METHODS A retrospective study including patients with histologically confirmed glioblastoma and lymphoma between 2009 and 2020 who underwent preoperative MR spectroscopy, using single-voxel spectroscopy acquired with a short echo time (TE 30). We compared: 1) the Fourier-transformed raw spectra, and 2) the fitted spectra generated during post-processing. Both spectra were independently converted into images using the Gramian angular field and then served as inputs to a pretrained neural network. We compared the classification performance using data from the Fourier-transformed raw spectra and the post-processed fitted spectra. RESULTS This feasibility study included 98 patients, of whom 65 were diagnosed with glioblastomas and 33 with lymphomas. For algorithm testing, 20% of the cases (19 in total) were randomly selected. By applying the Gramian angular field technique to the Fourier-transformed spectra, we achieved an accuracy of 73.7% and a sensitivity of 92% in classifying glioblastoma versus lymphoma, slightly higher than with the fitted-spectra pathway. CONCLUSION Spectroscopy data can be effectively transformed into distinct color graphical images using the Gramian angular field technique, enabling their use as input for deep learning algorithms. Accuracy tends to be higher when utilizing data derived from Fourier-transformed spectra compared with fitted spectra. This finding underscores the potential of using MR spectroscopy data for neural network-based classification tasks.
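As a concrete illustration of the transform this abstract describes, the short sketch below converts a 1D signal into a Gramian angular summation field image with NumPy; it is a generic sketch under assumed array sizes, not the authors' pipeline.

```python
# Minimal Gramian angular (summation) field sketch; the spectrum length and values
# are illustrative assumptions, not data from the study above.
import numpy as np

def gramian_angular_field(spectrum: np.ndarray) -> np.ndarray:
    """Map a 1D signal to a 2D Gramian angular summation field (GASF) matrix."""
    x = np.asarray(spectrum, dtype=float)
    # Rescale to [-1, 1] so each value can be interpreted as a cosine.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))      # polar-coordinate angles
    # GASF entry (i, j) = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])

# Example: a 512-point spectrum becomes a 512 x 512 image suitable as CNN input.
gasf = gramian_angular_field(np.random.rand(512))
print(gasf.shape)  # (512, 512)
```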
Affiliation(s)
- Arsany Hakim
- University Institute of Diagnostic and Interventional Neuroradiology, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland.
- Irena Zubak
- Department of Neurosurgery, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Christina Marx
- University Institute of Diagnostic and Interventional Neuroradiology, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Thomas Rhomberg
- Department of Neurosurgery, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Theoni Maragkou
- Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
- Johannes Slotboom
- University Institute of Diagnostic and Interventional Neuroradiology, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Michael Murek
- Department of Neurosurgery, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
2
Sivan Sulaja J, Kannath SK, Kalaparti Sri Venkata Ganesh V, Thomas B. Evaluation of multiple deep neural networks for detection of intracranial dural arteriovenous fistula on susceptibility weighted angiography imaging. Neuroradiol J 2025; 38:72-78. [PMID: 39089849] [PMCID: PMC11571296] [DOI: 10.1177/19714009241269491]
Abstract
BACKGROUND The natural history of intracranial dural arteriovenous fistula (DAVF) is variable, and early diagnosis is crucial in order to positively impact the clinical course of aggressive DAVF. Artificial intelligence (AI)-based techniques can be promising in this regard, and in this study, we used various deep neural network (DNN) architectures to determine whether DAVF could be reliably identified on susceptibility-weighted angiography (SWAN) images. MATERIALS AND METHODS A total of 3965 SWAN image slices from 30 digital subtraction angiography-proven DAVF patients and 4380 SWAN image slices from 40 age-matched patients with normal MRI findings as a control group were included. The images were categorized as either DAVF or normal, and the data were used to train various DNNs such as VGG-16, EfficientNet-B0, and ResNet-50. RESULTS The DNN architectures showed accuracies of 95.96% (VGG-16), 91.75% (EfficientNet-B0), and 86.23% (ResNet-50) on the SWAN image dataset. ROC analysis yielded an area under the curve of 0.796 (p < .001), best for the VGG-16 model. A criterion of seven consecutive positive slices for DAVF diagnosis yielded a sensitivity of 74.68% with a specificity of 69.15%, while a criterion of eight slices improved the sensitivity to 80.38%, with the specificity decreasing to 56.38%. Based on the seven-consecutive-positive-slices criterion, EfficientNet-B0 yielded a sensitivity of 73.21% with a specificity of 45.92%, and ResNet-50 yielded a sensitivity of 72.39% with a specificity of 67.42%. CONCLUSION This study shows that DNNs can extract discriminative features from SWAN images for the classification of DAVF from normal with good accuracy and reasonably good sensitivity and specificity.
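The patient-level decision rule reported above (a run of consecutive DAVF-positive slices) can be written in a few lines; the sketch below is a generic illustration with an assumed per-slice prediction format, not the authors' code.

```python
# Hedged sketch: apply an "N consecutive positive slices" rule to per-slice CNN
# predictions. The threshold and the binary prediction encoding are assumptions.
from typing import Sequence

def patient_positive(slice_predictions: Sequence[int], n_consecutive: int = 7) -> bool:
    """Return True if at least `n_consecutive` adjacent slices are predicted DAVF-positive."""
    run = 0
    for pred in slice_predictions:
        run = run + 1 if pred == 1 else 0
        if run >= n_consecutive:
            return True
    return False

# Example: six positive slices in a row is not enough; seven is.
print(patient_positive([0, 1, 1, 1, 1, 1, 1, 0]))      # False (6 consecutive)
print(patient_positive([0, 1, 1, 1, 1, 1, 1, 1, 0]))   # True (7 consecutive)
```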
Affiliation(s)
- Jithin Sivan Sulaja
- Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Kerala, India
- Santhosh K. Kannath
- Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Kerala, India
- Bejoy Thomas
- Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Kerala, India
3
Kamminga NCW, Kievits JEC, Plaisier PW, Burgers JS, van der Veldt AM, van den Brand JAGJ, Mulder M, Wakkee M, Lugtenberg M, Nijsten T. Do large language model chatbots perform better than established patient information resources in answering patient questions? A comparative study on melanoma. Br J Dermatol 2025; 192:306-315. [PMID: 39365602] [DOI: 10.1093/bjd/ljae377]
Abstract
BACKGROUND Large language models (LLMs) have a potential role in providing adequate patient information. OBJECTIVES To compare the quality of LLM responses with established Dutch patient information resources (PIRs) in answering patient questions regarding melanoma. METHODS Responses from ChatGPT versions 3.5 and 4.0, Gemini, and three leading Dutch melanoma PIRs to 50 melanoma-specific questions were examined at baseline and, for the LLMs, again after 8 months. Outcomes included (medical) accuracy, completeness, personalization, readability and, additionally, reproducibility for the LLMs. Comparative analyses were performed within LLMs and PIRs using Friedman's ANOVA, and between the best-performing LLMs and gold-standard (GS) PIRs using the Wilcoxon signed-rank test. RESULTS Within LLMs, ChatGPT-3.5 demonstrated the highest accuracy (P = 0.009). Gemini performed best in completeness (P < 0.001), personalization (P = 0.007) and readability (P < 0.001). PIRs were consistent in accuracy and completeness, with the general practitioner's website excelling in personalization (P = 0.013) and readability (P < 0.001). The best-performing LLMs outperformed the GS-PIR on completeness and personalization, yet were less accurate and less readable. Over time, response reproducibility decreased for all LLMs, showing variability across outcomes. CONCLUSIONS Although LLMs show potential in providing highly personalized and complete responses to patient questions regarding melanoma, improving and safeguarding accuracy, reproducibility and accessibility is crucial before they can replace or complement conventional PIRs.
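A hedged sketch of the statistical comparison named in the methods (Friedman's ANOVA within groups, Wilcoxon signed-rank test between the best-performing LLM and the gold-standard PIR), using placeholder scores rather than the study data:

```python
# Hedged sketch: per-question scores for three LLMs and a gold-standard PIR are
# random placeholders, not the study's ratings.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
n_questions = 50
chatgpt35 = rng.integers(1, 6, n_questions)   # e.g., 1-5 quality ratings per question
chatgpt4 = rng.integers(1, 6, n_questions)
gemini = rng.integers(1, 6, n_questions)

# Within-group comparison across the paired LLM responses.
stat, p = friedmanchisquare(chatgpt35, chatgpt4, gemini)
print(f"Friedman chi2={stat:.2f}, p={p:.3f}")

# Paired comparison: best-performing LLM vs. gold-standard PIR on the same questions.
gs_pir = rng.integers(1, 6, n_questions)
stat, p = wilcoxon(gemini, gs_pir)
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")
```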
Affiliation(s)
- Nadia C W Kamminga
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- June E C Kievits
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Department of Surgery, Albert Schweitzer Hospital, Dordrecht, the Netherlands
- Peter W Plaisier
- Department of Surgery, Albert Schweitzer Hospital, Dordrecht, the Netherlands
- Jako S Burgers
- Dutch College of General Practitioners, PO Box 3231, Utrecht, the Netherlands
- Care and Public Health Research Institute, Department Family Medicine, Maastricht UMC+, Maastricht, the Netherlands
- Astrid M van der Veldt
- Department of Radiology & Nuclear Medicine, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Department of Medical Oncology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Mark Mulder
- Department of Medical Oncology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Marlies Wakkee
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Marjolein Lugtenberg
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Department Tranzo, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, the Netherlands
- Tamar Nijsten
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
4
Hafeez Y, Memon K, AL-Quraishi MS, Yahya N, Elferik S, Ali SSA. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It. Diagnostics (Basel) 2025; 15:168. [PMID: 39857052] [PMCID: PMC11764244] [DOI: 10.3390/diagnostics15020168]
Abstract
Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer-aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room even for the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for differential diagnoses of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using host databases. We also present medical domain experts' opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies have also been discussed. In addition, the opinions of seven medical experts from around the world have been presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current CAD research was observed to be focused on the enhancement of the performance accuracies of the DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough and human professor-like explanations would be required to build the trust of healthcare professionals. Special attention to these factors along with the legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice.
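As an illustration of the visual explanation methods the review reports as dominant, the sketch below computes a minimal Grad-CAM heatmap; the ResNet-18 backbone and random input are stand-ins and are not tied to any study summarized here.

```python
# Hedged sketch of Grad-CAM: highlight the image regions driving a classifier's
# top prediction. The backbone, layer choice, and input are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed MRI slice

# Forward pass, keeping the last convolutional feature map for the explanation.
feats = model.layer1(model.maxpool(model.relu(model.bn1(model.conv1(x)))))
feats = model.layer4(model.layer3(model.layer2(feats)))
logits = model.fc(torch.flatten(model.avgpool(feats), 1))

# Gradient of the top-class score w.r.t. the feature map -> channel importance weights.
grads = torch.autograd.grad(logits[0, logits.argmax()], feats)[0]
weights = grads.mean(dim=(2, 3), keepdim=True)

cam = F.relu((weights * feats).sum(dim=1, keepdim=True))          # weighted channel sum
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)          # heatmap in [0, 1]
print(cam.shape)                                                  # torch.Size([1, 1, 224, 224])
```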
Affiliation(s)
- Yasir Hafeez
- Faculty of Science and Engineering, University of Nottingham, Jalan Broga, Semenyih 43500, Selangor Darul Ehsan, Malaysia
- Khuhed Memon
- Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
- Maged S. AL-Quraishi
- Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
- Norashikin Yahya
- Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
- Sami Elferik
- Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
- Syed Saad Azhar Ali
- Aerospace Engineering Department and Interdisciplinary Research Center for Smart Mobility and Logistics, and Interdisciplinary Research Center Aviation and Space Exploration, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
5
Siddiqui UA, Nasir R, Bajwa MH, Khan SA, Siddiqui YS, Shahzad Z, Arif A, Iftikhar H, Aftab K. Quality assessment of critical and non-critical domains of systematic reviews on artificial intelligence in gliomas using AMSTAR II: A systematic review. J Clin Neurosci 2025; 131:110926. [PMID: 39612612] [DOI: 10.1016/j.jocn.2024.110926]
Abstract
INTRODUCTION Gliomas are the most common primary malignant intraparenchymal brain tumors with a dismal prognosis. With growing advances in artificial intelligence, machine learning and deep learning models are being utilized for preoperative, intraoperative and postoperative neurological decision-making. We aimed to compile published literature in one format and evaluate the quality of level 1a evidence currently available. METHODOLOGY Using PRISMA guidelines, a comprehensive literature search was conducted within databases including Medline, Scopus, and Cochrane Library, and records with the application of artificial intelligence in glioma management were included. The AMSTAR 2 tool was used to assess the quality of systematic reviews and meta-analyses by two independent researchers. RESULTS From 812 studies, 23 studies were included. AMSTAR II appraised most reviews as either low or critically low in quality. Most reviews failed to deliver in critical domains related to the exclusion of studies, appropriateness of meta-analytical methods, and assessment of publication bias. Similarly, compliance was lowest in non-critical areas related to study design selection and the disclosure of funding sources in individual records. Evidence is moderate to low in quality in reviews on multiple neuro-oncological applications, low quality in glioma diagnosis and individual molecular markers like MGMT promoter methylation status, IDH, and 1p19q identification, and critically low in tumor segmentation, glioma grading, and multiple molecular markers identification. CONCLUSION AMSTAR 2 is a robust tool to identify high-quality systematic reviews. There is a paucity of high-quality systematic reviews on the utility of artificial intelligence in glioma management, with some demonstrating critically low quality. Therefore, caution must be exercised when drawing inferences from these results.
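For readers unfamiliar with how AMSTAR 2 arrives at an overall rating, the sketch below encodes the published rating scheme (Shea et al., 2017), in which critical flaws drive the confidence level; the domain counts in the example are illustrative, not this review's appraisals.

```python
# Hedged sketch of the AMSTAR 2 overall confidence rating rules.
def amstar2_rating(critical_flaws: int, noncritical_weaknesses: int) -> str:
    if critical_flaws > 1:
        return "Critically low"
    if critical_flaws == 1:
        return "Low"
    return "Moderate" if noncritical_weaknesses > 1 else "High"

# Example: failing two critical domains (e.g., appropriateness of meta-analytical
# methods and assessment of publication bias) yields a critically low rating.
print(amstar2_rating(critical_flaws=2, noncritical_weaknesses=0))  # Critically low
```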
Affiliation(s)
- Roua Nasir
- Section of Neurosurgery, Department of Surgery, Aga Khan University, Karachi, Pakistan
- Mohammad Hamza Bajwa
- Section of Neurosurgery, Department of Surgery, Aga Khan University, Karachi, Pakistan.
- Saad Akhtar Khan
- Department of Neurosurgery, Liaquat National Hospital, Karachi, Pakistan.
- Zenab Shahzad
- Department of Neurosurgery, Liaquat National Hospital, Karachi, Pakistan
- Kiran Aftab
- Section of Neurosurgery, Department of Surgery, Aga Khan University, Karachi, Pakistan; University of Cambridge, UK.
6
Zhang H, Zhao F. Deep Learning-Based Carotid Plaque Ultrasound Image Detection and Classification Study. Rev Cardiovasc Med 2024; 25:454. [PMID: 39742249] [PMCID: PMC11683696] [DOI: 10.31083/j.rcm2512454]
Abstract
Background This study aimed to develop and evaluate the detection and classification performance of different deep learning models on carotid plaque ultrasound images to achieve efficient and precise ultrasound screening for carotid atherosclerotic plaques. Methods This study collected 5611 carotid ultrasound images from 3683 patients from four hospitals between September 17, 2020, and December 17, 2022. By cropping redundant information from the images and annotating them using professional physicians, the dataset was divided into a training set (3927 images) and a test set (1684 images). Four deep learning models, based on the You Only Look Once Version 7 (YOLO V7) and Faster Region-Based Convolutional Neural Network (Faster RCNN) architectures, were employed for image detection and classification to distinguish between vulnerable and stable carotid plaques. Model performance was evaluated using accuracy, sensitivity, specificity, F1 score, and area under the curve (AUC), with p < 0.05 indicating a statistically significant difference. Results We constructed and compared deep learning models based on different network architectures. In the test set, the Faster RCNN (ResNet 50) model exhibited the best classification performance (accuracy (ACC) = 0.88, sensitivity (SEN) = 0.94, specificity (SPE) = 0.71, AUC = 0.91), significantly outperforming the other models. The results suggest that deep learning technology has significant potential for application in detecting and classifying carotid plaque ultrasound images. Conclusions The Faster RCNN (ResNet 50) model demonstrated high accuracy and reliability in classifying carotid atherosclerotic plaques, with diagnostic capabilities approaching those of intermediate-level physicians. It has the potential to enhance the diagnostic abilities of primary-level ultrasound physicians and assist in formulating more effective strategies for preventing ischemic stroke.
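The reported metrics (ACC, SEN, SPE, AUC) can be reproduced from per-image labels and scores as in the sketch below; the arrays are illustrative placeholders, not the study's test set.

```python
# Hedged sketch: compute accuracy, sensitivity, specificity, and AUC from labels and scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                 # 1 = vulnerable plaque, 0 = stable
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1]) # model probabilities
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)
print(f"ACC={accuracy:.2f} SEN={sensitivity:.2f} SPE={specificity:.2f} AUC={auc:.2f}")
```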
Affiliation(s)
- Hongzhen Zhang
- Precision Medicine Innovation Institute, Anhui University of Science and Technology, 232001 Huainan, Anhui, China
- Feng Zhao
- General Surgery Department, The First Hospital of Anhui University of Science & Technology (Huai Nan First People’s Hospital), 232002 Huainan, Anhui, China
7
Sohrabi-Ashlaghi A, Azizi N, Abbastabar H, Shakiba M, Zebardast J, Firouznia K. Accuracy of radiomics-based models in distinguishing between ruptured and unruptured intracranial aneurysms: A systematic review and meta-analysis. Eur J Radiol 2024; 181:111739. [PMID: 39293240] [DOI: 10.1016/j.ejrad.2024.111739]
Abstract
INTRODUCTION Intracranial aneurysms (IAs) pose a severe health risk due to the potential for subarachnoid hemorrhage upon rupture. This study aims to conduct a systematic review and meta-analysis on the accuracy of radiomics features derived from computed tomography angiography (CTA) in differentiating ruptured from unruptured IAs. MATERIALS AND METHODS A systematic search was performed across multiple databases for articles published up to January 2024. Observational studies analyzing CTA using radiomics features were included. The area under the curve (AUC) for classifying ruptured vs. unruptured IAs was pooled using a random-effects model. Subgroup analyses were conducted based on the use of radiomics-only features versus radiomics plus additional image-based features, as well as the type of filters used for image processing. RESULTS Six studies with 4,408 patients were included. The overall pooled AUC for radiomics features in differentiating ruptured from unruptured IAs was 0.86 (95% CI: 0.84-0.88). The AUC was 0.85 (95% CI: 0.82-0.88) for studies using only radiomics features and 0.87 (95% CI: 0.83-0.91) for studies incorporating radiomics plus additional image-based features. Subgroup analysis based on filter type showed an AUC of 0.87 (95% CI: 0.83-0.90) for original filters and 0.86 (95% CI: 0.81-0.90) for studies using additional filters. CONCLUSION Radiomics-based models demonstrate very good diagnostic accuracy in classifying ruptured and unruptured IAs, with AUC values exceeding 0.8. This highlights the potential of radiomics as a useful tool in the non-invasive assessment of aneurysm rupture risk, particularly in the management of patients with multiple aneurysms.
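A hedged sketch of the pooling step described above, using a DerSimonian-Laird estimator as one common random-effects choice; the AUC values and standard errors are placeholders, not the six included studies.

```python
# Hedged sketch: random-effects (DerSimonian-Laird) pooling of per-study AUCs.
import numpy as np

auc = np.array([0.84, 0.88, 0.86, 0.90, 0.83, 0.87])    # per-study AUC (illustrative)
se = np.array([0.02, 0.03, 0.025, 0.02, 0.03, 0.025])   # per-study standard error (illustrative)
var = se ** 2

# Fixed-effect weights and Cochran's Q
w = 1.0 / var
mu_fe = np.sum(w * auc) / np.sum(w)
q = np.sum(w * (auc - mu_fe) ** 2)
df = len(auc) - 1

# DerSimonian-Laird estimate of between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and 95% CI
w_re = 1.0 / (var + tau2)
mu_re = np.sum(w_re * auc) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"Pooled AUC = {mu_re:.2f} (95% CI {mu_re - 1.96 * se_re:.2f}-{mu_re + 1.96 * se_re:.2f})")
```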
Affiliation(s)
- Ahmadreza Sohrabi-Ashlaghi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
- Narges Azizi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
- Hedayat Abbastabar
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
- Madjid Shakiba
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
- Jayran Zebardast
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
- Kavous Firouznia
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran.
8
Ong W, Lee A, Tan WC, Fong KTD, Lai DD, Tan YL, Low XZ, Ge S, Makmur A, Ong SJ, Ting YH, Tan JH, Kumar N, Hallinan JTPD. Oncologic Applications of Artificial Intelligence and Deep Learning Methods in CT Spine Imaging-A Systematic Review. Cancers (Basel) 2024; 16:2988. [PMID: 39272846] [PMCID: PMC11394591] [DOI: 10.3390/cancers16172988]
Abstract
In spinal oncology, integrating deep learning with computed tomography (CT) imaging has shown promise in enhancing diagnostic accuracy, treatment planning, and patient outcomes. This systematic review synthesizes evidence on artificial intelligence (AI) applications in CT imaging for spinal tumors. A PRISMA-guided search identified 33 studies: 12 (36.4%) focused on detecting spinal malignancies, 11 (33.3%) on classification, 6 (18.2%) on prognostication, 3 (9.1%) on treatment planning, and 1 (3.0%) on both detection and classification. Of the classification studies, 7 (21.2%) used machine learning to distinguish between benign and malignant lesions, 3 (9.1%) evaluated tumor stage or grade, and 2 (6.1%) employed radiomics for biomarker classification. Prognostic studies included three (9.1%) that predicted complications such as pathological fractures and three (9.1%) that predicted treatment outcomes. AI's potential for improving workflow efficiency, aiding decision-making, and reducing complications is discussed, along with its limitations in generalizability, interpretability, and clinical integration. Future directions for AI in spinal oncology are also explored. In conclusion, while AI technologies in CT imaging are promising, further research is necessary to validate their clinical effectiveness and optimize their integration into routine practice.
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Aric Lee
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Wei Chuan Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Kuan Ting Dominic Fong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Daoyong David Lai
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Yi Liang Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shuliang Ge
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shao Jin Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Yong Han Ting
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Jiong Hao Tan
- National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar
- National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
9
Tang H, Hong M, Yu L, Song Y, Cao M, Xiang L, Zhou Y, Suo S. Deep learning reconstruction for lumbar spine MRI acceleration: a prospective study. Eur Radiol Exp 2024; 8:67. [PMID: 38902467] [PMCID: PMC11189847] [DOI: 10.1186/s41747-024-00470-0]
Abstract
BACKGROUND We compared magnetic resonance imaging (MRI) turbo spin-echo images reconstructed using a deep learning technique (TSE-DL) with standard turbo spin-echo (TSE-SD) images of the lumbar spine regarding image quality and detection performance of common degenerative pathologies. METHODS This prospective, single-center study included 31 patients (15 males and 16 females; aged 51 ± 16 years (mean ± standard deviation)) who underwent lumbar spine exams with both TSE-SD and TSE-DL acquisitions for degenerative spine diseases. Images were analyzed by two radiologists and assessed for qualitative image quality using a 4-point Likert scale, quantitative signal-to-noise ratio (SNR) of anatomic landmarks, and detection of common pathologies. Paired-sample t, Wilcoxon, and McNemar tests, unweighted/linearly weighted Cohen κ statistics, and intraclass correlation coefficients were used. RESULTS Scan time for TSE-DL and TSE-SD protocols was 2:55 and 5:17 min:s, respectively. The overall image quality was either significantly higher for TSE-DL or not significantly different between TSE-SD and TSE-DL. TSE-DL demonstrated higher SNR and subject noise scores than TSE-SD. For pathology detection, the interreader agreement was substantial to almost perfect for TSE-DL, with κ values ranging from 0.61 to 1.00; the interprotocol agreement was almost perfect for both readers, with κ values ranging from 0.84 to 1.00. There was no significant difference in the diagnostic confidence or detection rate of common pathologies between the two sequences (p ≥ 0.081). CONCLUSIONS TSE-DL allowed for a 45% reduction in scan time over TSE-SD in lumbar spine MRI without compromising the overall image quality and showed comparable detection performance of common pathologies in the evaluation of degenerative lumbar spine changes. RELEVANCE STATEMENT Deep learning-reconstructed lumbar spine MRI protocol enabled a 45% reduction in scan time compared with conventional reconstruction, with comparable image quality and detection performance of common degenerative pathologies. KEY POINTS • Lumbar spine MRI with deep learning reconstruction has broad application prospects. • Deep learning reconstruction of lumbar spine MRI saved 45% scan time without compromising overall image quality. • When compared with standard sequences, deep learning reconstruction showed similar detection performance of common degenerative lumbar spine pathologies.
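The agreement statistics named in the methods (unweighted and linearly weighted Cohen κ) can be computed with standard calls, as in this sketch with placeholder reader scores:

```python
# Hedged sketch: inter-reader agreement on 4-point Likert scores; the ratings below
# are illustrative placeholders, not the study's readings.
from sklearn.metrics import cohen_kappa_score

reader1 = [4, 3, 4, 2, 3, 4, 3, 4, 2, 3]
reader2 = [4, 3, 3, 2, 3, 4, 4, 4, 2, 3]

kappa_unweighted = cohen_kappa_score(reader1, reader2)
kappa_linear = cohen_kappa_score(reader1, reader2, weights="linear")
print(f"unweighted kappa={kappa_unweighted:.2f}, linearly weighted kappa={kappa_linear:.2f}")
```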
Affiliation(s)
- Hui Tang
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No. 160, Pujian Road, Pudong New District, Shanghai, 200127, China
- Ming Hong
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No. 160, Pujian Road, Pudong New District, Shanghai, 200127, China
- Lu Yu
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No. 160, Pujian Road, Pudong New District, Shanghai, 200127, China
- Yang Song
- MR Research Collaboration Team, Siemens Healthineers Ltd., Shanghai, China
- Mengqiu Cao
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No. 160, Pujian Road, Pudong New District, Shanghai, 200127, China
- Yan Zhou
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No. 160, Pujian Road, Pudong New District, Shanghai, 200127, China.
- Shiteng Suo
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No. 160, Pujian Road, Pudong New District, Shanghai, 200127, China.
- Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
10
Brain ME, Amukotuwa S, Bammer R. Deep learning denoising reconstruction enables faster T2-weighted FLAIR sequence acquisition with satisfactory image quality. J Med Imaging Radiat Oncol 2024; 68:377-384. [PMID: 38577926] [DOI: 10.1111/1754-9485.13649]
Abstract
INTRODUCTION Deep learning reconstruction (DLR) technologies are the latest methods attempting to solve the enduring problem of reducing MRI acquisition times without compromising image quality. The clinical utility of this reconstruction technique is yet to be fully established. This study aims to assess whether a commercially available DLR technique applied to 2D T2-weighted FLAIR brain images allows a reduction in scan time, without compromising image quality and thus diagnostic accuracy. METHODS 47 participants (24 male, mean age 55.9 ± 18.7 SD years, range 20-89 years) underwent routine, clinically indicated brain MRI studies in March 2022, that included a standard-of-care (SOC) T2-weighted FLAIR sequence, and an accelerated acquisition that was reconstructed using the DLR denoising product. Overall image quality, lesion conspicuity, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and artefacts for each sequence, and preferred sequence on direct comparison, were subjectively assessed by two readers. RESULTS There was a strong preference for SOC FLAIR sequence for overall image quality (P = 0.01) and head-to-head comparison (P < 0.001). No difference was observed for lesion conspicuity (P = 0.49), perceived SNR (P = 1.0), and perceived CNR (P = 0.84). There was no difference in motion (P = 0.57) nor Gibbs ringing (P = 0.86) artefacts. Phase ghosting (P = 0.038) and pseudolesions were significantly more frequent (P < 0.001) on DLR images. CONCLUSION DLR algorithm allowed faster FLAIR acquisition times with comparable image quality and lesion conspicuity. However, an increased incidence and severity of phase ghosting artefact and presence of pseudolesions using this technique may result in a reduction in reading speed, efficiency, and diagnostic confidence.
Affiliation(s)
- Matthew E Brain
- Department of Diagnostic Imaging, Monash Health, Monash Medical Centre, Melbourne, Victoria, Australia
- Shalini Amukotuwa
- Department of Diagnostic Imaging, Monash Health, Monash Medical Centre, Melbourne, Victoria, Australia
- Roland Bammer
- Department of Diagnostic Imaging, Monash Health, Monash Medical Centre, Melbourne, Victoria, Australia
11
Kalanjiyam GP, Chandramohan T, Raman M, Kalyanasundaram H. Artificial intelligence: a new cutting-edge tool in spine surgery. Asian Spine J 2024; 18:458-471. [PMID: 38917854] [PMCID: PMC11222879] [DOI: 10.31616/asj.2023.0382]
Abstract
The purpose of this narrative review was to comprehensively elaborate the various components of artificial intelligence (AI), their applications in spine surgery, practical concerns, and future directions. Over the years, spine surgery has been continuously transformed in various aspects, including diagnostic strategies, surgical approaches, procedures, and instrumentation, to provide better-quality patient care. Surgeons have also augmented their surgical expertise with rapidly growing technological advancements. AI is an advancing field that has the potential to revolutionize many aspects of spine surgery. We performed a comprehensive narrative review of the various aspects of AI and machine learning in spine surgery. To elaborate on the current role of AI in spine surgery, a review of the literature was performed using the PubMed and Google Scholar databases for articles published in English in the last 20 years. The initial search using the keywords "artificial intelligence" AND "spine," "machine learning" AND "spine," and "deep learning" AND "spine" extracted a total of 78, 60, and 37 articles from PubMed and 11,500, 4,610, and 2,270 articles from Google Scholar, respectively. After the initial screening and exclusion of unrelated articles, duplicates, and non-English articles, 405 articles were identified. After the second stage of screening, 93 articles were included in the review. Studies have shown that AI can be used to analyze patient data and provide personalized treatment recommendations in spine care. It also provides valuable insights for planning surgeries and assisting with precise surgical maneuvers and decision-making during the procedures. As more data become available and with further advancements, AI is likely to improve patient outcomes.
Affiliation(s)
- Guna Pratheep Kalanjiyam
- Spine Surgery Unit, Department of Orthopaedics, Meenakshi Mission Hospital and Research Centre, Madurai, India
- Thiyagarajan Chandramohan
- Department of Orthopaedics, Government Stanley Medical College, Chennai, India
- Department of Emergency Medicine, Government Stanley Medical College, Chennai, India
- Muthu Raman
- Department of Orthopaedics, Tenkasi Government Hospital, Tenkasi, India
12
Awan KM, Goncalves Filho ALM, Tabari A, Applewhite BP, Lang M, Lo WC, Sellers R, Kollasch P, Clifford B, Nickel D, Husseni J, Rapalino O, Schaefer P, Cauley S, Huang SY, Conklin J. Diagnostic evaluation of deep learning accelerated lumbar spine MRI. Neuroradiol J 2024; 37:323-331. [PMID: 38195418] [PMCID: PMC11138337] [DOI: 10.1177/19714009231224428]
Abstract
BACKGROUND AND PURPOSE Deep learning (DL) accelerated MR techniques have emerged as a promising approach to accelerate routine MR exams. While prior studies explored DL acceleration for specific lumbar MRI sequences, a gap remains in comprehending the impact of a fully DL-based MRI protocol on scan time and diagnostic quality for routine lumbar spine MRI. To address this, we assessed the image quality and diagnostic performance of a DL-accelerated lumbar spine MRI protocol in comparison to a conventional protocol. METHODS We prospectively evaluated 36 consecutive outpatients undergoing non-contrast-enhanced lumbar spine MRIs. Both protocols included sagittal T1, T2, STIR, and axial T2-weighted images. Two blinded neuroradiologists independently reviewed images for foraminal stenosis, spinal canal stenosis, nerve root compression, and facet arthropathy. Grading comparison employed the Wilcoxon signed-rank test. For the head-to-head comparison, a 5-point Likert scale was used to assess image quality, considering artifacts, signal-to-noise ratio (SNR), anatomical structure visualization, and overall diagnostic quality. We applied a 15% noninferiority margin to determine whether the DL-accelerated protocol was noninferior. RESULTS No significant differences existed between protocols when evaluating foraminal and spinal canal stenosis, nerve compression, or facet arthropathy (all p > .05). The DL-spine protocol was noninferior for overall diagnostic quality and visualization of the cord, CSF, intervertebral disc, and nerve roots. However, it exhibited reduced SNR and increased artifact perception. Interobserver reproducibility ranged from moderate to substantial (κ = 0.50-0.76). CONCLUSION Our study indicates that DL reconstruction in spine imaging effectively reduces acquisition times while maintaining comparable diagnostic quality to conventional MRI.
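One common way to operationalize a 15% noninferiority margin is sketched below, comparing the proportion of exams rated diagnostically adequate under each protocol; this is a generic, unpaired approximation for illustration and is not the authors' exact statistical procedure.

```python
# Hedged sketch: noninferiority check on adequacy proportions with a 15% margin.
# Counts are illustrative; the standard error uses an unpaired Wald approximation.
import numpy as np

def noninferior(adequate_dl: int, adequate_sd: int, n: int, margin: float = 0.15) -> bool:
    p_dl, p_sd = adequate_dl / n, adequate_sd / n
    diff = p_dl - p_sd
    se = np.sqrt((p_dl * (1 - p_dl) + p_sd * (1 - p_sd)) / n)
    lower = diff - 1.645 * se          # one-sided 95% lower confidence bound
    return lower > -margin             # noninferior if the bound stays above -15%

print(noninferior(adequate_dl=34, adequate_sd=35, n=36))  # True in this toy example
```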
Affiliation(s)
- Komal M Awan
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Azadeh Tabari
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Brooks P Applewhite
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Min Lang
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Jad Husseni
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Otto Rapalino
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Pamela Schaefer
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Susie Y Huang
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, USA
- John Conklin
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Harvard Medical School, USA
13
Mokhtarpour K, Akbarzadehmoallemkolaei M, Rezaei N. A viral attack on brain tumors: the potential of oncolytic virus therapy. J Neurovirol 2024; 30:229-250. [PMID: 38806994] [DOI: 10.1007/s13365-024-01209-8]
Abstract
Managing malignant brain tumors remains a significant therapeutic hurdle that necessitates further research to comprehend their treatment potential fully. Oncolytic viruses (OVs) offer many opportunities for predicting and combating tumors through several mechanisms, with both preclinical and clinical studies demonstrating potential. OV therapy has emerged as a potent and effective method with a dual mechanism. Developing innovative and effective strategies for virus transduction, coupled with immune checkpoint inhibitors or chemotherapy drugs, strengthens this new technique. Furthermore, the discovery and creation of new OVs that can seamlessly integrate gene therapy strategies, such as cytotoxic, anti-angiogenic, and immunostimulatory, are promising advancements. This review presents an overview of the latest advancements in OVs transduction for brain cancer, focusing on the safety and effectiveness of G207, G47Δ, M032, rQNestin34.5v.2, C134, DNX-2401, Ad-TD-nsIL12, NSC-CRAd-S-p7, TG6002, and PVSRIPO. These are evaluated in both preclinical and clinical models of various brain tumors.
Affiliation(s)
- Kasra Mokhtarpour
- Animal Model Integrated Network (AMIN), Universal Scientific Education and Research Network (USERN), Tehran, 1419733151, Iran
- Milad Akbarzadehmoallemkolaei
- Animal Model Integrated Network (AMIN), Universal Scientific Education and Research Network (USERN), Tehran, 1419733151, Iran
- Research Center for Immunodeficiencies, Children's Medical Center, Tehran University of Medical Sciences, Dr. Gharib St, Keshavarz Blvd, Tehran, 1419733151, Iran
- Nima Rezaei
- Animal Model Integrated Network (AMIN), Universal Scientific Education and Research Network (USERN), Tehran, 1419733151, Iran.
- Research Center for Immunodeficiencies, Children's Medical Center, Tehran University of Medical Sciences, Dr. Gharib St, Keshavarz Blvd, Tehran, 1419733151, Iran.
- Department of Immunology, School of Medicine, Tehran University of Medical Sciences, Tehran, 1417653761, Iran.
14
Nowinski WL. Taxonomy of Acute Stroke: Imaging, Processing, and Treatment. Diagnostics (Basel) 2024; 14:1057. [PMID: 38786355] [PMCID: PMC11119045] [DOI: 10.3390/diagnostics14101057]
Abstract
Stroke management employs a variety of diagnostic imaging modalities, image processing and analysis methods, and treatment procedures. This work categorizes methods for stroke imaging, image processing and analysis, and treatment, and provides their taxonomies illustrated by a state-of-the-art review. Imaging plays a critical role in stroke management, and the most frequently employed modalities are computed tomography (CT) and magnetic resonance (MR). CT includes unenhanced non-contrast CT as the first-line diagnosis, CT angiography, and CT perfusion. MR is the most complete method to examine stroke patients. MR angiography is useful to evaluate the severity of artery stenosis, vascular occlusion, and collateral flow. Diffusion-weighted imaging is the gold standard for evaluating ischemia. MR perfusion-weighted imaging assesses the penumbra. The stroke image processing methods are divided into non-atlas/template-based and atlas/template-based. The non-atlas/template-based methods are subdivided into intensity and contrast transformations, local segmentation-related, anatomy-guided, global density-guided, and artificial intelligence/deep learning-based. The atlas/template-based methods are subdivided into intensity templates and atlases with three atlas types: anatomy atlases, vascular atlases, and lesion-derived atlases. The treatment procedures for arterial and venous strokes include intravenous and intraarterial thrombolysis and mechanical thrombectomy. This work captures the state-of-the-art in stroke management summarized in the form of comprehensive and straightforward taxonomy diagrams. All three introduced taxonomies in diagnostic imaging, image processing and analysis, and treatment are widely illustrated and compared against other state-of-the-art classifications.
Affiliation(s)
- Wieslaw L Nowinski
- Sano Centre for Computational Personalised Medicine, Czarnowiejska 36, 30-054 Krakow, Poland
15
Ali SSA. Brain MRI sequence and view plane identification using deep learning. Front Neuroinform 2024; 18:1373502. [PMID: 38716062] [PMCID: PMC11074364] [DOI: 10.3389/fninf.2024.1373502]
Abstract
Brain magnetic resonance imaging (MRI) scans are available in a wide variety of sequences, view planes, and magnet strengths. A necessary preprocessing step for any automated diagnosis is to identify the MRI sequence, view plane, and magnet strength of the acquired image. Automatic identification of the MRI sequence can be useful in labeling massive online datasets used by data scientists in the design and development of computer-aided diagnosis (CAD) tools. This paper presents a deep learning (DL) approach for brain MRI sequence and view plane identification using scans of different data types as input. A 12-class classification system is presented for commonly used MRI scans, including T1-weighted, T2-weighted, proton density (PD), and fluid-attenuated inversion recovery (FLAIR) sequences in axial, coronal, and sagittal view planes. Multiple online publicly available datasets have been used to train the system, with multiple infrastructures. MobileNet-v2 offers an adequate performance accuracy of 99.76% with unprocessed MRI scans and a comparable accuracy with skull-stripped scans, and has been deployed in a tool for public use. The tool has been tested on unseen data from online and hospital sources with a satisfactory performance accuracy of 99.84% and 86.49%, respectively.
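A hedged sketch of the kind of transfer-learning setup such a 12-class system implies: an ImageNet-pretrained MobileNet-v2 with its classification head replaced. Training details and the data pipeline are omitted and assumed; this is not the author's implementation.

```python
# Hedged sketch: adapt MobileNet-v2 to 12 classes (4 sequences x 3 view planes).
import torch
import torch.nn as nn
from torchvision import models

# IMAGENET1K_V1 downloads pretrained weights; pass weights=None to skip the download.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, 12)   # replace the 1000-class head

x = torch.randn(8, 3, 224, 224)   # a batch of MRI slices replicated to 3 channels
print(model(x).shape)             # torch.Size([8, 12])
```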
Affiliation(s)
- Syed Saad Azhar Ali
- Aerospace Engineering Department and Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
16
Peng Y, Liu J, Yao R, Wu J, Li J, Dai L, Gu S, Yao Y, Li Y, Chen S, Wang J. Deep learning-assisted diagnosis of large vessel occlusion in acute ischemic stroke based on four-dimensional computed tomography angiography. Front Neurosci 2024; 18:1329718. [PMID: 38660224] [PMCID: PMC11039833] [DOI: 10.3389/fnins.2024.1329718]
Abstract
Purpose To develop deep learning models based on four-dimensional computed tomography angiography (4D-CTA) images for automatic detection of large vessel occlusion (LVO) in the anterior circulation that causes acute ischemic stroke. Methods This retrospective study included 104 LVO patients and 105 non-LVO patients for deep learning model development. Another 30 LVO patients and 31 non-LVO patients formed the time-independent validation set. Four phases of 4D-CTA (arterial phase P1, arterial-venous phase P2, venous phase P3, and late venous phase P4) were arranged and combined, and two input methods were used: combined input and superimposed input. In total, 26 models were constructed using a modified HRNet network. Assessment metrics included the area under the curve (AUC), accuracy, sensitivity, specificity, and F1 score. Kappa analysis was performed to assess inter-rater agreement between the best model and radiologists of different seniority. Results The P1 + P2 model (combined input) had the best diagnostic performance. In the internal validation set, the AUC was 0.975 (95%CI: 0.878-0.999), accuracy was 0.911, sensitivity was 0.889, specificity was 0.944, and the F1 score was 0.909. In the time-independent validation set, the model demonstrated consistently high performance with an AUC of 0.942 (95%CI: 0.851-0.986), accuracy of 0.902, sensitivity of 0.867, specificity of 0.935, and an F1 score of 0.901. The best model showed strong consistency with the diagnostic efficacy of three radiologists of different seniority (κ = 0.84, 0.80, and 0.70, respectively). Conclusion The deep learning model, using the combined arterial and arterial-venous phases, was highly effective in detecting LVO, alerting radiologists to speed up the diagnosis.
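The "combined input" idea described above amounts to stacking CTA phases along the channel axis before classification. The sketch below illustrates this with a small stand-in classifier (the study used a modified HRNet); the array shapes and the network are assumptions.

```python
# Hedged sketch: combine two CTA phases (P1, P2) as input channels for a 2-class CNN.
import torch
import torch.nn as nn

p1 = torch.randn(4, 1, 256, 256)   # arterial-phase slices (illustrative shape)
p2 = torch.randn(4, 1, 256, 256)   # arterial-venous-phase slices
x = torch.cat([p1, p2], dim=1)     # combined input: shape (4, 2, 256, 256)

# Stand-in classifier accepting a 2-channel input; the study used a modified HRNet.
head = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
print(head(x).shape)               # torch.Size([4, 2]) -> LVO vs. non-LVO logits
```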
Affiliation(s)
- Yuling Peng
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jiayang Liu
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Rui Yao
- College of Computer and Information Science, Southwest University, Chongqing, China
- Jiajing Wu
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jing Li
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Linquan Dai
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Sirun Gu
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yunzhuo Yao
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yongmei Li
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Shanxiong Chen
- College of Computer and Information Science, Southwest University, Chongqing, China
- Jingjie Wang
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
17
Ito K, Hirahara N, Muraoka H, Sawada E, Tokunaga S, Komatsu T, Kaneda T. Graphical user interface-based convolutional neural network models for detecting nasopalatine duct cysts using panoramic radiography. Sci Rep 2024; 14:7699. [PMID: 38565866] [PMCID: PMC10987649] [DOI: 10.1038/s41598-024-57632-8]
Abstract
Nasopalatine duct cysts are difficult to detect on panoramic radiographs due to obstructive shadows and are often overlooked. Therefore, sensitive detection using panoramic radiography is clinically important. This study aimed to create a trained model to detect nasopalatine duct cysts from panoramic radiographs in a graphical user interface-based environment. This study was conducted on panoramic radiographs and CT images of 115 patients with nasopalatine duct cysts. As controls, 230 age- and sex-matched patients without cysts were selected from the same database. The 345 pre-processed panoramic radiographs were divided into 216 training data sets, 54 validation data sets, and 75 test data sets. Deep learning was performed for 400 epochs using pretrained-LeNet and pretrained-VGG16 as the convolutional neural networks to classify the cysts. The deep learning system's accuracy, sensitivity, and specificity using LeNet and VGG16 were calculated. LeNet and VGG16 showed an accuracy rate of 85.3% and 88.0%, respectively. A simple deep learning method using a graphical user interface-based Windows machine was able to create a trained model to detect nasopalatine duct cysts from panoramic radiographs, and may be used to prevent such cysts being overlooked during imaging.
Affiliation(s)
- Kotaro Ito
- Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan.
- Naohisa Hirahara
- Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Hirotaka Muraoka
- Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Eri Sawada
- Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Satoshi Tokunaga
- Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Tomohiro Komatsu
- Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Takashi Kaneda
- Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
18
Liang Q, Jing H, Shao Y, Wang Y, Zhang H. Artificial Intelligence Imaging for Predicting High-risk Molecular Markers of Gliomas. Clin Neuroradiol 2024; 34:33-43. [PMID: 38277059] [DOI: 10.1007/s00062-023-01375-y]
Abstract
Gliomas, the most prevalent primary malignant tumors of the central nervous system, present significant challenges in diagnosis and prognosis. The fifth edition of the World Health Organization Classification of Tumors of the Central Nervous System (WHO CNS5) published in 2021, has emphasized the role of high-risk molecular markers in gliomas. These markers are crucial for enhancing glioma grading and influencing survival and prognosis. Noninvasive prediction of these high-risk molecular markers is vital. Genetic testing after biopsy, the current standard for determining molecular type, is invasive and time-consuming. Magnetic resonance imaging (MRI) offers a non-invasive alternative, providing structural and functional insights into gliomas. Advanced MRI methods can potentially reflect the pathological characteristics associated with glioma molecular markers; however, they struggle to fully represent gliomas' high heterogeneity. Artificial intelligence (AI) imaging, capable of processing vast medical image datasets, can extract critical molecular information. AI imaging thus emerges as a noninvasive and efficient method for identifying high-risk molecular markers in gliomas, a recent focus of research. This review presents a comprehensive analysis of AI imaging's role in predicting glioma high-risk molecular markers, highlighting challenges and future directions.
Collapse
Affiliation(s)
- Qian Liang
- Department of Radiology, First Hospital of Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China
- College of Medical Imaging, Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China
| | - Hui Jing
- Department of MRI, The Sixth Hospital, Shanxi Medical University, 030008, Taiyuan, Shanxi Province, China
| | - Yingbo Shao
- Department of Radiology, First Hospital of Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China
- College of Medical Imaging, Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China
| | - Yinhua Wang
- Department of Radiology, First Hospital of Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China
- College of Medical Imaging, Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China
| | - Hui Zhang
- Department of Radiology, First Hospital of Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China.
- College of Medical Imaging, Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China.
- Shanxi Key Laboratory of Intelligent Imaging and Nanomedicine, First Hospital of Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China.
- Intelligent Imaging Big Data and Functional Nano-imaging Engineering Research Center of Shanxi Province, First Hospital of Shanxi Medical University, 030001, Taiyuan, Shanxi Province, China.
| |
Collapse
|
19
|
Bharadwaj UU, Chin CT, Majumdar S. Practical Applications of Artificial Intelligence in Spine Imaging: A Review. Radiol Clin North Am 2024; 62:355-370. [PMID: 38272627 DOI: 10.1016/j.rcl.2023.10.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2024]
Abstract
Artificial intelligence (AI), a transformative technology with unprecedented potential in medical imaging, can be applied to various spinal pathologies. AI-based approaches may improve imaging efficiency, diagnostic accuracy, and interpretation, which is essential for positive patient outcomes. This review explores AI algorithms, techniques, and applications in spine imaging, highlighting diagnostic impact and challenges with future directions for integrating AI into spine imaging workflow.
Collapse
Affiliation(s)
- Upasana Upadhyay Bharadwaj
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
| | - Cynthia T Chin
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Avenue, Box 0628, San Francisco, CA 94143, USA.
| | - Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
| |
Collapse
|
20
|
Russe MF, Rebmann P, Tran PH, Kellner E, Reisert M, Bamberg F, Kotter E, Kim S. AI-based X-ray fracture analysis of the distal radius: accuracy between representative classification, detection and segmentation deep learning models for clinical practice. BMJ Open 2024; 14:e076954. [PMID: 38262641 PMCID: PMC10823998 DOI: 10.1136/bmjopen-2023-076954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 12/21/2023] [Indexed: 01/25/2024] Open
Abstract
OBJECTIVES To aid in selecting the optimal artificial intelligence (AI) solution for clinical application, we directly compared the performance of selected representative custom-trained or commercial classification, detection and segmentation models for fracture detection on musculoskeletal radiographs of the distal radius by aligning their outputs. DESIGN AND SETTING This single-centre retrospective study was conducted on a random subset of emergency department radiographs of the distal radius from 2008 to 2018 in Germany. MATERIALS AND METHODS An image set was created to be compatible with training and testing classification and segmentation models by annotating examinations for fractures and overlaying fracture masks, if applicable. Representative classification and segmentation models were trained on 80% of the data. After output binarisation, their derived fracture detection performances, as well as that of a standard commercially available solution, were compared on the remaining X-rays (20%) using mainly accuracy and area under the receiver operating characteristic curve (AUROC). RESULTS A total of 2856 examinations with 712 (24.9%) fractures were included in the analysis. Accuracies reached up to 0.97 for the classification model, 0.94 for the segmentation model and 0.95 for BoneView. Cohen's kappa was at least 0.80 in pairwise comparisons, while Fleiss' kappa was 0.83 for all models. Fracture predictions were visualised with all three methods at different levels of detail, ranging from a downsampled image region for classification, to a bounding box for detection, to single-pixel-level delineation for segmentation. CONCLUSIONS All three investigated approaches reached high performance for detection of distal radius fractures with simple preprocessing and postprocessing protocols on the custom-trained models. Despite their underlying structural differences, within the scope of this study the choice of fracture analysis AI tool reduces to the desired degree of automation: automated classification, AI-assisted manual fracture reading or minimised false negatives.
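The agreement statistics reported above (Cohen's kappa for pairwise comparisons, Fleiss' kappa across all models) can be computed from binarised predictions with standard Python libraries. The sketch below uses made-up prediction vectors purely to show the calculation; it does not reproduce the study's data or results.

```python
# Agreement between binarised fracture predictions of three models
# (illustrative data; the study's predictions are not reproduced here).
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import fleiss_kappa, aggregate_raters

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # ground-truth fracture labels
classif = np.array([1, 0, 1, 1, 0, 0, 1, 1])        # classification model, binarised
segment = np.array([1, 0, 1, 0, 0, 0, 1, 0])        # segmentation model, binarised
commercial = np.array([1, 0, 1, 1, 0, 1, 1, 0])     # commercial solution, binarised

print("accuracy (classification):", (classif == y_true).mean())
print("pairwise Cohen's kappa:", cohen_kappa_score(classif, segment))

# Fleiss' kappa across all three "raters" (models)
ratings = np.stack([classif, segment, commercial], axis=1)   # shape: (n_cases, n_raters)
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(table))
```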
Collapse
Affiliation(s)
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| | - Philipp Rebmann
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| | - Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| | - Elias Kellner
- Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| | - Marco Reisert
- Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| | - Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| | - Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| | - Suam Kim
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
| |
Collapse
|
21
|
Guimarães P, Serranho P, Duarte JV, Crisóstomo J, Moreno C, Gomes L, Bernardes R, Castelo-Branco M. The hemodynamic response function as a type 2 diabetes biomarker: a data-driven approach. Front Neuroinform 2024; 17:1321178. [PMID: 38250018 PMCID: PMC10796780 DOI: 10.3389/fninf.2023.1321178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2023] [Accepted: 12/14/2023] [Indexed: 01/23/2024] Open
Abstract
Introduction There is a need to better understand the neurophysiological changes associated with early brain dysfunction in Type 2 diabetes mellitus (T2DM), before vascular or structural lesions appear. Our aim was to use a novel unbiased data-driven approach to detect and characterize hemodynamic response function (HRF) alterations in T2DM patients, focusing on their potential as biomarkers. Methods We meshed task-based event-related (visual speed discrimination) functional magnetic resonance imaging with deep learning (DL) to show, from an unbiased perspective, that the blood-oxygen-level dependent response of T2DM patients is altered. Relevance analysis determined which brain regions were most important for discrimination. We combined explainability with a deconvolution generalized linear model to provide a more accurate picture of the nature of the neural changes. Results The proposed approach to discriminate T2DM patients achieved up to 95% accuracy. Higher performance was achieved at higher stimulus (speed) contrast, showing a direct relationship with stimulus properties, and in the hemispherically dominant left visual hemifield, demonstrating biological interpretability. Differences are explained by physiological asymmetries in cortical spatial processing (right hemisphere dominance) and larger neural signal-to-noise ratios related to stimulus contrast. Relevance analysis revealed the regions most important for discrimination, such as the extrastriate visual cortex, parietal cortex, and insula. These are disease/task related, providing additional evidence for pathophysiological significance. Our data-driven design allowed us to compute the unbiased HRF without assumptions. Conclusion We can accurately differentiate T2DM patients using a data-driven classification of the HRF. HRF differences hold promise as biomarkers and could contribute to a deeper understanding of the neurophysiological changes associated with T2DM.
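For readers unfamiliar with the HRF, the conventional reference shape against which data-driven estimates are usually compared is the double-gamma function. The snippet below generates this canonical HRF with commonly used default parameters; it is background illustration only, since the study's whole point is to estimate the HRF without assuming this shape.

```python
# Canonical double-gamma HRF (commonly used default parameters), shown only as
# the conventional reference against which data-driven HRF estimates are compared.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=1.0, duration=32.0, peak=6.0, undershoot=16.0, ratio=6.0):
    """Double-gamma HRF sampled at the repetition time `tr` (seconds)."""
    t = np.arange(0, duration, tr)
    peak_pdf = gamma.pdf(t, peak)            # positive response peaking around 6 s
    under_pdf = gamma.pdf(t, undershoot)     # post-stimulus undershoot around 16 s
    hrf = peak_pdf - under_pdf / ratio
    return hrf / hrf.max()                   # normalise peak amplitude to 1

hrf = canonical_hrf(tr=2.0)
print(np.round(hrf, 3))
```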
Collapse
Affiliation(s)
- Pedro Guimarães
- University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal
| | - Pedro Serranho
- University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal
- Department of Sciences and Technology, Universidade Aberta, Lisbon, Portugal
| | - João V. Duarte
- University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal
- University of Coimbra, Faculty of Medicine (FMUC), Coimbra, Portugal
| | - Joana Crisóstomo
- University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal
| | - Carolina Moreno
- Department of Endocrinology, University Hospital of Coimbra (CHUC), Coimbra, Portugal
| | - Leonor Gomes
- Department of Endocrinology, University Hospital of Coimbra (CHUC), Coimbra, Portugal
| | - Rui Bernardes
- University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal
- University of Coimbra, Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), Coimbra, Portugal
| | - Miguel Castelo-Branco
- University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal
- University of Coimbra, Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), Coimbra, Portugal
| |
Collapse
|
22
|
Pacchiano F, Tortora M, Criscuolo S, Jaber K, Acierno P, De Simone M, Tortora F, Briganti F, Caranci F. Artificial intelligence applied in acute ischemic stroke: from child to elderly. LA RADIOLOGIA MEDICA 2024; 129:83-92. [PMID: 37878222 PMCID: PMC10808481 DOI: 10.1007/s11547-023-01735-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 09/28/2023] [Indexed: 10/26/2023]
Abstract
This review will summarize artificial intelligence developments in acute ischemic stroke in recent years and forecasts for the future. Stroke is a major healthcare concern due to its effects on the patient's quality of life and its dependence on the timing of identification as well as treatment. In recent years, attention has increasingly turned to the use of artificial intelligence (AI) systems to help classify these patients, estimate their prognosis, and channel them toward the right therapeutic procedure. Machine learning (ML) and in particular deep learning (DL) systems using convolutional neural networks (CNN) are becoming increasingly popular. Various studies over the years have evaluated the use of these methods of analysis and prediction in the assessment of stroke patients, and at the same time, several applications and software packages have been developed to support neuroradiologists and the stroke team in improving patient outcomes.
Collapse
Affiliation(s)
- Francesco Pacchiano
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Caserta, Italy
| | - Mario Tortora
- Department of Advanced Biomedical Sciences, University "Federico II", Via Pansini, 5, 80131, Naples, Italy.
| | - Sabrina Criscuolo
- Pediatric University Department, Bambino Gesù Children Hospital, Rome, Italy
| | - Katya Jaber
- Department of Electrical Engineering and Computer Science (Elektrotechnik und Informatik), Hochschule Bremen, Bremen, Germany
| | | | - Marta De Simone
- UOC Neuroradiology, AORN San Giuseppe Moscati, Avellino, Italy
| | - Fabio Tortora
- Department of Advanced Biomedical Sciences, University "Federico II", Via Pansini, 5, 80131, Naples, Italy
| | - Francesco Briganti
- Department of Advanced Biomedical Sciences, University "Federico II", Via Pansini, 5, 80131, Naples, Italy
| | - Ferdinando Caranci
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Caserta, Italy
| |
Collapse
|
23
|
Chen Q, Fu C, Qiu X, He J, Zhao T, Zhang Q, Hu X, Hu H. Machine-learning-based performance comparison of two-dimensional (2D) and three-dimensional (3D) CT radiomics features for intracerebral haemorrhage expansion. Clin Radiol 2024; 79:e26-e33. [PMID: 37926647 DOI: 10.1016/j.crad.2023.10.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 09/07/2023] [Accepted: 10/02/2023] [Indexed: 11/07/2023]
Abstract
AIM To investigate the value of non-contrast CT (NCCT)-based two-dimensional (2D) radiomics features in predicting haematoma expansion (HE) after spontaneous intracerebral haemorrhage (ICH) and to compare their predictive ability with the three-dimensional (3D) signature. MATERIALS AND METHODS Three hundred and seven ICH patients who underwent baseline NCCT within 6 h of ictus at two stroke centres were analysed retrospectively. 2D and 3D radiomics features were extracted in one-to-one correspondence. The 2D and 3D models were generated by four different machine-learning algorithms (regularised L1 logistic regression, decision tree, support vector machine and AdaBoost), and the receiver operating characteristic (ROC) curve was used to compare their predictive performance. A robustness analysis was performed according to baseline haematoma volume. RESULTS Each feature type of the 2D and 3D modalities used for subsequent analyses had excellent consistency (mean ICC >0.9). Among the different machine-learning algorithms, pairwise comparison showed no significant difference in either the training (mean area under the ROC curve [AUC] 0.858 versus 0.802, all p>0.05) or validation datasets (mean AUC 0.725 versus 0.678, all p>0.05), and the 10-fold cross-validation evaluation yielded similar results. The AUCs of the 2D and 3D models were comparable in both the binary and tertile volume analyses (all p>0.5). CONCLUSION NCCT-derived 2D radiomics features exhibited acceptable performance, similar to the 3D features, in predicting HE, and this comparability seemed unaffected by initial haematoma volume. The 2D signature may be preferred in future HE-related radiomics work given its compatibility with the emergency setting of ICH.
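A minimal version of the 2D-versus-3D comparison can be expressed as two identical machine-learning pipelines run on the two feature matrices. The sketch below uses one of the four classifiers named above (L1-regularised logistic regression) on placeholder feature matrices; the feature extraction itself (for example with a package such as PyRadiomics) and the actual data are not reproduced.

```python
# Compare 2D vs 3D radiomics features for haematoma-expansion prediction
# (sketch; feature matrices are assumed to be pre-extracted elsewhere).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_2d = rng.normal(size=(307, 100))        # placeholder 2D feature matrix (n_patients x n_features)
X_3d = rng.normal(size=(307, 100))        # placeholder 3D feature matrix
y = rng.integers(0, 2, size=307)          # haematoma-expansion labels (placeholder)

model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=1.0))

auc_2d = cross_val_score(model, X_2d, y, cv=10, scoring="roc_auc").mean()
auc_3d = cross_val_score(model, X_3d, y, cv=10, scoring="roc_auc").mean()
print(f"10-fold CV AUC: 2D={auc_2d:.3f}  3D={auc_3d:.3f}")
```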
Collapse
Affiliation(s)
- Q Chen
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - C Fu
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - X Qiu
- Department of Radiology, Qian Tang District of Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - J He
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - T Zhao
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - Q Zhang
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - X Hu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - H Hu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China.
| |
Collapse
|
24
|
Shrivastava M, Ye L. Neuroimaging and artificial intelligence for assessment of chronic painful temporomandibular disorders-a comprehensive review. Int J Oral Sci 2023; 15:58. [PMID: 38155153 PMCID: PMC10754947 DOI: 10.1038/s41368-023-00254-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/19/2023] [Accepted: 10/20/2023] [Indexed: 12/30/2023] Open
Abstract
Chronic painful temporomandibular disorders (TMD) are challenging to diagnose and manage due to their complexity and an incomplete understanding of the underlying brain mechanisms. Over the past few decades, neuroimaging research has clarified the neural mechanisms of pain regulation and perception. Advances in neuroimaging have bridged the gap between brain activity and the subjective experience of pain. Neuroimaging has also made strides toward separating the neural mechanisms underlying chronic painful TMD. Recently, artificial intelligence (AI) has been transforming various sectors by automating tasks that previously required human intelligence, and it has started to contribute to the recognition, assessment, and understanding of painful TMD. The application of AI and neuroimaging to the pathophysiology and diagnosis of chronic painful TMD is still in its early stages. The objective of the present review is to identify the contemporary neuroimaging approaches, such as structural, functional, and molecular techniques, that have been used to investigate the brain of individuals with chronic painful TMD. Furthermore, this review guides practitioners on relevant aspects of AI and how AI and neuroimaging methods can revolutionize our understanding of the mechanisms of painful TMD and aid in both diagnosis and management to enhance patient outcomes.
Collapse
Affiliation(s)
- Mayank Shrivastava
- Adams School of Dentistry, University of North Carolina, Chapel Hill, NC, USA
| | - Liang Ye
- Department of Rehabilitation Medicine, University of Minnesota Medical School, Minneapolis, MN, USA.
| |
Collapse
|
25
|
Garaba A, Ponzio F, Grasso EA, Brinjikji W, Fontanella MM, De Maria L. Radiomics for Differentiation of Pediatric Posterior Fossa Tumors: A Meta-Analysis and Systematic Review of the Literature. Cancers (Basel) 2023; 15:5891. [PMID: 38136435 PMCID: PMC10742196 DOI: 10.3390/cancers15245891] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 11/11/2023] [Accepted: 12/14/2023] [Indexed: 12/24/2023] Open
Abstract
PURPOSE To better define the overall performance of current radiomics-based models for the discrimination of pediatric posterior fossa tumors. METHODS A comprehensive literature search of the PubMed, Ovid MEDLINE, Ovid EMBASE, Web of Science, and Scopus databases was designed and conducted by an experienced librarian. We estimated overall sensitivity (SEN) and specificity (SPE). Event rates were pooled across studies using a random-effects meta-analysis, and the χ2 test was performed to assess heterogeneity. RESULTS Overall SEN and SPE for differentiation between medulloblastoma (MB), pilocytic astrocytoma (PA), and ependymoma (EP) were promising, with SEN values of 93% (95% CI = 0.88-0.96), 83% (95% CI = 0.66-0.93), and 85% (95% CI = 0.71-0.93), and corresponding SPE values of 87% (95% CI = 0.82-0.90), 95% (95% CI = 0.90-0.98) and 90% (95% CI = 0.84-0.94), respectively. For MB, logistic regression (LR) classifiers show a better trend, while textural features are the most used and the best performing (ACC 96%). For PA and EP, the combined use of LR and neural network (NN) classifiers, together with geometrical or morphological features, demonstrated superior performance (ACC 94% and 96%, respectively). CONCLUSIONS The diagnostic performance is high, making radiomics a helpful method for discriminating these tumor types. In the forthcoming years, we expect even more precise models.
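The pooling strategy described in the methods (random-effects meta-analysis of event rates with a χ2 heterogeneity test) corresponds to standard DerSimonian-Laird arithmetic. The sketch below pools logit-transformed sensitivities from three hypothetical studies; the counts are placeholders, not values extracted from the included papers.

```python
# Random-effects (DerSimonian-Laird) pooling of logit-transformed sensitivities,
# with Cochran's Q for heterogeneity; study counts below are placeholders.
import numpy as np
from scipy.stats import chi2

tp = np.array([45, 30, 60])               # true positives per study (placeholder)
fn = np.array([4, 3, 5])                  # false negatives per study (placeholder)

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                     # variance of the logit-transformed proportion

w = 1 / var                                # fixed-effect weights
fixed = np.sum(w * logit) / np.sum(w)
Q = np.sum(w * (logit - fixed) ** 2)       # Cochran's Q
df = len(tp) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (var + tau2)                    # random-effects weights
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity = {pooled_sens:.3f}, Q = {Q:.2f}, p = {chi2.sf(Q, df):.3f}")
```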
Collapse
Affiliation(s)
- Alexandru Garaba
- Department of Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, 25121 Brescia, Italy; (M.M.F.); or (L.D.M.)
- Unit of Neurosurgery, Spedali Civili Hospital, Largo Spedali Civili 1, 25123 Brescia, Italy
| | - Francesco Ponzio
- Interuniversity Department of Regional and Urban Studies and Planning, Politecnico di Torino, 10129 Torino, Italy;
| | - Eleonora Agata Grasso
- Department of Pediatrics, Children’s Hospital of Philadelphia, Philadelphia, PA 19146, USA;
| | - Waleed Brinjikji
- Department of Neurosurgery and Interventional Neuroradiology, Mayo Clinic, Rochester, MN 55905, USA;
| | - Marco Maria Fontanella
- Department of Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, 25121 Brescia, Italy; (M.M.F.); or (L.D.M.)
| | - Lucio De Maria
- Department of Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, 25121 Brescia, Italy; (M.M.F.); or (L.D.M.)
- Department of Clinical Neuroscience, Geneva University Hospitals (HUG), 1205 Geneva, Switzerland
| |
Collapse
|
26
|
Zhu F, Yang C, Zou J, Ma W, Wei Y, Zhao Z. The classification of benign and malignant lung nodules based on CT radiomics: a systematic review, quality score assessment, and meta-analysis. Acta Radiol 2023; 64:3074-3084. [PMID: 37817511 DOI: 10.1177/02841851231205737] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/12/2023]
Abstract
Radiomics methods are increasingly used to distinguish benign from malignant lung nodules, and early assessment is essential for prognosis and treatment strategy formulation. This meta-analysis evaluated the diagnostic performance of computed tomography (CT)-based radiomics for distinguishing between benign and malignant lung nodules. Between January 2000 and December 2021, we searched the PubMed and Embase electronic databases for studies in English. Studies were included if they reported the sensitivity and specificity of CT-based radiomics for diagnosing benign and malignant lung nodules. The studies were evaluated using QUADAS-2 and the radiomics quality score (RQS). Heterogeneity and publication bias were also evaluated, and subgroup analyses were performed to investigate factors affecting diagnostic efficiency. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed for this meta-analysis. A total of 20 studies involving 3793 patients were included. The pooled sensitivity, specificity, diagnostic odds ratio, and area under the summary receiver operating characteristic curve of CT radiomics for diagnosing benign and malignant lung nodules were 0.81, 0.86, 27.00, and 0.91, respectively. Deeks' funnel plot asymmetry test confirmed no significant publication bias across studies. Fagan nomograms showed a 40% increase in post-test probability among pretest-positive patients. Current evidence shows that CT-based radiomics has high accuracy in the diagnosis of benign and malignant lung nodules.
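The Fagan-nomogram result can be checked by hand from the pooled sensitivity and specificity via likelihood ratios. The short calculation below uses the pooled values reported above; the 50% pretest probability is an illustrative assumption, not a figure from the meta-analysis.

```python
# Post-test probability from pooled sensitivity/specificity via likelihood
# ratios (Fagan-nomogram arithmetic); the pretest probability is illustrative.
sens, spec = 0.81, 0.86                    # pooled values reported above
lr_pos = sens / (1 - spec)                 # positive likelihood ratio, about 5.8
lr_neg = (1 - sens) / spec                 # negative likelihood ratio, about 0.22

pretest = 0.50                             # assumed pretest probability
pre_odds = pretest / (1 - pretest)
post_odds = pre_odds * lr_pos
posttest = post_odds / (1 + post_odds)
print(f"LR+ = {lr_pos:.2f}, post-test probability after a positive test = {posttest:.2f}")
```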
Collapse
Affiliation(s)
- Fandong Zhu
- Department of Radiology, Key Laboratory of Functional Molecular Imaging of Tumor and Interventional Diagnosis and Treatment of Shaoxing City, Shaoxing People's Hospital, Shaoxing, PR China
| | - Chen Yang
- Department of Radiology, Key Laboratory of Functional Molecular Imaging of Tumor and Interventional Diagnosis and Treatment of Shaoxing City, Shaoxing People's Hospital, Shaoxing, PR China
| | - Jiajun Zou
- Department of Radiology, Key Laboratory of Functional Molecular Imaging of Tumor and Interventional Diagnosis and Treatment of Shaoxing City, Shaoxing People's Hospital, Shaoxing, PR China
| | - Weili Ma
- Department of Radiology, Key Laboratory of Functional Molecular Imaging of Tumor and Interventional Diagnosis and Treatment of Shaoxing City, Shaoxing People's Hospital, Shaoxing, PR China
| | - Yuguo Wei
- Precision Health Institution, GE Healthcare, Hangzhou, Zhejiang, PR China
| | - Zhenhua Zhao
- Department of Radiology, Key Laboratory of Functional Molecular Imaging of Tumor and Interventional Diagnosis and Treatment of Shaoxing City, Shaoxing People's Hospital, Shaoxing, PR China
| |
Collapse
|
27
|
Ong W, Liu RW, Makmur A, Low XZ, Sng WJ, Tan JH, Kumar N, Hallinan JTPD. Artificial Intelligence Applications for Osteoporosis Classification Using Computed Tomography. Bioengineering (Basel) 2023; 10:1364. [PMID: 38135954 PMCID: PMC10741220 DOI: 10.3390/bioengineering10121364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 11/21/2023] [Accepted: 11/23/2023] [Indexed: 12/24/2023] Open
Abstract
Osteoporosis, marked by low bone mineral density (BMD) and a high fracture risk, is a major health issue. Recent progress in medical imaging, especially CT scans, offers new ways of diagnosing and assessing osteoporosis. This review examines the use of AI analysis of CT scans to stratify BMD and diagnose osteoporosis. By summarizing the relevant studies, we aimed to assess the effectiveness, constraints, and potential impact of AI-based osteoporosis classification (severity) via CT. A systematic search of electronic databases (PubMed, MEDLINE, Web of Science, ClinicalTrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 39 articles were retrieved from the databases, and the key findings were compiled and summarized, including the regions analyzed, the type of CT imaging, and their efficacy in predicting BMD compared with conventional DXA studies. Important considerations and limitations are also discussed. The overall reported accuracy, sensitivity, and specificity of AI in classifying osteoporosis using CT images ranged from 61.8% to 99.4%, 41.0% to 100.0%, and 31.0% to 100.0%, respectively, with areas under the curve (AUCs) ranging from 0.582 to 0.994. While additional research is necessary to validate the clinical efficacy and reproducibility of these AI tools before incorporating them into routine clinical practice, these studies demonstrate the promising potential of using CT to opportunistically predict and classify osteoporosis without the need for DXA.
Collapse
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
| | - Ren Wei Liu
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Weizhong Jonathan Sng
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
| | - Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
| | - James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| |
Collapse
|
28
|
Flanders AE, Geis JR. NextGen Neuroradiology AI. Radiology 2023; 309:e231426. [PMID: 37987667 DOI: 10.1148/radiol.231426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2023]
Affiliation(s)
- Adam E Flanders
- From the Department of Radiology, Thomas Jefferson University, 132 S 10th St, Suite 1080B Main Building, Philadelphia, PA 19107 (A.E.F.); and Department of Radiology, National Jewish Health, Denver, Colo (J.R.G.)
| | - J Raymond Geis
- From the Department of Radiology, Thomas Jefferson University, 132 S 10th St, Suite 1080B Main Building, Philadelphia, PA 19107 (A.E.F.); and Department of Radiology, National Jewish Health, Denver, Colo (J.R.G.)
| |
Collapse
|
29
|
da Silva HEC, Santos GNM, Leite AF, Mesquita CRM, Figueiredo PTDS, Stefani CM, de Melo NS. The use of artificial intelligence tools in cancer detection compared to the traditional diagnostic imaging methods: An overview of the systematic reviews. PLoS One 2023; 18:e0292063. [PMID: 37796946 PMCID: PMC10553229 DOI: 10.1371/journal.pone.0292063] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 09/12/2023] [Indexed: 10/07/2023] Open
Abstract
BACKGROUND AND PURPOSE The aim of this overview article is to analyze the accuracy of Artificial Intelligence (AI) techniques for the identification and diagnosis of malignant tumors in adult patients, in comparison to conventional medical imaging diagnostic modalities. DATA SOURCES The acronym PIRDs was used and a comprehensive literature search was conducted in PubMed, Cochrane, Scopus, Web of Science, LILACS, Embase, Scielo, EBSCOhost, and grey literature through ProQuest, Google Scholar, and JSTOR for systematic reviews of AI as a diagnostic model and/or detection tool for any cancer type in adult patients, compared to the traditional diagnostic radiographic imaging model. There were no limits on publishing status, publication time, or language. Pairs of reviewers worked independently on study selection and risk-of-bias evaluation. RESULTS In total, 382 records were retrieved from the databases, 364 remained after removing duplicates, 32 satisfied the full-text reading criterion, and 9 papers were included in the qualitative synthesis. Although there was heterogeneity in methodological aspects, patient populations, and techniques used, the studies found that several AI approaches are promising in terms of specificity, sensitivity, and diagnostic accuracy for the detection and diagnosis of malignant tumors. When compared with other machine learning algorithms, the Support Vector Machine method performed better in cancer detection and diagnosis. Computer-assisted detection (CAD) has shown promise in aiding cancer detection when compared with the traditional method of diagnosis. CONCLUSIONS The detection and diagnosis of malignant tumors with the help of AI appears to be feasible and accurate with different technologies, such as CAD systems, deep and machine learning algorithms, and radiomic analysis, when compared with the traditional model, although these technologies cannot replace the professional radiologist in the analysis of medical images. Although there are limitations regarding generalization to all types of cancer, these AI tools might aid professionals, serving as an auxiliary and teaching tool, especially for less-trained professionals. Therefore, further longitudinal studies with a longer follow-up duration are required for a better understanding of the clinical application of these artificial intelligence systems. TRIAL REGISTRATION Systematic review registration. Prospero registration number: CRD42022307403.
Collapse
Affiliation(s)
| | | | - André Ferreira Leite
- Faculty of Health Science, Dentistry of Department, Brasilia University, Brasilia, Brazil
| | | | | | - Cristine Miron Stefani
- Faculty of Health Science, Dentistry of Department, Brasilia University, Brasilia, Brazil
| | - Nilce Santos de Melo
- Faculty of Health Science, Dentistry of Department, Brasilia University, Brasilia, Brazil
| |
Collapse
|
30
|
Fawaz P, Sayegh PE, Vannet BV. What is the current state of artificial intelligence applications in dentistry and orthodontics? JOURNAL OF STOMATOLOGY, ORAL AND MAXILLOFACIAL SURGERY 2023; 124:101524. [PMID: 37270174 DOI: 10.1016/j.jormas.2023.101524] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Revised: 05/08/2023] [Accepted: 05/31/2023] [Indexed: 06/05/2023]
Abstract
BACKGROUND The use of Artificial Intelligence (AI) in the medical field has the potential to bring about significant improvements in patient care and outcomes. AI is being used in dentistry, and more specifically in orthodontics, through the development of diagnostic imaging tools, treatment planning tools, and robotic surgery. The aim of this study is to present the latest emerging AI software and applications in the dental field. TYPES OF STUDIES REVIEWED Search strategies were conducted in three electronic databases, with no date limits, up to April 30, 2023: MEDLINE, PUBMED, and GOOGLE® SCHOLAR, for articles related to AI in dentistry and orthodontics. No inclusion or exclusion criteria were used for article selection. Most of the included articles (n = 79) are literature reviews, retrospective/prospective studies, systematic reviews and meta-analyses, and observational studies. RESULTS The use of AI in dentistry and orthodontics is a rapidly growing area of research and development, with the potential to revolutionize the field and bring about significant improvements in patient care and outcomes; this can save clinicians' chair time and support more individualized treatment plans. Results from the studies reported in this review suggest that the accuracy of AI-based systems is promising and reliable. PRACTICAL IMPLICATIONS AI applications in healthcare have proven efficient and can help the dentist be more precise in diagnosis and clinical decision-making. These systems can simplify tasks and provide results quickly, saving dentists time and helping them perform their duties more efficiently. They can also serve as auxiliary support for dentists with less experience.
Collapse
Affiliation(s)
- Paul Fawaz
- Academic Lecturer & Researcher, Orthodontic Department, Université de Lorraine, Nancy, France.
| | | | - Bart Vande Vannet
- Clinical and Academic Lead of the Orthodontic Department, Université de Lorraine, Nancy, France.
| |
Collapse
|
31
|
Yan Q, Li F, Cui Y, Wang Y, Wang X, Jia W, Liu X, Li Y, Chang H, Shi F, Xia Y, Zhou Q, Zeng Q. Discrimination Between Glioblastoma and Solitary Brain Metastasis Using Conventional MRI and Diffusion-Weighted Imaging Based on a Deep Learning Algorithm. J Digit Imaging 2023; 36:1480-1488. [PMID: 37156977 PMCID: PMC10406764 DOI: 10.1007/s10278-023-00838-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 04/13/2023] [Accepted: 04/18/2023] [Indexed: 05/10/2023] Open
Abstract
This study aims to develop and validate a deep learning (DL) model to differentiate glioblastoma from single brain metastasis (BM) using conventional MRI combined with diffusion-weighted imaging (DWI). Preoperative conventional MRI and DWI of 202 patients with solitary brain tumors (104 glioblastoma and 98 BM) were retrospectively obtained between February 2016 and September 2022. The data were divided into training and validation sets in a 7:3 ratio. An additional 32 patients (19 glioblastoma and 13 BM) from a different hospital were used as the testing set. Single-MRI-sequence DL models were developed using the 3D residual network-18 architecture in tumoral (T model) and tumoral + peritumoral regions (T&P model). Furthermore, combination models based on conventional MRI and DWI were developed. The area under the receiver operating characteristic curve (AUC) was used to assess classification performance. The attention area of the model was visualized as a heatmap using the gradient-weighted class activation mapping technique. For the single-MRI-sequence DL models, the T2WI sequence achieved the highest AUC in the validation set with either the T model (0.889) or the T&P model (0.934). Among the T&P combination models, DWI combined with T2WI and with contrast-enhanced T1WI showed increased AUCs of 0.949 and 0.930, respectively, compared with the single-MRI-sequence models in the validation set. The highest AUC (0.956) was achieved by combining contrast-enhanced T1WI, T2WI, and DWI. In the heatmaps, the central region of the tumor was hotter and received more attention than other areas, and was more important for differentiating glioblastoma from BM. A conventional MRI-based DL model could differentiate glioblastoma from solitary BM, and the combination models improved classification performance.
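The 3D residual network-18 backbone used here is widely available; a minimal sketch of the general idea, with the three MRI sequences stacked as input channels, is shown below using torchvision's r3d_18. It illustrates the architecture class only and is not the authors' implementation; the input patch size and channel ordering are assumptions.

```python
# 3D ResNet-18 sketch for glioblastoma vs. brain metastasis classification,
# with contrast-enhanced T1WI, T2WI and DWI stacked as three input channels.
# Illustrative only; pre-processing, patch size and training details are assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)      # two classes: glioblastoma vs. metastasis

# A mini-batch of 4 tumour volumes: (batch, channels, depth, height, width)
x = torch.randn(4, 3, 32, 112, 112)                # 3 channels = CE-T1WI, T2WI, DWI
logits = model(x)
probs = torch.softmax(logits, dim=1)
print(probs.shape)                                  # torch.Size([4, 2])
```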
Collapse
Affiliation(s)
- Qingqing Yan
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
- Shandong First Medical University, Jinan, China
| | - Fuyan Li
- Department of Radiology, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, China
| | - Yi Cui
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
| | - Yong Wang
- Shandong Cancer Hospital and Institute, Shandong First Medical University, Shandong Academy of Medical Sciences, Jinan, China
| | - Xiao Wang
- Department of Radiology, Jining NO.1 People's Hospital, Jining, China
| | - Wenjing Jia
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
| | - Xinhui Liu
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
| | - Yuting Li
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
| | - Huan Chang
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
| | - Feng Shi
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Yuwei Xia
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Qing Zhou
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Qingshi Zeng
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China.
| |
Collapse
|
32
|
Tanenbaum LN, Bash SC, Zaharchuk G, Shankaranarayanan A, Chamberlain R, Wintermark M, Beaulieu C, Novick M, Wang L. Deep Learning-Generated Synthetic MR Imaging STIR Spine Images Are Superior in Image Quality and Diagnostically Equivalent to Conventional STIR: A Multicenter, Multireader Trial. AJNR Am J Neuroradiol 2023; 44:987-993. [PMID: 37414452 PMCID: PMC10411840 DOI: 10.3174/ajnr.a7920] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 06/01/2023] [Indexed: 07/08/2023]
Abstract
BACKGROUND AND PURPOSE Deep learning image reconstruction allows faster MR imaging acquisitions while matching or exceeding the standard of care and can create synthetic images from existing data sets. This multicenter, multireader spine study evaluated the performance of synthetically created STIR compared with acquired STIR. MATERIALS AND METHODS From a multicenter, multiscanner data base of 328 clinical cases, a nonreader neuroradiologist randomly selected 110 spine MR imaging studies in 93 patients (sagittal T1, T2, and STIR) and classified them into 5 categories of disease and healthy. A DICOM-based deep learning application generated a synthetically created STIR series from the sagittal T1 and T2 images. Five radiologists (3 neuroradiologists, 1 musculoskeletal radiologist, and 1 general radiologist) rated the STIR quality and classified disease pathology (study 1, n = 80). They then assessed the presence or absence of findings typically evaluated with STIR in patients with trauma (study 2, n = 30). The readers evaluated studies with either acquired STIR or synthetically created STIR in a blinded and randomized fashion with a 1-month washout period. The interchangeability of acquired STIR and synthetically created STIR was assessed using a noninferiority threshold of 10%. RESULTS For classification, the expected decrease in interreader agreement from randomly introducing synthetically created STIR was 3.23%. For trauma, there was an overall increase in interreader agreement of +1.9%. The lower bound of confidence for both exceeded the noninferiority threshold, indicating interchangeability of synthetically created STIR with acquired STIR. Both the Wilcoxon signed-rank and t tests showed higher image-quality scores for synthetically created STIR over acquired STIR (P < .0001). CONCLUSIONS Synthetically created STIR spine MR images were diagnostically interchangeable with acquired STIR while providing significantly higher image quality, suggesting potential for routine clinical practice.
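The image-quality comparison reported above relies on the Wilcoxon signed-rank test on paired reader scores. The sketch below shows that calculation on placeholder scores; it does not reproduce the study's ratings or the interchangeability analysis.

```python
# Paired comparison of image-quality scores (synthetic vs. acquired STIR) with
# the Wilcoxon signed-rank test; the scores below are placeholders, not study data.
import numpy as np
from scipy.stats import wilcoxon

acquired = np.array([3, 2, 3, 3, 2, 3, 2, 3, 3, 2])     # reader quality scores, acquired STIR
synthetic = np.array([4, 3, 3, 4, 3, 4, 3, 4, 3, 3])    # reader quality scores, synthetic STIR

stat, p = wilcoxon(synthetic, acquired)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
```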
Collapse
Affiliation(s)
| | - S C Bash
- From RadNet (L.N.T., S.C.B.), New York, New York
| | - G Zaharchuk
- Stanford University Medical Center (G.Z., C.B.), Stanford, California
| | | | - R Chamberlain
- Subtle Medical (A.S., R.C., L.W.), Menlo Park, California
| | - M Wintermark
- MD Anderson Cancer Center (M.W.), University of Texas, Houston, Texas
| | - C Beaulieu
- Stanford University Medical Center (G.Z., C.B.), Stanford, California
| | - M Novick
- All-American Teleradiology (M.N.), Bay Village, Ohio
| | - L Wang
- Subtle Medical (A.S., R.C., L.W.), Menlo Park, California
| |
Collapse
|
33
|
Wang B, Pan Y, Xu S, Zhang Y, Ming Y, Chen L, Liu X, Wang C, Liu Y, Xia Y. Quantitative Cerebral Blood Volume Image Synthesis from Standard MRI Using Image-to-Image Translation for Brain Tumors. Radiology 2023; 308:e222471. [PMID: 37581504 DOI: 10.1148/radiol.222471] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
Background Cerebral blood volume (CBV) maps derived from dynamic susceptibility contrast-enhanced (DSC) MRI are useful but not commonly available in clinical scenarios. Purpose To test image-to-image translation techniques for generating CBV maps from standard MRI sequences of brain tumors using the bookend technique DSC MRI as ground-truth references. Materials and Methods A total of 756 MRI examinations, including quantitative CBV maps produced from bookend DSC MRI, were included in this retrospective study. Two algorithms, the feature-consistency generative adversarial network (GAN) and three-dimensional encoder-decoder network with only mean absolute error loss, were trained to synthesize CBV maps. The performance of the two algorithms was evaluated quantitatively using the structural similarity index (SSIM) and qualitatively by two neuroradiologists using a four-point Likert scale. The clinical value of combining synthetic CBV maps and standard MRI scans of brain tumors was assessed in several clinical scenarios (tumor grading, prognosis prediction, differential diagnosis) using multicenter data sets (four external and one internal). Differences in diagnostic and predictive accuracy were tested using the z test. Results The three-dimensional encoder-decoder network with T1-weighted images, contrast-enhanced T1-weighted images, and apparent diffusion coefficient maps as the input achieved the highest synthetic performance (SSIM, 86.29% ± 4.30). The mean qualitative score of the synthesized CBV maps by neuroradiologists was 2.63. Combining synthetic CBV with standard MRI improved the accuracy of grading gliomas (standard MRI scans area under the receiver operating characteristic curve [AUC], 0.707; standard MRI scans with CBV maps AUC, 0.857; z = 15.17; P < .001), prediction of prognosis in gliomas (standard MRI scans AUC, 0.654; standard MRI scans with CBV maps AUC, 0.793; z = 9.62; P < .001), and differential diagnosis between tumor recurrence and treatment response in gliomas (standard MRI scans AUC, 0.778; standard MRI scans with CBV maps AUC, 0.853; z = 4.86; P < .001) and brain metastases (standard MRI scans AUC, 0.749; standard MRI scans with CBV maps AUC, 0.857; z = 6.13; P < .001). Conclusion GAN image-to-image translation techniques produced accurate synthetic CBV maps from standard MRI scans, which could be used for improving the clinical evaluation of brain tumors. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Branstetter in this issue.
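Of the two algorithms compared above, the simpler one, a three-dimensional encoder-decoder trained with a mean-absolute-error loss, can be sketched compactly in PyTorch. The toy network below is far smaller than anything clinically useful; the input channel layout (T1WI, contrast-enhanced T1WI, ADC) follows the best-performing combination named in the abstract, and everything else is an assumption.

```python
# Tiny 3D encoder-decoder trained with a mean-absolute-error (L1) loss to map
# standard MRI channels to a CBV map; a structural sketch only.
import torch
import torch.nn as nn

class Tiny3DEncoderDecoder(nn.Module):
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(base, 1, 4, stride=2, padding=1),  # one output channel: CBV
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Tiny3DEncoderDecoder()
criterion = nn.L1Loss()                              # mean absolute error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(2, 3, 32, 64, 64)                    # (batch, sequences: T1WI/CE-T1WI/ADC, D, H, W)
target_cbv = torch.randn(2, 1, 32, 64, 64)           # ground-truth CBV from bookend DSC MRI
loss = criterion(model(x), target_cbv)
loss.backward()
optimizer.step()
print(float(loss))
```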
Collapse
Affiliation(s)
- Bao Wang
- From the Department of Radiology, Qilu Hospital of Shandong University, Jinan, China (B.W.); School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China (Y.P., Y.X.); Departments of Neurosurgery (B.W., S.X., Y.L.) and Radiology (Y.Z.), Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China; Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China (Y.M., L.C., Y.L.); Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China (X.L.); Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China (C.W.); and Shandong Institute of Brain Science and Brain-inspired Research, Shandong First Medical University, Jinan, China (Y.L.)
| | - Yongsheng Pan
- From the Department of Radiology, Qilu Hospital of Shandong University, Jinan, China (B.W.); School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China (Y.P., Y.X.); Departments of Neurosurgery (B.W., S.X., Y.L.) and Radiology (Y.Z.), Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China; Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China (Y.M., L.C., Y.L.); Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China (X.L.); Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China (C.W.); and Shandong Institute of Brain Science and Brain-inspired Research, Shandong First Medical University, Jinan, China (Y.L.)
| | - Shangchen Xu
- From the Department of Radiology, Qilu Hospital of Shandong University, Jinan, China (B.W.); School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China (Y.P., Y.X.); Departments of Neurosurgery (B.W., S.X., Y.L.) and Radiology (Y.Z.), Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China; Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China (Y.M., L.C., Y.L.); Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China (X.L.); Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China (C.W.); and Shandong Institute of Brain Science and Brain-inspired Research, Shandong First Medical University, Jinan, China (Y.L.)
| | - Yi Zhang
- From the Department of Radiology, Qilu Hospital of Shandong University, Jinan, China (B.W.); School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China (Y.P., Y.X.); Departments of Neurosurgery (B.W., S.X., Y.L.) and Radiology (Y.Z.), Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China; Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China (Y.M., L.C., Y.L.); Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China (X.L.); Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China (C.W.); and Shandong Institute of Brain Science and Brain-inspired Research, Shandong First Medical University, Jinan, China (Y.L.)
| | - Yang Ming
- From the Department of Radiology, Qilu Hospital of Shandong University, Jinan, China (B.W.); School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China (Y.P., Y.X.); Departments of Neurosurgery (B.W., S.X., Y.L.) and Radiology (Y.Z.), Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China; Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China (Y.M., L.C., Y.L.); Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China (X.L.); Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China (C.W.); and Shandong Institute of Brain Science and Brain-inspired Research, Shandong First Medical University, Jinan, China (Y.L.)
| | - Ligang Chen
- From the Department of Radiology, Qilu Hospital of Shandong University, Jinan, China (B.W.); School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China (Y.P., Y.X.); Departments of Neurosurgery (B.W., S.X., Y.L.) and Radiology (Y.Z.), Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China; Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China (Y.M., L.C., Y.L.); Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China (X.L.); Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China (C.W.); and Shandong Institute of Brain Science and Brain-inspired Research, Shandong First Medical University, Jinan, China (Y.L.)
| | - Xuejun Liu
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
| | - Chengwei Wang
- Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China
| | - Yingchao Liu
- Department of Neurosurgery, Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China; Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China; and Shandong Institute of Brain Science and Brain-inspired Research, Shandong First Medical University, Jinan, China
| | - Yong Xia
- School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
| |
34
Perillo T, de Giorgi M, Papace UM, Serino A, Cuocolo R, Manto A. Current role of machine learning and radiogenomics in precision neuro-oncology. EXPLORATION OF TARGETED ANTI-TUMOR THERAPY 2023; 4:545-555. [PMID: 37720347 PMCID: PMC10501892 DOI: 10.37349/etat.2023.00151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 04/20/2023] [Indexed: 09/19/2023] Open
Abstract
In the past few years, artificial intelligence (AI) has been increasingly used to create tools that can enhance workflow in medicine. In particular, neuro-oncology has benefited from the use of AI and especially machine learning (ML) and radiogenomics, which are subfields of AI. ML can be used to develop algorithms that dynamically learn from available medical data in order to automatically do specific tasks. On the other hand, radiogenomics can identify relationships between tumor genetics and imaging features, thus possibly giving new insights into the pathophysiology of tumors. Therefore, ML and radiogenomics could help treatment tailoring, which is crucial in personalized neuro-oncology. The aim of this review is to illustrate current and possible future applications of ML and radiomics in neuro-oncology.
Affiliation(s)
- Teresa Perillo
- Department of Neuroradiology, “Umberto I” Hospital, 84014 Nocera Inferiore, Italy
| | - Marco de Giorgi
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80138 Naples, Italy
| | - Umberto Maria Papace
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80138 Naples, Italy
| | - Antonietta Serino
- Department of Neuroradiology, “Umberto I” Hospital, 84014 Nocera Inferiore, Italy
| | - Renato Cuocolo
- Department of Medicine, Surgery, and Dentistry, University of Salerno, 84084 Fisciano, Italy
| | - Andrea Manto
- Department of Neuroradiology, “Umberto I” Hospital, 84014 Nocera Inferiore, Italy
| |
35
Huang H, Li R, Zhang J. A review of visual sustained attention: neural mechanisms and computational models. PeerJ 2023; 11:e15351. [PMID: 37334118 PMCID: PMC10274610 DOI: 10.7717/peerj.15351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 04/13/2023] [Indexed: 06/20/2023] Open
Abstract
Sustained attention is one of the basic abilities of humans to maintain concentration on relevant information while ignoring irrelevant information over extended periods. The purpose of the review is to provide insight into how to integrate neural mechanisms of sustained attention with computational models to facilitate research and application. Although many studies have assessed attention, the evaluation of humans' sustained attention is not sufficiently comprehensive. Hence, this study provides a current review on both neural mechanisms and computational models of visual sustained attention. We first review models, measurements, and neural mechanisms of sustained attention and propose plausible neural pathways for visual sustained attention. Next, we analyze and compare the different computational models of sustained attention that the previous reviews have not systematically summarized. We then provide computational models for automatically detecting vigilance states and evaluation of sustained attention. Finally, we outline possible future trends in the research field of sustained attention.
Affiliation(s)
- Huimin Huang
- National Engineering Research Center for E-learning, Central China Normal University, Wuhan, Hubei, China
| | - Rui Li
- National Engineering Research Center for E-learning, Central China Normal University, Wuhan, Hubei, China
| | - Junsong Zhang
- Brain Cognition and Intelligent Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, Xiamen, Fujian, China
| |
36
Chen R, Li D, Zhao S, Zhang Y, Wang H, Wu Y. Simulation of dynamic monitoring for intracerebral hemorrhage based on magnetic induction phase shift technology. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2023; 94:064101. [PMID: 37862492 DOI: 10.1063/5.0107788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Accepted: 05/18/2023] [Indexed: 10/22/2023]
Abstract
Intracerebral hemorrhage (ICH) is a common and severe brain disease associated with high mortality and morbidity. Accurate measurement of the ICH area is an essential indicator for doctors to determine whether a surgical operation is necessary. However, although currently used clinical detection methods, such as computed tomography (CT) and magnetic resonance imaging (MRI), provide high-quality images, they may have limitations such as high costs, large equipment size, and radiation exposure to the human body in the case of CT. These limitations make long-term bedside monitoring infeasible. This paper presents a dynamic monitoring method for ICH areas based on magnetic induction. This study investigates the influence of the bleeding area and the position of ICH on the phase difference at the detection point near the area to be measured. The study applies a neural network algorithm to predict the bleeding area using the phase difference data received by the detection coil as the network input and the bleeding area as the network output. The relative error between the predicted and actual values of the neural network is calculated, and the error of each group of data is less than 4%, which confirms the feasibility of this method for detection and even trend monitoring of the ICH area.
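As a rough illustration of the regression step described above, the short Python sketch below maps simulated multi-coil phase-difference readings to a bleeding area and reports the relative prediction error; it is not the authors' code, and the coil count, network size, and data are synthetic placeholders.
# Illustrative sketch only: regress a bleeding area on simulated phase-difference
# readings from several detection coils, then report the mean relative error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_coils = 500, 8
phase_diff = rng.normal(0.0, 1.0, size=(n_samples, n_coils))   # simulated coil inputs
area = 2.0 + phase_diff @ rng.uniform(0.1, 0.5, size=n_coils)   # simulated ICH area (arbitrary units)

X_train, X_test, y_train, y_test = train_test_split(phase_diff, area, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

rel_err = np.abs(model.predict(X_test) - y_test) / np.abs(y_test)
print(f"mean relative error: {rel_err.mean():.3%}")   # the paper reports <4% per group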
Affiliation(s)
- Ruijuan Chen
- School of Life Sciences, TianGong University, Tianjin 300387, China
| | - Dandan Li
- School of Electrical and Information Engineering, TianGong University, Tianjin 300387, China
| | - Songsong Zhao
- School of Life Sciences, TianGong University, Tianjin 300387, China
| | - Yuanxin Zhang
- School of Life Sciences, TianGong University, Tianjin 300387, China
| | - Huiquan Wang
- School of Life Sciences, TianGong University, Tianjin 300387, China
| | - Yifan Wu
- School of Life Sciences, TianGong University, Tianjin 300387, China
| |
37
Khosravi P, Schweitzer M. Artificial intelligence in neuroradiology: a scoping review of some ethical challenges. FRONTIERS IN RADIOLOGY 2023; 3:1149461. [PMID: 37492387 PMCID: PMC10365008 DOI: 10.3389/fradi.2023.1149461] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2023] [Accepted: 04/27/2023] [Indexed: 07/27/2023]
Abstract
Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, developing models to determine treatment decisions, and improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, as well as responsibility and liability that might potentially arise. In this manuscript, we will first provide a brief overview of AI methods used in neuroradiology and segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain in alignment with ethics in research and healthcare in the future.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, NY, United States
| | - Mark Schweitzer
- Office of the Vice President for Health Affairs Office of the Vice President, Wayne State University, Detroit, MI, United States
| |
38
Pham N, Hill V, Rauschecker A, Lui Y, Niogi S, Fillipi CG, Chang P, Zaharchuk G, Wintermark M. Critical Appraisal of Artificial Intelligence-Enabled Imaging Tools Using the Levels of Evidence System. AJNR Am J Neuroradiol 2023; 44:E21-E28. [PMID: 37080722 PMCID: PMC10171388 DOI: 10.3174/ajnr.a7850] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 03/16/2023] [Indexed: 04/22/2023]
Abstract
Clinical adoption of an artificial intelligence-enabled imaging tool requires critical appraisal of its life cycle from development to implementation by using a systematic, standardized, and objective approach that can verify both its technical and clinical efficacy. Toward this concerted effort, the ASFNR/ASNR Artificial Intelligence Workshop Technology Working Group is proposing a hierarchal evaluation system based on the quality, type, and amount of scientific evidence that the artificial intelligence-enabled tool can demonstrate for each component of its life cycle. The current proposal is modeled after the levels of evidence in medicine, with the uppermost level of the hierarchy showing the strongest evidence for potential impact on patient care and health care outcomes. The intended goal of establishing an evidence-based evaluation system is to encourage transparency, foster an understanding of the creation of artificial intelligence tools and the artificial intelligence decision-making process, and to report the relevant data on the efficacy of artificial intelligence tools that are developed. The proposed system is an essential step in working toward a more formalized, clinically validated, and regulated framework for the safe and effective deployment of artificial intelligence imaging applications that will be used in clinical practice.
Affiliation(s)
- N Pham
- From the Department of Radiology (N.P., G.Z.), Stanford School of Medicine, Palo Alto, California
| | - V Hill
- Department of Radiology (V.H.), Northwestern University Feinberg School of Medicine, Chicago, Illinois
| | - A Rauschecker
- Department of Radiology (A.R.), University of California, San Francisco, San Francisco, California
| | - Y Lui
- Department of Radiology (Y.L.), NYU Grossman School of Medicine, New York, New York
| | - S Niogi
- Department of Radiology (S.N.), Weill Cornell Medicine, New York, New York
| | - C G Fillipi
- Department of Radiology (C.G.F.), Tufts University School of Medicine, Boston, Massachusetts
| | - P Chang
- Department of Radiology (P.C.), University of California, Irvine, Irvine, California
| | - G Zaharchuk
- From the Department of Radiology (N.P., G.Z.), Stanford School of Medicine, Palo Alto, California
| | - M Wintermark
- Department of Neuroradiology (M.W.), The University of Texas MD Anderson Cancer Center, Houston, Texas
| |
39
Mirkin S, Albensi BC. Should artificial intelligence be used in conjunction with Neuroimaging in the diagnosis of Alzheimer's disease? Front Aging Neurosci 2023; 15:1094233. [PMID: 37187577 PMCID: PMC10177660 DOI: 10.3389/fnagi.2023.1094233] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Accepted: 03/27/2023] [Indexed: 05/17/2023] Open
Abstract
Alzheimer's disease (AD) is a progressive, neurodegenerative disorder that affects memory, thinking, behavior, and other cognitive functions. Although there is no cure, detecting AD early is important for the development of a therapeutic plan and a care plan that may preserve cognitive function and prevent irreversible damage. Neuroimaging, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), has served as a critical tool in establishing diagnostic indicators of AD during the preclinical stage. However, as neuroimaging technology quickly advances, there is a challenge in analyzing and interpreting vast amounts of brain imaging data. Given these limitations, there is great interest in using artificial Intelligence (AI) to assist in this process. AI introduces limitless possibilities in the future diagnosis of AD, yet there is still resistance from the healthcare community to incorporate AI in the clinical setting. The goal of this review is to answer the question of whether AI should be used in conjunction with neuroimaging in the diagnosis of AD. To answer the question, the possible benefits and disadvantages of AI are discussed. The main advantages of AI are its potential to improve diagnostic accuracy, improve the efficiency in analyzing radiographic data, reduce physician burnout, and advance precision medicine. The disadvantages include generalization and data shortage, lack of in vivo gold standard, skepticism in the medical community, potential for physician bias, and concerns over patient information, privacy, and safety. Although the challenges present fundamental concerns and must be addressed when the time comes, it would be unethical not to use AI if it can improve patient health and outcome.
Affiliation(s)
- Sophia Mirkin
- Dr. Kiran C. Patel College of Osteopathic Medicine, Nova Southeastern University, Fort Lauderdale, FL, United States
| | - Benedict C. Albensi
- Barry and Judy Silverman College of Pharmacy, Nova Southeastern University, Fort Lauderdale, FL, United States
- St. Boniface Hospital Research, Winnipeg, MB, Canada
- University of Manitoba, Winnipeg, MB, Canada
| |
40
Sun Q, Yang J, Zhao S, Chen C, Hou Y, Yuan Y, Ma S, Huang Y. LIVE-Net: Comprehensive 3D vessel extraction framework in CT angiography. Comput Biol Med 2023; 159:106886. [PMID: 37062255 DOI: 10.1016/j.compbiomed.2023.106886] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 03/04/2023] [Accepted: 04/01/2023] [Indexed: 04/18/2023]
Abstract
The extraction of vessels from computed tomography angiography (CTA) is significant in diagnosing and evaluating vascular diseases. However, due to the anatomical complexity, wide intensity distribution, and small volume proportion of vessels, vessel extraction is laborious and time-consuming, and it is easy to lead to error-prone diagnostic results in clinical practice. This study proposes a novel comprehensive vessel extraction framework, called the Local Iterative-based Vessel Extraction Network (LIVE-Net), to achieve 3D vessel segmentation while tracking vessel centerlines. LIVE-Net contains dual dataflow pathways that work alternately: an iterative tracking network and a local segmentation network. The former can generate the fine-grain direction and radius prediction of a vascular patch by using the attention-embedded atrous pyramid network (aAPN), and the latter can achieve 3D vascular lumen segmentation by constructing the multi-order self-attention U-shape network (MOSA-UNet). LIVE-Net is trained and evaluated on two datasets: the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08) dataset and head and neck CTA dataset from the clinic. Experimental results of both tracking and segmentation show that our proposed LIVE-Net exhibits superior performance compared with other state-of-the-art (SOTA) networks. In the CAT08 dataset, the tracked centerlines have an average overlap of 95.2%, overlap until first error of 91.2%, overlap with the clinically relevant vessels of 98.3%, and error distance inside of 0.21 mm. The corresponding tracking overlap metrics in the head and neck CTA dataset are 96.7%, 91.0%, and 99.8%, respectively. In addition, the results of the consistent experiment also show strong clinical correspondence. For the segmentation of bilateral carotid and vertebral arteries, our method can not only achieve better accuracy with an average dice similarity coefficient (DSC) of 90.03%, Intersection over Union (IoU) of 81.97%, and 95% Hausdorff distance (95%HD) of 3.42 mm , but higher efficiency with an average time of 67.25 s , even three times faster compared to some methods applied in full field view. Both the tracking and segmentation results prove the potential clinical utility of our network.
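The overlap metrics quoted above (DSC and IoU) can be illustrated with a short sketch; the two random 3D masks below stand in for a predicted and a reference vessel segmentation and are not LIVE-Net outputs.
# Minimal sketch of the Dice similarity coefficient and Intersection over Union
# for 3D binary masks; inputs are random placeholders.
import numpy as np

def dice_and_iou(pred: np.ndarray, ref: np.ndarray):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dsc = 2 * inter / (pred.sum() + ref.sum())
    iou = inter / union
    return dsc, iou

rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.7   # placeholder predicted vessel mask
ref = rng.random((64, 64, 64)) > 0.7    # placeholder reference vessel mask
print("DSC=%.3f IoU=%.3f" % dice_and_iou(pred, ref))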
Affiliation(s)
- Qi Sun
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
| | - Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China.
| | - Sizhe Zhao
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
| | - Chen Chen
- Northeastern University, Shenyang, Liaoning, China
| | - Yang Hou
- Department of Radiology, ShengJing Hospital of China Medical University, Shenyang, Liaoning, China
| | - Yuliang Yuan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
| | - Shuang Ma
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
| | - Yan Huang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
| |
41
Karakis R, Gurkahraman K, Mitsis GD, Boudrias MH. DEEP LEARNING PREDICTION OF MOTOR PERFORMANCE IN STROKE INDIVIDUALS USING NEUROIMAGING DATA. J Biomed Inform 2023; 141:104357. [PMID: 37031755 DOI: 10.1016/j.jbi.2023.104357] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 02/24/2023] [Accepted: 04/01/2023] [Indexed: 04/11/2023]
Abstract
The degree of motor impairment and profile of recovery after stroke are difficult to predict for each individual. Measures obtained from clinical assessments, as well as neurophysiological and neuroimaging techniques have been used as potential biomarkers of motor recovery, with limited accuracy up to date. To address this, the present study aimed to develop a deep learning model based on structural brain images obtained from stroke participants and healthy volunteers. The following inputs were used in a multi-channel 3D convolutional neural network (CNN) model: fractional anisotropy, mean diffusivity, radial diffusivity, and axial diffusivity maps obtained from Diffusion Tensor Imaging (DTI) images, white and gray matter intensity values obtained from Magnetic Resonance Imaging, as well as demographic data (e.g., age, gender). Upper limb motor function was classified into "Poor" and "Good" categories. To assess the performance of the DL model, we compared it to more standard machine learning (ML) classifiers including k-nearest neighbor, support vector machines (SVM), Decision Trees, Random Forests, Ada Boosting, and Naïve Bayes, whereby the inputs of these classifiers were the features taken from the fully connected layer of the CNN model. The highest accuracy and area under the curve values were 0.92 and 0.92 for the 3D-CNN and 0.91 and 0.91 for the SVM, respectively. The multi-channel 3D-CNN with residual blocks and SVM supported by DL was more accurate than traditional ML methods to classify upper limb motor impairment in the stroke population. These results suggest that combining volumetric DTI maps and measures of white and gray matter integrity can improve the prediction of the degree of motor impairment after stroke. Identifying the potential of recovery early on after a stroke could promote the allocation of resources to optimize the functional independence of these individuals and their quality of life.
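A minimal sketch of the multi-channel 3D CNN idea described above follows, with six volumetric inputs (FA, MD, RD, AD, white- and gray-matter maps) stacked as channels; the architecture and sizes are illustrative assumptions, not the authors' residual-block network, and the demographic inputs are omitted.
# Sketch of a multi-channel 3D CNN classifying "Poor" vs "Good" motor outcome.
import torch
import torch.nn as nn

class MultiChannel3DCNN(nn.Module):
    def __init__(self, in_channels: int = 6, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (B, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (B, 6, D, H, W)
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = MultiChannel3DCNN()
dummy = torch.randn(2, 6, 32, 32, 32)         # two placeholder subjects
print(model(dummy).shape)                     # torch.Size([2, 2])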
Affiliation(s)
- Rukiye Karakis
- Department of Software Engineering, Faculty of Technology, Sivas Cumhuriyet University, Turkey
| | - Kali Gurkahraman
- Department of Computer Engineering, Faculty of Engineering, Sivas Cumhuriyet University, Turkey
| | - Georgios D Mitsis
- Department of Bioengineering, Faculty of Engineering, McGill University, Montreal, QC, Canada
| | - Marie-Hélène Boudrias
- School of Physical and Occupational Therapy, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; BRAIN Laboratory, Jewish Rehabilitation Hospital, Site of Centre for Interdisciplinary Research of Greater Montreal (CRIR) and CISSS-Laval, QC, Canada.
| |
42
Ong W, Zhu L, Tan YL, Teo EC, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A, Hallinan JTPD. Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review. Cancers (Basel) 2023; 15:cancers15061837. [PMID: 36980722 PMCID: PMC10047175 DOI: 10.3390/cancers15061837] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 03/07/2023] [Accepted: 03/16/2023] [Indexed: 03/22/2023] Open
Abstract
An accurate diagnosis of bone tumours on imaging is crucial for appropriate and successful treatment. The advent of Artificial intelligence (AI) and machine learning methods to characterize and assess bone tumours on various imaging modalities may assist in the diagnostic workflow. The purpose of this review article is to summarise the most recent evidence for AI techniques using imaging for differentiating benign from malignant lesions, the characterization of various malignant bone lesions, and their potential clinical application. A systematic search through electronic databases (PubMed, MEDLINE, Web of Science, and clinicaltrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 34 articles were retrieved from the databases and the key findings were compiled and summarised. A total of 34 articles reported the use of AI techniques to distinguish between benign vs. malignant bone lesions, of which 12 (35.3%) focused on radiographs, 12 (35.3%) on MRI, 5 (14.7%) on CT and 5 (14.7%) on PET/CT. The overall reported accuracy, sensitivity, and specificity of AI in distinguishing between benign vs. malignant bone lesions ranges from 0.44–0.99, 0.63–1.00, and 0.73–0.96, respectively, with AUCs of 0.73–0.96. In conclusion, the use of AI to discriminate bone lesions on imaging has achieved a relatively good performance in various imaging modalities, with high sensitivity, specificity, and accuracy for distinguishing between benign vs. malignant lesions in several cohort studies. However, further research is necessary to test the clinical performance of these algorithms before they can be facilitated and integrated into routine clinical practice.
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Correspondence: ; Tel.: +65-67725207
| | - Lei Zhu
- Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
| | - Yi Liang Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
| | - Ee Chin Teo
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
| | - Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
| | - Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
| | - Balamurugan A. Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, 5 Lower Kent Ridge Road, Singapore 119074, Singapore
| | - Beng Chin Ooi
- Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
| | - Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| |
43
Chen Y, Wang Y, Song Z, Fan Y, Gao T, Tang X. Abnormal white matter changes in Alzheimer's disease based on diffusion tensor imaging: A systematic review. Ageing Res Rev 2023; 87:101911. [PMID: 36931328 DOI: 10.1016/j.arr.2023.101911] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 03/01/2023] [Accepted: 03/13/2023] [Indexed: 03/17/2023]
Abstract
Alzheimer's disease (AD) is a degenerative neurological disease in elderly individuals. Subjective cognitive decline (SCD), mild cognitive impairment (MCI) and further development to dementia (d-AD) are considered to be major stages of the progressive pathological development of AD. Diffusion tensor imaging (DTI), one of the most important modalities of MRI, can describe the microstructure of white matter through its tensor model. It is widely used in understanding the central nervous system mechanism and finding appropriate potential biomarkers for the early stages of AD. Based on the multilevel analysis methods of DTI (voxelwise, fiberwise and networkwise), we summarized that AD patients mainly showed extensive microstructural damage, structural disconnection and topological abnormalities in the corpus callosum, fornix, and medial temporal lobe, including the hippocampus and cingulum. The diffusion features and structural connectomics of specific regions can provide information for the early assisted recognition of AD. The classification accuracy of SCD and normal controls can reach 92.68% at present. And due to the further changes of brain structure and function, the classification accuracy of MCI, d-AD and normal controls can reach more than 97%. Finally, we summarized the limitations of current DTI-based AD research and propose possible future research directions.
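For reference, the scalar DTI metrics discussed above follow directly from the tensor eigenvalues; the sketch below computes fractional anisotropy (FA) and mean diffusivity (MD) from three illustrative eigenvalues (the specific values are placeholders).
# Sketch: FA and MD from DTI eigenvalues (lambda1 >= lambda2 >= lambda3).
import numpy as np

def fa_md(l1, l2, l3):
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return fa, md

# Illustrative eigenvalues for a highly anisotropic white-matter voxel (10^-3 mm^2/s);
# microstructural damage typically lowers FA and raises MD.
print(fa_md(1.7, 0.3, 0.3))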
Affiliation(s)
- Yu Chen
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
| | - Yifei Wang
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
| | - Zeyu Song
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
| | - Yingwei Fan
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
| | - Tianxin Gao
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China.
| | - Xiaoying Tang
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China; School of Life Science, Beijing Institute of Technology, Beijing 100081, China.
| |
44
Li D, Li X, Li S, Qi M, Sun X, Hu G. Relationship between the deep features of the full-scan pathological map of mucinous gastric carcinoma and related genes based on deep learning. Heliyon 2023; 9:e14374. [PMID: 36942252 PMCID: PMC10023952 DOI: 10.1016/j.heliyon.2023.e14374] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 02/28/2023] [Accepted: 03/02/2023] [Indexed: 03/11/2023] Open
Abstract
Background Long-term differential expression of disease-associated genes is a crucial driver of pathological changes in mucinous gastric carcinoma. Therefore, there should be a correlation between deep features extracted from pathology-based full-scan images using deep learning and disease-associated gene expression. This study tried to provide preliminary evidence that long-term differentially expressed (disease-associated) genes lead to subtle changes in disease pathology by exploring their correlation, and to offer new ideas for precise analysis of pathomics and combined analysis of pathomics and genomics. Methods Full pathological scans, gene sequencing data, and clinical data of patients with mucinous gastric carcinoma were downloaded from TCGA data. The VGG-16 network architecture was used to construct a binary classification model to explore the potential of VGG-16 applications and extract the deep features of the pathology-based full-scan map. Differential gene expression analysis was performed and a protein-protein interaction network was constructed to screen disease-related core genes. Differential, Lasso regression, and extensive correlation analyses were used to screen for valuable deep features. Finally, a correlation analysis was used to determine whether there was a correlation between valuable deep features and disease-related core genes. Results The accuracy of the binary classification model was 0.775 ± 0.129. A total of 24 disease-related core genes were screened, including ASPM, AURKA, AURKB, BUB1, BUB1B, CCNA2, CCNB1, CCNB2, CDCA8, CDK1, CENPF, DLGAP5, KIF11, KIF20A, KIF2C, KIF4A, MELK, PBK, RRM2, TOP2A, TPX2, TTK, UBE2C, and ZWINT. In addition, differential, Lasso regression, and extensive correlation analyses were used to screen eight valuable deep features, including features 51, 106, 109, 118, 257, 282, 326, and 487. Finally, the results of the correlation analysis suggested that valuable deep features were either positively or negatively correlated with core gene expression. Conclusion The preliminary results of this study support our hypotheses. Deep learning may be an important bridge for the joint analysis of pathomics and genomics and provides preliminary evidence for long-term abnormal expression of genes leading to subtle changes in pathology.
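The feature-screening step described above can be sketched as Lasso-based selection of deep features followed by correlation with gene expression; all arrays below are random placeholders rather than TCGA data or VGG-16 outputs, and the regularization strength is an assumption.
# Sketch: select a sparse subset of deep features with Lasso, then correlate the
# selected features with a core gene's expression.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_patients, n_deep_features = 60, 512
deep_features = rng.normal(size=(n_patients, n_deep_features))   # stand-in for VGG-16 features
label = rng.integers(0, 2, size=n_patients)                      # stand-in diagnosis label
gene_expr = rng.normal(size=n_patients)                          # stand-in core-gene expression

lasso = Lasso(alpha=0.05).fit(deep_features, label)
selected = np.flatnonzero(lasso.coef_)                            # "valuable" deep features
for idx in selected[:5]:
    r, p = pearsonr(deep_features[:, idx], gene_expr)
    print(f"feature {idx}: r={r:+.2f}, p={p:.3f}")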
Affiliation(s)
- Ding Li
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Xiaoyuan Li
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Shifang Li
- Department of Neurosurgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Mengmeng Qi
- Department of Endocrinology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Xiaowei Sun
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Guojie Hu
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| |
45
Miao X, Shao T, Wang Y, Wang Q, Han J, Li X, Li Y, Sun C, Wen J, Liu J. The value of convolutional neural networks-based deep learning model in differential diagnosis of space-occupying brain diseases. Front Neurol 2023; 14:1107957. [PMID: 36816568 PMCID: PMC9932812 DOI: 10.3389/fneur.2023.1107957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 01/16/2023] [Indexed: 02/05/2023] Open
Abstract
Objectives It is still a challenge to differentiate space-occupying brain lesions such as tumefactive demyelinating lesions (TDLs), tumefactive primary angiitis of the central nervous system (TPACNS), primary central nervous system lymphoma (PCNSL), and brain gliomas. Convolutional neural networks (CNNs) have been used to analyze complex medical data and have proven transformative for image-based applications. It can quickly acquire diseases' radiographic features and correct doctors' diagnostic bias to improve diagnostic efficiency and accuracy. The study aimed to assess the value of CNN-based deep learning model in the differential diagnosis of space-occupying brain diseases on MRI. Methods We retrospectively analyzed clinical and MRI data from 480 patients with TDLs (n = 116), TPACNS (n = 64), PCNSL (n = 150), and brain gliomas (n = 150). The patients were randomly assigned to training (n = 240), testing (n = 73), calibration (n = 96), and validation (n = 71) groups. And a CNN-implemented deep learning model guided by clinical experts was developed to identify the contrast-enhanced T1-weighted sequence lesions of these four diseases. We utilized accuracy, sensitivity, specificity, and area under the curve (AUC) to evaluate the performance of the CNN model. The model's performance was then compared to the neuroradiologists' diagnosis. Results The CNN model had a total accuracy of 87% which was higher than senior neuroradiologists (74%), and the AUC of TDLs, PCNSL, TPACNS and gliomas were 0.92, 0.92, 0.89 and 0.88, respectively. Conclusion The CNN model can accurately identify specific radiographic features of TDLs, TPACNS, PCNSL, and gliomas. It has the potential to be an effective auxiliary diagnostic tool in the clinic, assisting inexperienced clinicians in reducing diagnostic bias and improving diagnostic efficiency.
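The per-class AUC evaluation reported above corresponds to a one-vs-rest analysis over the four diagnoses; in the sketch below the predicted probabilities and labels are random placeholders, not the study's model outputs.
# Sketch: one-vs-rest AUC per diagnosis plus overall accuracy.
import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["TDL", "TPACNS", "PCNSL", "glioma"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=71)                       # e.g. a 71-case validation set
probs = rng.dirichlet(np.ones(4), size=71)                 # softmax-like outputs per case

for k, name in enumerate(classes):
    auc = roc_auc_score((y_true == k).astype(int), probs[:, k])
    print(f"{name}: AUC = {auc:.2f}")
print("accuracy =", (probs.argmax(axis=1) == y_true).mean())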
Affiliation(s)
- Xiuling Miao
- Department of Neurology, School of Medicine, South China University of Technology, Guangzhou, China
- Department of Neurology, The Sixth Medical Center of PLA General Hospital of Beijing, Beijing, China
| | - Tianyu Shao
- School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Yaming Wang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
| | - Qingjun Wang
- Department of Radiology, The Sixth Medical Center of PLA General Hospital of Beijing, Beijing, China
| | - Jing Han
- Department of Neurology, School of Medicine, South China University of Technology, Guangzhou, China
| | - Xinnan Li
- Department of Neurology, The Sixth Medical Center of PLA General Hospital of Beijing, Beijing, China
| | - Yuxin Li
- Department of Neurology, The Sixth Medical Center of PLA General Hospital of Beijing, Beijing, China
| | - Chenjing Sun
- Department of Neurology, The Sixth Medical Center of PLA General Hospital of Beijing, Beijing, China
| | - Junhai Wen
- School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Jianguo Liu
- Department of Neurology, School of Medicine, South China University of Technology, Guangzhou, China
- Department of Neurology, The Sixth Medical Center of PLA General Hospital of Beijing, Beijing, China
| |
46
Duong MT, Rudie JD, Mohan S. Neuroimaging Patterns of Intracranial Infections: Meningitis, Cerebritis, and Their Complications. Neuroimaging Clin N Am 2023; 33:11-41. [PMID: 36404039 PMCID: PMC10904173 DOI: 10.1016/j.nic.2022.07.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Neuroimaging provides rapid, noninvasive visualization of central nervous system infections for optimal diagnosis and management. Generalizable and characteristic imaging patterns help radiologists distinguish different types of intracranial infections including meningitis and cerebritis from a variety of bacterial, viral, fungal, and/or parasitic causes. Here, we describe key radiologic patterns of meningeal enhancement and diffusion restriction through profiles of meningitis, cerebritis, abscess, and ventriculitis. We discuss various imaging modalities and recent diagnostic advances such as deep learning through a survey of intracranial pathogens and their radiographic findings. Moreover, we explore critical complications and differential diagnoses of intracranial infections.
Affiliation(s)
- Michael Tran Duong
- Division of Neuroradiology, Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
| | - Jeffrey D Rudie
- Department of Radiology, Scripps Clinic and University of California San Diego, 10666 Torrey Pines Road, La Jolla, CA 92037, USA
| | - Suyash Mohan
- Division of Neuroradiology, Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA.
| |
47
Role of Ensemble Deep Learning for Brain Tumor Classification in Multiple Magnetic Resonance Imaging Sequence Data. Diagnostics (Basel) 2023; 13:diagnostics13030481. [PMID: 36766587 PMCID: PMC9914433 DOI: 10.3390/diagnostics13030481] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 01/24/2023] [Accepted: 01/26/2023] [Indexed: 01/31/2023] Open
Abstract
The biopsy is a gold standard method for tumor grading. However, due to its invasive nature, it has sometimes proved fatal for brain tumor patients. As a result, a non-invasive computer-aided diagnosis (CAD) tool is required. Recently, many magnetic resonance imaging (MRI)-based CAD tools have been proposed for brain tumor grading. The MRI has several sequences, which can express tumor structure in different ways. However, a suitable MRI sequence for brain tumor classification is not yet known. The most common brain tumor is 'glioma', which is the most fatal form. Therefore, in the proposed study, to maximize the classification ability between low-grade versus high-grade glioma, three datasets were designed comprising three MRI sequences: T1-Weighted (T1W), T2-weighted (T2W), and fluid-attenuated inversion recovery (FLAIR). Further, five well-established convolutional neural networks, AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50 were adopted for tumor classification. An ensemble algorithm was proposed using the majority vote of above five deep learning (DL) models to produce more consistent and improved results than any individual model. Five-fold cross validation (K5-CV) protocol was adopted for training and testing. For the proposed ensembled classifier with K5-CV, the highest test accuracies of 98.88 ± 0.63%, 97.98 ± 0.86%, and 94.75 ± 0.61% were achieved for FLAIR, T2W, and T1W-MRI data, respectively. FLAIR-MRI data was found to be most significant for brain tumor classification, where it showed a 4.17% and 0.91% improvement in accuracy against the T1W-MRI and T2W-MRI sequence data, respectively. The proposed ensembled algorithm (MajVot) showed significant improvements in the average accuracy of three datasets of 3.60%, 2.84%, 1.64%, 4.27%, and 1.14%, respectively, against AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50.
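The majority-vote ensembling (MajVot) described above reduces to taking, for each case, the most frequent class among the five networks' predictions; the sketch below uses placeholder predictions rather than outputs of the trained AlexNet/VGG16/ResNet18/GoogleNet/ResNet50 models.
# Sketch: majority vote over per-model class predictions.
import numpy as np

def majority_vote(per_model_preds: np.ndarray) -> np.ndarray:
    """per_model_preds: (n_models, n_cases) integer class labels."""
    n_classes = per_model_preds.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, per_model_preds, minlength=n_classes)
    return votes.argmax(axis=0)              # winning class per case

rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=(5, 10))      # 5 models, 10 cases, e.g. LGG=0 / HGG=1
print(majority_vote(preds))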
48
Martucci M, Russo R, Schimperna F, D’Apolito G, Panfili M, Grimaldi A, Perna A, Ferranti AM, Varcasia G, Giordano C, Gaudino S. Magnetic Resonance Imaging of Primary Adult Brain Tumors: State of the Art and Future Perspectives. Biomedicines 2023; 11:364. [PMID: 36830900 PMCID: PMC9953338 DOI: 10.3390/biomedicines11020364] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 01/20/2023] [Accepted: 01/22/2023] [Indexed: 01/28/2023] Open
Abstract
MRI is undoubtedly the cornerstone of brain tumor imaging, playing a key role in all phases of patient management, starting from diagnosis, through therapy planning, to treatment response and/or recurrence assessment. Currently, neuroimaging can describe morphologic and non-morphologic (functional, hemodynamic, metabolic, cellular, microstructural, and sometimes even genetic) characteristics of brain tumors, greatly contributing to diagnosis and follow-up. Knowing the technical aspects, strength and limits of each MR technique is crucial to correctly interpret MR brain studies and to address clinicians to the best treatment strategy. This article aimed to provide an overview of neuroimaging in the assessment of adult primary brain tumors. We started from the basilar role of conventional/morphological MR sequences, then analyzed, one by one, the non-morphological techniques, and finally highlighted future perspectives, such as radiomics and artificial intelligence.
Affiliation(s)
- Matia Martucci
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
| | - Rosellina Russo
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
| | | | - Gabriella D’Apolito
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
| | - Marco Panfili
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
| | - Alessandro Grimaldi
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
| | - Alessandro Perna
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
| | | | - Giuseppe Varcasia
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
| | - Carolina Giordano
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
| | - Simona Gaudino
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
| |
49
Diffusion Tensor Imaging in Amyotrophic Lateral Sclerosis: Machine Learning for Biomarker Development. Int J Mol Sci 2023; 24:ijms24031911. [PMID: 36768231 PMCID: PMC9915541 DOI: 10.3390/ijms24031911] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 01/11/2023] [Accepted: 01/16/2023] [Indexed: 01/21/2023] Open
Abstract
Diffusion tensor imaging (DTI) allows the in vivo imaging of pathological white matter alterations, either with unbiased voxel-wise or hypothesis-guided tract-based analysis. Alterations of diffusion metrics are indicative of the cerebral status of patients with amyotrophic lateral sclerosis (ALS) at the individual level. Using machine learning (ML) models to analyze complex and high-dimensional neuroimaging data sets, new opportunities for DTI-based biomarkers in ALS arise. This review aims to summarize how different ML models based on DTI parameters can be used for supervised diagnostic classifications and to provide individualized patient stratification with unsupervised approaches in ALS. To capture the whole spectrum of neuropathological signatures, DTI might be combined with additional modalities, such as structural T1w 3-D MRI in ML models. To further improve the power of ML in ALS and enable the application of deep learning models, standardized DTI protocols and multi-center collaborations are needed to validate multimodal DTI biomarkers. The application of ML models to multiparametric MRI/multimodal DTI-based data sets will enable a detailed assessment of neuropathological signatures in patients with ALS and the development of novel neuroimaging biomarkers that could be used in the clinical workup.
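A supervised DTI-based classification of the kind the review discusses can be sketched as an SVM separating ALS from controls on tract-averaged diffusion features; the feature table and labels below are random placeholders, and the injected group difference is purely illustrative.
# Sketch: cross-validated SVM classification on tract-based DTI features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_tract_features = 80, 40          # e.g. FA and MD averaged in 20 tracts
X = rng.normal(size=(n_subjects, n_tract_features))
y = rng.integers(0, 2, size=n_subjects)        # 0 = control, 1 = ALS (placeholder labels)
X[y == 1, 0] -= 0.8                            # pretend FA of one tract is reduced in ALS

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f ± %.2f" % (scores.mean(), scores.std()))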
50
Lee KY, Liu CC, Chen DYT, Weng CL, Chiu HW, Chiang CH. Automatic detection and vascular territory classification of hyperacute staged ischemic stroke on diffusion weighted image using convolutional neural networks. Sci Rep 2023; 13:404. [PMID: 36624122 PMCID: PMC9829896 DOI: 10.1038/s41598-023-27621-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 01/04/2023] [Indexed: 01/11/2023] Open
Abstract
Automated ischemic stroke detection and classification according to its vascular territory is an essential step in stroke image evaluation, especially at hyperacute stage where mechanical thrombectomy may improve patients' outcome. This study aimed to evaluate the performance of various convolutional neural network (CNN) models on hyperacute staged diffusion-weighted images (DWI) for detection of ischemic stroke and classification into anterior circulation infarct (ACI), posterior circulation infarct (PCI) and normal image slices. In this retrospective study, 253 cases of hyperacute staged DWI were identified, downloaded and reviewed. After exclusion, DWI from 127 cases were used and we created a dataset containing total of 2119 image slices, and separates it into three groups, namely ACI (618 slices), PCI (149 slices) and normal (1352 slices). Two transfer learning based CNN models, namely Inception-v3, EfficientNet-b0 and one self-derived modified LeNet model were used. The performance of the models was evaluated and activation maps using gradient-weighted class activation mapping (Grad-Cam) technique were made. Inception-v3 had the best overall accuracy (86.3%), weighted F1 score (86.2%) and kappa score (0.715), followed by the modified LeNet (85.2% accuracy, 84.7% weighted F1 score and 0.693 kappa score). The EfficientNet-b0 had the poorest performance of 83.6% accuracy, 83% weighted F1 score and 0.662 kappa score. The activation map showed that one possible explanation for misclassification is due to susceptibility artifact. A sufficiently high performance can be achieved by using CNN model to detect ischemic stroke on hyperacute staged DWI and classify it according to vascular territory.
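The transfer-learning setup described above amounts to replacing the final layer of a pretrained backbone with a three-class head (ACI / PCI / normal); the sketch below shows this for Inception-v3 using the torchvision >= 0.13 API, with weights=None to avoid a download, whereas the study started from pretrained weights.
# Sketch: swap the classification heads of Inception-v3 for a 3-class DWI task.
import torch
import torch.nn as nn
from torchvision.models import inception_v3

model = inception_v3(weights=None)                       # use IMAGENET1K_V1 weights in practice
model.fc = nn.Linear(model.fc.in_features, 3)            # main head: ACI / PCI / normal
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 3)  # auxiliary head used during training

model.eval()                                             # eval mode returns only the main logits
with torch.no_grad():
    logits = model(torch.randn(1, 3, 299, 299))          # DWI slice replicated to 3 channels
print(logits.shape)                                      # torch.Size([1, 3])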
Affiliation(s)
- Kun-Yu Lee
- Department of Medical Image, Shuang Ho Hospital, Taipei Medical University, No. 291, Zhongzheng Road, Zhonghe District, New Taipei City, 23561 Taiwan, ROC; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, No. 250 Wu-Hsing Street, Taipei City, 11031 Taiwan, ROC
| | - Chia-Chuan Liu
- Department of Medical Image, Shuang Ho Hospital, Taipei Medical University, No. 291, Zhongzheng Road, Zhonghe District, New Taipei City, 23561 Taiwan, ROC; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, No. 250 Wu-Hsing Street, Taipei City, 11031 Taiwan, ROC
| | - David Yen-Ting Chen
- Department of Medical Image, Shuang Ho Hospital, Taipei Medical University, No. 291, Zhongzheng Road, Zhonghe District, New Taipei City, 23561 Taiwan, ROC; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, No. 250 Wu-Hsing Street, Taipei City, 11031 Taiwan, ROC
| | - Chi-Lun Weng
- Department of Radiology, Ditmanson Medical Foundation Chia-Yi Christian Hospital, No. 539, Zhongxiao Rd., East Dist., Chiayi City, 600566 Taiwan, ROC
| | - Hung-Wen Chiu
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, No. 250 Wu-Hsing Street, Taipei City, 11031 Taiwan, ROC
| | - Chen-Hua Chiang
- Department of Medical Image, Shuang Ho Hospital, Taipei Medical University, No. 291, Zhongzheng Road, Zhonghe District, New Taipei City, 23561, Taiwan, ROC; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, No. 250 Wu-Hsing Street, Taipei City, 11031, Taiwan, ROC.
| |