1
Yangi K, Hong J, Gholami AS, On TJ, Reed AG, Puppalla P, Chen J, Calderon Valero CE, Xu Y, Li B, Santello M, Lawton MT, Preul MC. Deep learning in neurosurgery: a systematic literature review with a structured analysis of applications across subspecialties. Front Neurol 2025;16:1532398. PMID: 40308224; PMCID: PMC12040697; DOI: 10.3389/fneur.2025.1532398. Open access.
Abstract
Objective This study systematically reviewed deep learning (DL) applications in neurosurgical practice to provide a comprehensive understanding of DL in neurosurgery. The review process included a systematic overview of recent developments in DL technologies, an examination of the existing literature on their applications in neurosurgery, and insights into the future of neurosurgery. The study also summarized the most widely used DL algorithms, their specific applications in neurosurgical practice, their limitations, and future directions. Materials and methods An advanced search using medical subject heading terms was conducted in Medline (via PubMed), Scopus, and Embase databases restricted to articles published in English. Two independent neurosurgically experienced reviewers screened selected articles. Results A total of 456 articles were initially retrieved. After screening, 162 were found eligible and included in the study. Reference lists of all 162 articles were checked, and 19 additional articles were found eligible and included in the study. The 181 included articles were divided into 6 categories according to the subspecialties: general neurosurgery (n = 64), neuro-oncology (n = 49), functional neurosurgery (n = 32), vascular neurosurgery (n = 17), neurotrauma (n = 9), and spine and peripheral nerve (n = 10). The leading procedures in which DL algorithms were most commonly used were deep brain stimulation and subthalamic and thalamic nuclei localization (n = 24) in the functional neurosurgery group; segmentation, identification, classification, and diagnosis of brain tumors (n = 29) in the neuro-oncology group; and neuronavigation and image-guided neurosurgery (n = 13) in the general neurosurgery group. Apart from various video and image datasets, computed tomography, magnetic resonance imaging, and ultrasonography were the most frequently used datasets to train DL algorithms in all groups overall (n = 79). 
Although there were few studies involving DL applications in neurosurgery in 2016, research interest began to increase in 2019 and has continued to grow in the 2020s. Conclusion DL algorithms can enhance neurosurgical practice by improving surgical workflows, real-time monitoring, diagnostic accuracy, outcome prediction, volumetric assessment, and neurosurgical education. However, their integration into neurosurgical practice involves challenges and limitations. Future studies should focus on refining DL models with a wide variety of datasets, developing effective implementation techniques, and assessing their effect on time and cost efficiency.
Affiliation(s)
- Kivanc Yangi
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Jinpyo Hong
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Arianna S. Gholami
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Thomas J. On
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Alexander G. Reed
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Pravarakhya Puppalla
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Jiuxu Chen
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, United States
- Carlos E. Calderon Valero
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Yuan Xu
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Baoxin Li
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, United States
- Marco Santello
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
- Michael T. Lawton
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Mark C. Preul
- The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
2
Ouachikh O, Chaix R, Sontheimer A, Coste J, Aider OA, Dautkulova A, Abdelouahab K, Hafidi A, Salah MB, Pereira B, Lemaire JJ. Brain color-coded diffusion imaging: Utility of ACPC reorientation of gradients in healthy subjects and patients. Comput Methods Programs Biomed 2024;257:108449. PMID: 39378632; DOI: 10.1016/j.cmpb.2024.108449.
Abstract
BACKGROUND AND OBJECTIVE The common structural interpretation of diffusion color-encoded (DCE) maps assumes that the brain is aligned with the gradients of the MRI machine. This is seldom achieved in practice, leading to incorrect red (R), green (G), and blue (B) DCE values for the expected orientation of fiber bundles. We studied the effect of virtual reorientation of gradients according to the anterior commissure-posterior commissure (ACPC) system on the RGB derivatives. METHODS We measured the mean ± standard deviation of the average, standard deviation, skewness, and kurtosis of the RGB derivatives, before (rO) and after (acpcO) gradient reorientation, in one healthy-subject group with the head routinely positioned (HS-routine) and in two patient groups, one with essential tremor (ET-Opti) and one with Parkinson's disease (PD-Opti), whose head positions were optimized according to ACPC before acquisition. We studied the pitch, roll, and yaw angles of reorientation, and we compared the rO and acpcO conditions and the groups (ad hoc statistics). RESULTS Pitch (maximal in the HS-routine group) was greater than roll and yaw. After reorientation of gradients, in the HS-routine group, the DCE average increased, and the standard deviation, skewness, and kurtosis decreased; the R, G, and B averages increased, and R and B skewness and kurtosis decreased. By contrast, in the ET-Opti and PD-Opti groups, the R, G, and B average and standard deviation increased, and skewness and kurtosis decreased. In both the rO and acpcO conditions, the ET-Opti and PD-Opti groups showed higher average and standard deviation and lower skewness and kurtosis. CONCLUSIONS DCE map interpretability depends on brain orientation. Reorientation realigns gradients with the anatomic and physiologic position of the head and brain, as exemplified.
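The gradient-reorientation idea in the abstract above can be illustrated with a minimal numerical sketch. This is not the authors' pipeline: the rotation composition, the 20° pitch angle, and the single fiber direction below are illustrative assumptions, and a real workflow would rotate the diffusion b-vectors before tensor fitting rather than a lone direction.

```python
import numpy as np

def rotation_matrix(pitch, roll, yaw):
    """Compose a rotation from pitch (about x), roll (about y), and yaw (about z), in radians."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(roll), np.sin(roll)
    cz, sz = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def dce_rgb(direction, fa):
    """Standard DCE color convention: R=|x|, G=|y|, B=|z|, weighted by fractional anisotropy."""
    return fa * np.abs(direction)

# A direction that codes as pure blue (supero-inferior) in scanner axes,
# acquired with the head pitched 20 degrees away from ACPC alignment.
R = rotation_matrix(np.deg2rad(20), 0.0, 0.0)
v_scanner = np.array([0.0, 0.0, 1.0])
v_acpc = R @ v_scanner                 # same direction expressed in the ACPC frame
print(dce_rgb(v_scanner, 0.7))         # pure blue in scanner axes
print(dce_rgb(v_acpc, 0.7))            # a green component appears after reorientation
```

The second print shows a green (antero-posterior) component appearing once the pitch is accounted for, which is exactly the kind of RGB shift the study quantifies.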
Affiliation(s)
- Omar Ouachikh
- Université Clermont Auvergne, CNRS, CHU Clermont-Ferrand, Clermont Auvergne INP, Institut Pascal, F-63000 Clermont-Ferrand, France; Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Remi Chaix
- Université Clermont Auvergne, CNRS, CHU Clermont-Ferrand, Clermont Auvergne INP, Institut Pascal, F-63000 Clermont-Ferrand, France; Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Anna Sontheimer
- Université Clermont Auvergne, CNRS, CHU Clermont-Ferrand, Clermont Auvergne INP, Institut Pascal, F-63000 Clermont-Ferrand, France; Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Jerome Coste
- Université Clermont Auvergne, CNRS, CHU Clermont-Ferrand, Clermont Auvergne INP, Institut Pascal, F-63000 Clermont-Ferrand, France; Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Omar Ait Aider
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Aigerim Dautkulova
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Kamel Abdelouahab
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Aziz Hafidi
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Maha Ben Salah
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Bruno Pereira
- Direction de la Recherche Clinique et de l'Innovation, CHU Clermont-Ferrand, F-63000 Clermont-Ferrand, France
- Jean-Jacques Lemaire
- Université Clermont Auvergne, CNRS, CHU Clermont-Ferrand, Clermont Auvergne INP, Institut Pascal, F-63000 Clermont-Ferrand, France; Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
3
Matthew J, Uus A, Egloff Collado A, Luis A, Arulkumaran S, Fukami-Gartner A, Kyriakopoulou V, Cromb D, Wright R, Colford K, Deprez M, Hutter J, O’Muircheartaigh J, Malamateniou C, Razavi R, Story L, Hajnal JV, Rutherford MA. Automated craniofacial biometry with 3D T2w fetal MRI. PLOS Digit Health 2024;3:e0000663. PMID: 39774200; PMCID: PMC11684610; DOI: 10.1371/journal.pdig.0000663.
Abstract
OBJECTIVES Evaluating craniofacial phenotype-genotype correlations prenatally is increasingly important; however, it is subjective and challenging with 3D ultrasound. We developed an automated label propagation pipeline using 3D motion-corrected, slice-to-volume reconstructed (SVR) fetal MRI for craniofacial measurements. METHODS A literature review and expert consensus identified 31 craniofacial biometrics for fetal MRI. An MRI atlas with defined anatomical landmarks served as a template for subject registration, auto-labelling, and biometric calculation. We assessed 108 healthy controls and 24 fetuses with Down syndrome (T21) in the third trimester (29-36 weeks gestational age, GA) to identify meaningful biometrics in T21. Reliability and reproducibility were evaluated by four observers in 10 randomly selected datasets. RESULTS Automated labels were produced for all 132 subjects with a 0.3% placement error rate. Seven measurements, including anterior base of skull length and maxillary length, showed significant differences with large effect sizes between the T21 and control groups (ANOVA, p<0.001). Manual measurement took 25-35 minutes per case, whereas automated extraction took approximately 5 minutes. Bland-Altman plots showed agreement within manual observer ranges for all measurements except mandibular width, which had higher variability. Extended GA growth charts (19-39 weeks), based on 280 control fetuses, were produced for future research. CONCLUSION This is the first automated atlas-based protocol using 3D SVR MRI for fetal craniofacial biometrics, and it accurately revealed morphological craniofacial differences in a T21 cohort. Future work should focus on improving measurement reliability, larger clinical cohorts, and technical advancements to enhance prenatal care and phenotypic characterisation.
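The atlas-based label propagation step can be sketched in a few lines. This is a toy illustration, not the published pipeline: the landmark names, coordinates, and affine transform below are invented stand-ins for a real atlas and registration result, and actual SVR pipelines register full image volumes, typically nonlinearly.

```python
import numpy as np

# Hypothetical landmark coordinates (mm) in atlas space; names are illustrative.
atlas_landmarks = {
    "nasion": np.array([0.0, 55.0, 10.0]),
    "sella":  np.array([0.0, 20.0, 25.0]),
}

def propagate(landmarks, affine):
    """Map atlas landmarks into subject space with a 4x4 affine (a registration output)."""
    out = {}
    for name, p in landmarks.items():
        ph = np.append(p, 1.0)          # homogeneous coordinates
        out[name] = (affine @ ph)[:3]
    return out

def biometric(landmarks, a, b):
    """A linear biometric: Euclidean distance between two propagated landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# A uniform 1.1 scale plus a translation, standing in for a real
# atlas-to-subject registration result.
affine = np.diag([1.1, 1.1, 1.1, 1.0])
affine[:3, 3] = [2.0, -3.0, 1.0]
subject = propagate(atlas_landmarks, affine)
print(round(biometric(subject, "nasion", "sella"), 2))  # → 41.89
```

Note that the translation cancels in the distance, so the biometric scales with the registration's scale factor; this is why accurate registration is the critical step before automated measurement.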
Affiliation(s)
- Jacqueline Matthew
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Alena Uus
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Alexia Egloff Collado
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Aysha Luis
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Sophie Arulkumaran
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Abi Fukami-Gartner
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Vanessa Kyriakopoulou
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Daniel Cromb
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Robert Wright
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Kathleen Colford
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Maria Deprez
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Jana Hutter
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Smart Imaging Lab, Radiological Institute, University Hospital Erlangen, Erlangen, Germany
- Jonathan O’Muircheartaigh
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Christina Malamateniou
- Division of Midwifery and Radiography, City University of London, London, United Kingdom
- Reza Razavi
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Lisa Story
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Joseph V. Hajnal
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Mary A. Rutherford
- Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King’s College London, St Thomas’ Hospital, London, United Kingdom
- Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
4
Patnaik A, Guruprasad N, Sekar A, Bansal S, Sahu RN. An observational comparative study to evaluate the use of image-guided surgery in the management and outcome of supratentorial intracranial space-occupying lesions. J Pharm Bioallied Sci 2024;16:S589-S591. PMID: 38595518; PMCID: PMC11001000; DOI: 10.4103/jpbs.jpbs_881_23. Open access.
Abstract
Objectives The objective of this article is to study the effect of neuronavigation on the outcome of surgery for supratentorial tumors, including the extent of resection, craniotomy size, and overall morbidity and mortality, by comparison with conventional excision. Methods A total of 50 patients undergoing intracranial surgery for supratentorial space-occupying lesions from 2020 to 2022 were included in the study. The intervention group comprised patients undergoing surgical resection of supratentorial tumors with image guidance; the control group comprised patients undergoing surgical excision of supratentorial tumors without image guidance. Parameters used to compare outcomes were the extent of resection of the lesions, craniotomy size, and overall morbidity and mortality. Results and Conclusion The use of neuronavigation produced no significant reduction in craniotomy size and no significant prolongation of operative duration. There was no significant difference in postoperative hospital stay between the two groups. Neuronavigation-assisted cases did not show any significant reduction in the occurrence of postoperative neurological deficits or in overall morbidity and mortality.
Affiliation(s)
- Ashis Patnaik
- Department of Neurosurgery, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- N Guruprasad
- Department of Neurosurgery, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Arunkumar Sekar
- Department of Neurosurgery, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Sumit Bansal
- Department of Neurosurgery, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Rabi N. Sahu
- Department of Neurosurgery, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
5
Schonfeld E, Veeravagu A. Demonstrating the successful application of synthetic learning in spine surgery for training multi-center models with increased patient privacy. Sci Rep 2023;13:12481. PMID: 37528216; PMCID: PMC10393976; DOI: 10.1038/s41598-023-39458-y. Open access.
Abstract
From real-time tumor classification to operative outcome prediction, applications of machine learning to neurosurgery are powerful. However, the translation of many of these applications is restricted by the lack of "big data" in neurosurgery. Important restrictions on patient privacy and the sharing of imaging data reduce the diversity of the datasets used to train the resulting models and therefore limit generalizability. Synthetic learning is a recent development in machine learning that generates synthetic data from real data and uses the synthetic data to train downstream models while preserving patient privacy. Such an approach had yet to be successfully demonstrated in the spine surgery domain. Spine radiographs were collected from the VinDR-SpineXR dataset, with 1470 labeled as abnormal and 2303 labeled as normal. A conditional generative adversarial network (GAN) was trained on the radiographs to generate a spine radiograph and a normal/abnormal label. A modified conditional GAN (SpineGAN) was trained on the same task. A convolutional neural network (CNN) was trained on the real data to label abnormal radiographs. Separate CNNs were trained to label abnormal radiographs using synthetic images from the GAN and, in a second experiment, from SpineGAN. Using the real radiographs, an AUC of 0.856 was achieved in abnormality classification. Training on synthetic data generated by the standard GAN (AUC of 0.814) and by SpineGAN (AUC of 0.830) resulted in similar classifier performance. SpineGAN generated images with a higher FID and lower precision scores but with higher recall and increased performance when used for synthetic learning. The successful application of synthetic learning was demonstrated in the spine surgery domain for the classification of spine radiographs as abnormal or normal. A modified domain-relevant GAN is introduced for the generation of spine images, demonstrating the importance of domain-relevant generation techniques in synthetic learning. Synthetic learning can allow neurosurgery to use larger and more diverse patient imaging sets to train more generalizable algorithms with greater patient privacy.
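The synthetic-learning workflow described above (train a generator on real data, then train the downstream classifier only on generated samples) can be sketched with a toy stand-in. A class-conditional Gaussian replaces the conditional GAN here, and the features, class means, and sample sizes are invented for illustration; only the train-on-synthetic, evaluate-on-real pattern matches the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for real labeled imaging features (2 classes: normal=0, abnormal=1).
X_real = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y_real = np.array([0] * 200 + [1] * 200)

# "Generator": a class-conditional Gaussian fitted to the real data. A real
# pipeline would train a conditional GAN here; the privacy idea is the same --
# only samples drawn from the generator, never the real scans, leave the site.
def fit_generator(X, y):
    return {c: (X[y == c].mean(0), X[y == c].std(0)) for c in np.unique(y)}

def sample(gen, n_per_class):
    Xs, ys = [], []
    for c, (mu, sd) in gen.items():
        Xs.append(rng.normal(mu, sd, (n_per_class, len(mu))))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

# Downstream model trained ONLY on synthetic samples, evaluated on real data.
X_syn, y_syn = sample(fit_generator(X_real, y_real), 500)
centroids = {c: X_syn[y_syn == c].mean(0) for c in (0, 1)}
pred = np.array([min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                 for x in X_real])
print(f"accuracy on real data: {(pred == y_real).mean():.2f}")
```

With well-separated classes, the classifier trained on synthetic data recovers nearly all of the real-data performance, mirroring the small AUC gap the paper reports between real and synthetic training.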
Affiliation(s)
- Ethan Schonfeld
- Neurosurgery Artificial Intelligence Lab, Stanford University School of Medicine, Stanford, CA, USA.
- Anand Veeravagu
- Neurosurgery Artificial Intelligence Lab, Stanford University School of Medicine, Stanford, CA, USA.
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, USA.