1
Hafeez Y, Memon K, AL-Quraishi MS, Yahya N, Elferik S, Ali SSA. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It. Diagnostics (Basel) 2025; 15:168. [PMID: 39857052 PMCID: PMC11764244 DOI: 10.3390/diagnostics15020168]
Abstract
Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room even for the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for differential diagnoses of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using host databases.
We also present medical domain experts' opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies have also been discussed. In addition, the opinions of seven medical experts from around the world have been presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current CAD research was observed to be focused on the enhancement of the performance accuracies of the DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough and human professor-like explanations would be required to build the trust of healthcare professionals. Special attention to these factors along with the legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice.
Affiliation(s)
- Yasir Hafeez
- Faculty of Science and Engineering, University of Nottingham, Jalan Broga, Semenyih 43500, Selangor Darul Ehsan, Malaysia
- Khuhed Memon
- Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
- Maged S. AL-Quraishi
- Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
- Norashikin Yahya
- Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
- Sami Elferik
- Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
- Syed Saad Azhar Ali
- Aerospace Engineering Department and Interdisciplinary Research Center for Smart Mobility and Logistics, and Interdisciplinary Research Center Aviation and Space Exploration, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
2
Bartnik A, Singh S, Sum C, Smith M, Bergsland N, Zivadinov R, Dwyer MG. An Automated Tool to Classify and Transform Unstructured MRI Data into BIDS Datasets. Neuroinformatics 2024; 22:229-238. [PMID: 38530566 DOI: 10.1007/s12021-024-09659-5]
Abstract
The increasing use of neuroimaging in clinical research has driven the creation of many large imaging datasets. However, these datasets often rely on inconsistent naming conventions in image file headers to describe acquisition, and time-consuming manual curation is necessary. Therefore, we sought to automate the process of classifying and organizing magnetic resonance imaging (MRI) data according to acquisition types common to the clinical routine, as well as automate the transformation of raw, unstructured images into Brain Imaging Data Structure (BIDS) datasets. To do this, we trained an XGBoost model to classify MRI acquisition types using relatively few acquisition parameters that are automatically stored by the MRI scanner in image file metadata, which are then mapped to the naming conventions prescribed by BIDS to transform the input images to the BIDS structure. The model recognizes MRI types with 99.475% accuracy, as well as a micro/macro-averaged precision of 0.9995/0.994, a micro/macro-averaged recall of 0.9995/0.989, and a micro/macro-averaged F1 of 0.9995/0.991. Our approach accurately and quickly classifies MRI types and transforms unstructured data into standardized structures with little-to-no user intervention, reducing the barrier of entry for clinical scientists and increasing the accessibility of existing neuroimaging data.
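The recipe above — classify each series from a handful of scanner-written header parameters, then rename it according to BIDS conventions — can be sketched roughly as follows. The header fields, threshold rules, and `to_bids_path` mapping here are illustrative assumptions standing in for the authors' trained XGBoost model, not their actual implementation:

```python
# Sketch: map a predicted MRI acquisition type to a BIDS-style filename.
# The classification step is stubbed with crude TR/TE/TI rules; the paper
# instead uses an XGBoost model trained on header-derived features.

BIDS_NAMES = {            # illustrative subset of BIDS datatypes/suffixes
    "T1w": ("anat", "T1w"),
    "T2w": ("anat", "T2w"),
    "FLAIR": ("anat", "FLAIR"),
}

def classify_stub(meta: dict) -> str:
    """Placeholder classifier: toy rules on acquisition parameters (ms)."""
    if meta["InversionTime"] and meta["EchoTime"] > 80:
        return "FLAIR"
    return "T1w" if meta["RepetitionTime"] < 800 else "T2w"

def to_bids_path(subject: str, session: str, meta: dict) -> str:
    """Build a BIDS-conformant relative path for the classified series."""
    datatype, suffix = BIDS_NAMES[classify_stub(meta)]
    return (f"sub-{subject}/ses-{session}/{datatype}/"
            f"sub-{subject}_ses-{session}_{suffix}.nii.gz")

print(to_bids_path("01", "01",
                   {"RepetitionTime": 500, "EchoTime": 10, "InversionTime": None}))
# sub-01/ses-01/anat/sub-01_ses-01_T1w.nii.gz
```

The appeal of this design is that the features (TR, TE, TI, and similar) are written by the scanner itself, so the pipeline does not depend on free-text series descriptions.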
Affiliation(s)
- Alexander Bartnik, Sujal Singh, Conan Sum, Mackenzie Smith, Niels Bergsland, Robert Zivadinov, Michael G Dwyer
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA
3
Ali SSA. Brain MRI sequence and view plane identification using deep learning. Front Neuroinform 2024; 18:1373502. [PMID: 38716062 PMCID: PMC11074364 DOI: 10.3389/fninf.2024.1373502]
Abstract
Brain magnetic resonance imaging (MRI) scans are available in a wide variety of sequences, view planes, and magnet strengths. A necessary preprocessing step for any automated diagnosis is to identify the MRI sequence, view plane, and magnet strength of the acquired image. Automatic identification of the MRI sequence can be useful in labeling massive online datasets used by data scientists in the design and development of computer aided diagnosis (CAD) tools. This paper presents a deep learning (DL) approach for brain MRI sequence and view plane identification using scans of different data types as input. A 12-class classification system is presented for commonly used MRI scans, including T1-weighted, T2-weighted, proton density (PD), and fluid attenuated inversion recovery (FLAIR) sequences in axial, coronal, and sagittal view planes. Multiple online publicly available datasets have been used to train the system, with multiple infrastructures. MobileNet-v2 offers an adequate performance accuracy of 99.76% with unprocessed MRI scans and a comparable accuracy with skull-stripped scans and has been deployed in a tool for public use. The tool has been tested on unseen data from online and hospital sources with a satisfactory performance accuracy of 99.84% and 86.49%, respectively.
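The 12-class label space described in this abstract is simply the cross product of the four sequence types and three view planes; a minimal sketch of its construction (the label naming scheme is an assumption):

```python
from itertools import product

sequences = ["T1", "T2", "PD", "FLAIR"]       # sequence types from the abstract
planes = ["axial", "coronal", "sagittal"]     # view planes from the abstract

# 4 sequences x 3 planes = 12 classes
classes = [f"{seq}_{plane}" for seq, plane in product(sequences, planes)]
print(len(classes))   # 12
print(classes[0])     # T1_axial
```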
Affiliation(s)
- Syed Saad Azhar Ali
- Aerospace Engineering Department and Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
4
Mahmutoglu MA, Preetha CJ, Meredig H, Tonn JC, Weller M, Wick W, Bendszus M, Brugnara G, Vollmuth P. Deep Learning-based Identification of Brain MRI Sequences Using a Model Trained on Large Multicentric Study Cohorts. Radiol Artif Intell 2024; 6:e230095. [PMID: 38166331 PMCID: PMC10831512 DOI: 10.1148/ryai.230095]
Abstract
Purpose To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data. Materials and Methods Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63 327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on ResNet-18 architecture to differentiate nine MRI sequence types, including T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high b value), and gradient-recalled echo T2*-weighted and dynamic susceptibility contrast-related images. The two-dimensional-midsection images from each sequence were allocated to training or validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. The prediction accuracy was quantified for each sequence type, and subgroup comparison of model performance was performed using χ2 tests. Results On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model among all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% for susceptibility-weighted images (95% CI: 81.8, 86.6) to 99.8% for T2-weighted images (95% CI: 99.7, 99.9). The ResNet-18 model achieved significantly better accuracy compared with ResNet-50 despite its simpler architecture (97.9% vs 97.1%; P ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional-midsection images for any sequence type (P > .05). 
Conclusion The developed CNN (www.github.com/neuroAI-HD/HD-SEQ-ID) reliably differentiates nine types of MRI sequences within multicenter and large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows. Keywords: MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms Supplemental material is available for this article. © RSNA, 2023.
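Accuracies with 95% CIs, as reported above, are often computed with a normal-approximation (Wald) interval; a small sketch under that assumption (the abstract does not state which CI method the authors used):

```python
import math

def wald_ci(correct: int, total: int, z: float = 1.96):
    """95% normal-approximation (Wald) CI for a proportion such as accuracy."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p - half, p + half

# e.g. 979 of 1000 test images correct (illustrative counts)
lo, hi = wald_ci(979, 1000)
print(f"{979 / 1000:.1%} (95% CI: {lo:.1%}, {hi:.1%})")
# 97.9% (95% CI: 97.0%, 98.8%)
```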
Affiliation(s)
- Mustafa Ahmed Mahmutoglu, Chandrakanth Jayachandran Preetha, Hagen Meredig, Joerg-Christian Tonn, Michael Weller, Wolfgang Wick, Martin Bendszus, Gianluca Brugnara, Philipp Vollmuth
- From the Department of Neuroradiology (M.A.M., C.J.P., H.M., M.B., G.B., P.V.), Department of Neuroradiology, Division for Computational Neuroimaging (M.A.M., C.J.P., H.M., G.B., P.V.), and Department of Neurology (W.W.), Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany; Department of Neurosurgery, University Hospital Munich LMU, Munich, Germany (J.C.T.); and Department of Neurology, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zurich, Switzerland (M.W.)
5
Na S, Ko Y, Ham SJ, Sung YS, Kim MH, Shin Y, Jung SC, Ju C, Kim BS, Yoon K, Kim KW. Sequence-Type Classification of Brain MRI for Acute Stroke Using a Self-Supervised Machine Learning Algorithm. Diagnostics (Basel) 2023; 14:70. [PMID: 38201379 PMCID: PMC10804387 DOI: 10.3390/diagnostics14010070]
Abstract
We propose a self-supervised machine learning (ML) algorithm for sequence-type classification of brain MRI using a supervisory signal from DICOM metadata (i.e., a rule-based virtual label). A total of 1787 brain MRI datasets were constructed, including 1531 from hospitals and 256 from multi-center trial datasets. The ground truth (GT) was generated by two experienced image analysts and checked by a radiologist. An ML framework called ImageSort-net was developed using various features related to MRI acquisition parameters; virtual labels derived from the rule-based labeling system act as the labels for supervised learning. For the performance evaluation of ImageSort-net (MLvirtual), we compared its performance with that of models trained with human expert labels (MLhuman), using as the test set the blank data that the rule-based labeling system failed to label in each dataset. The performance of ImageSort-net (MLvirtual) was comparable to that of MLhuman (98.5% and 99%, respectively) in terms of overall accuracy when trained with hospital datasets. When trained with a relatively small multi-center trial dataset, the overall accuracy was relatively lower than that of MLhuman (95.6% and 99.4%, respectively). After integrating the two datasets and re-training, MLvirtual showed higher accuracy than MLvirtual trained only on multi-center datasets (99.7% versus 95.6%, respectively). Additionally, the multi-center dataset inference performances after the re-training of MLvirtual and MLhuman were identical (99.7%). Training of ML algorithms based on rule-based virtual labels achieved high accuracy for sequence-type classification of brain MRI and enabled us to build a sustainable self-learning system.
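The core idea above — emit a training label only when an unambiguous metadata rule fires, and leave the image blank otherwise — can be sketched as follows. The fields and thresholds are illustrative assumptions, not the actual ImageSort-net rule set:

```python
from typing import Optional

def virtual_label(meta: dict) -> Optional[str]:
    """Rule-based virtual label from DICOM-style metadata.

    Returns None ("blank") when no rule fires; in the paper's setup, such
    blanks are exactly the cases held out to evaluate the trained ML model.
    """
    desc = meta.get("SeriesDescription", "").lower()
    if "flair" in desc:
        return "FLAIR"
    if "dwi" in desc or meta.get("DiffusionBValue", 0) > 0:
        return "DWI"
    if meta.get("EchoTime", 0) > 80:   # long TE (ms): toy T2w rule
        return "T2w"
    return None  # blank: the rules could not infer a label

print(virtual_label({"SeriesDescription": "AX DWI b1000"}))  # DWI
print(virtual_label({"SeriesDescription": "unknown"}))       # None
```

Because the labels come from rules rather than annotators, newly acquired data can be folded back into training automatically, which is what makes the system "sustainable" in the authors' terms.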
Affiliation(s)
- Seongwon Na
- Department of Computer Science and Engineering, Konkuk University, Seoul 05029, Republic of Korea;
- Biomedical Research Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul 05505, Republic of Korea
| | - Yousun Ko
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
| | - Su Jung Ham
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
| | - Yu Sub Sung
- Clinical Research Center, Asan Medical Center, Seoul 05505, Republic of Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea
| | - Mi-Hyun Kim
- Trialinformatics Inc., Seoul 05505, Republic of Korea
- Department of Radiation Science & Technology, Jeonbuk National University, Jeonju 56212, Republic of Korea
| | - Youngbin Shin
- Biomedical Research Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul 05505, Republic of Korea
| | - Seung Chai Jung
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
| | - Chung Ju
- Shin Poong Pharm. Co., Ltd., Seoul 06246, Republic of Korea
- Graduate School of Clinical Pharmacy, CHA University, Pocheon-si 11160, Republic of Korea
| | - Byung Su Kim
- Shin Poong Pharm. Co., Ltd., Seoul 06246, Republic of Korea
| | - Kyoungro Yoon
- Department of Computer Science and Engineering, Konkuk University, Seoul 05029, Republic of Korea;
- Department of Smart ICT Convergence Engineering, Konkuk University, Seoul 05029, Republic of Korea
| | - Kyung Won Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
| |
Collapse
|
6
Alafandi A, van Garderen KA, Klein S, van der Voort SR, Rizopoulos D, Nabors L, Stupp R, Weller M, Gorlia T, Tonn JC, Smits M. Association of pre-radiotherapy tumour burden and overall survival in newly diagnosed glioblastoma adjusted for MGMT promoter methylation status. Eur J Cancer 2023; 188:122-130. [PMID: 37235895 DOI: 10.1016/j.ejca.2023.04.021]
Abstract
PURPOSE We retrospectively evaluated the association between postoperative pre-radiotherapy tumour burden and overall survival (OS), adjusted for the prognostic value of O6-methylguanine DNA methyltransferase (MGMT) promoter methylation, in patients with newly diagnosed glioblastoma treated with radio-/chemotherapy with temozolomide. MATERIALS AND METHODS Patients were included from the CENTRIC (EORTC 26071-22072) and CORE trials if postoperative magnetic resonance imaging scans were available within a timeframe of up to 4 weeks before radiotherapy, including both pre- and post-contrast T1w images and at least one T2w sequence (T2w or T2w-FLAIR). Postoperative (residual) pre-radiotherapy contrast-enhanced tumour (CET) volumes and non-enhanced T2w abnormality (NT2A) tissue volumes were obtained by three-dimensional segmentation. Cox proportional hazard models and Kaplan-Meier estimates were used to assess the association of pre-radiotherapy CET/NT2A volume with OS adjusted for known prognostic factors (age, performance status, MGMT status). RESULTS 408 tumour segmentations (of which 270 MGMT methylated) were included. Median OS in patients with MGMT methylated tumours was 117 weeks versus 61 weeks in MGMT unmethylated tumours (p < 0.001). When stratified for MGMT methylation status, higher CET volume (HR 1.020; 95% confidence interval [CI] 1.013-1.027; p < 0.001) and older age (HR 1.664; 95% CI 1.214-2.281; p = 0.002) were significantly associated with shorter OS, while NT2A volume and performance status were not. CONCLUSION Pre-radiotherapy CET volume was strongly associated with OS in patients receiving radio-/chemotherapy for newly diagnosed glioblastoma, stratified by MGMT promoter methylation status.
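Under the Cox model used above, a per-unit hazard ratio compounds multiplicatively over larger covariate changes; a quick worked example for the reported CET-volume HR (the volume unit is an assumption here, as the abstract does not state it):

```python
# Cox proportional hazards: the HR for a +k change in a covariate
# is (per-unit HR) ** k, since the model is log-linear in the covariate.
hr_per_unit = 1.020               # reported HR for CET volume (per unit)
hr_10_units = hr_per_unit ** 10   # hazard ratio for a +10-unit larger tumour
print(round(hr_10_units, 3))      # 1.219, i.e. ~22% higher hazard
```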
Affiliation(s)
- A Alafandi
- Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands; Brain Tumour Centre, Erasmus MC Cancer Institute, Rotterdam, the Netherlands
- K A van Garderen
- Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands; Brain Tumour Centre, Erasmus MC Cancer Institute, Rotterdam, the Netherlands; Medical Delta, Delft, the Netherlands
- S Klein
- Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- S R van der Voort
- Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- D Rizopoulos
- Department of Biostatistics and Department of Epidemiology, Erasmus MC, Rotterdam, the Netherlands
- L Nabors
- Department of Neurology, University of Alabama at Birmingham, Birmingham, AL, USA
- R Stupp
- Malnati Brain Tumor Institute, Departments of Neurological Surgery and Neurology, Northwestern University, Chicago, IL, USA
- M Weller
- Department of Neurology, University Hospital and University of Zurich, Zurich, Switzerland
- T Gorlia
- European Organisation for Research and Treatment of Cancer Headquarters, Brussels, Belgium
- J-C Tonn
- Department of Neurosurgery, LMU University Munich, Munich, Germany
- M Smits
- Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands; Brain Tumour Centre, Erasmus MC Cancer Institute, Rotterdam, the Netherlands; Medical Delta, Delft, the Netherlands
7
Chakrabarty S, Abidi SA, Mousa M, Mokkarala M, Hren I, Yadav D, Kelsey M, LaMontagne P, Wood J, Adams M, Su Y, Thorpe S, Chung C, Sotiras A, Marcus DS. Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-Oncology (I3CR-WANO). JCO Clin Cancer Inform 2023; 7:e2200177. [PMID: 37146265 PMCID: PMC10281444 DOI: 10.1200/cci.22.00177]
Abstract
PURPOSE Efforts to use growing volumes of clinical imaging data to generate tumor evaluations continue to require significant manual data wrangling, owing to data heterogeneity. Here, we propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data to extract quantitative tumor measurements. MATERIALS AND METHODS Our end-to-end framework (1) classifies MRI sequences using an ensemble classifier, (2) preprocesses the data in a reproducible manner, (3) delineates tumor tissue subtypes using convolutional neural networks, and (4) extracts diverse radiomic features. Moreover, it is robust to missing sequences and adopts an expert-in-the-loop approach in which the segmentation results may be manually refined by radiologists. After the implementation of the framework in Docker containers, it was applied to two retrospective glioma data sets collected from the Washington University School of Medicine (WUSM; n = 384) and The University of Texas MD Anderson Cancer Center (MDA; n = 30), comprising preoperative MRI scans from patients with pathologically confirmed gliomas. RESULTS The scan-type classifier yielded an accuracy of >99%, correctly identifying sequences from 380 of 384 and 30 of 30 sessions from the WUSM and MDA data sets, respectively. Segmentation performance was quantified using the Dice Similarity Coefficient between the predicted and expert-refined tumor masks. The mean Dice scores were 0.882 (±0.244) and 0.977 (±0.04) for whole-tumor segmentation for WUSM and MDA, respectively. CONCLUSION This streamlined framework automatically curated, processed, and segmented raw MRI data of patients with varying grades of gliomas, enabling the curation of large-scale neuro-oncology data sets and demonstrating high potential for integration as an assistive tool in clinical practice.
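Segmentation performance above is quantified with the Dice Similarity Coefficient, 2|A∩B| / (|A| + |B|) for predicted mask A and reference mask B; a minimal sketch on toy voxel sets:

```python
def dice(a: set, b: set) -> float:
    """Dice Similarity Coefficient between two voxel sets
    (e.g. predicted vs. expert-refined tumour masks)."""
    if not a and not b:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

pred = {(0, 0), (0, 1), (1, 0)}   # predicted tumour voxels (toy 2D coordinates)
ref  = {(0, 1), (1, 0), (1, 1)}   # reference voxels
print(dice(pred, ref))  # 0.6666666666666666 (2 shared voxels out of 3 + 3)
```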
Affiliation(s)
- Satrajit Chakrabarty
- Department of Electrical and Systems Engineering, Washington University in St Louis, St Louis, MO
- Syed Amaan Abidi
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
- Mina Mousa
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
- Mahati Mokkarala
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
- Isabelle Hren
- Department of Computer Science & Engineering, Washington University in St Louis, St Louis, MO
- Divya Yadav
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Matthew Kelsey
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
- Pamela LaMontagne
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
- John Wood
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Michael Adams
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Yuzhuo Su
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Sherry Thorpe
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Caroline Chung
- Division of Radiation Oncology, Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Aristeidis Sotiras
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
- Institute for Informatics, Washington University School of Medicine, St Louis, MO
- Daniel S. Marcus
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
8
Salome P, Sforazzini F, Grugnara G, Kudak A, Dostal M, Herold-Mende C, Heiland S, Debus J, Abdollahi A, Knoll M. MR-Class: A Python Tool for Brain MR Image Classification Utilizing One-vs-All DCNNs to Deal with the Open-Set Recognition Problem. Cancers (Basel) 2023; 15:1820. [PMID: 36980707 PMCID: PMC10046648 DOI: 10.3390/cancers15061820]
Abstract
Background: MR image classification in datasets collected from multiple sources is complicated by inconsistent and missing DICOM metadata. Therefore, we aimed to establish a method for the efficient automatic classification of MR brain sequences. Methods: Deep convolutional neural networks (DCNN) were trained as one-vs-all classifiers to differentiate between six classes: T1 weighted (w), contrast-enhanced T1w, T2w, T2w-FLAIR, ADC, and SWI. Each classifier yields a probability, allowing threshold-based and relative probability assignment while excluding images with low probability (label: unknown, open-set recognition problem). Data from three high-grade glioma (HGG) cohorts was assessed; C1 (320 patients, 20,101 MRI images) was used for training, while C2 (197, 11,333) and C3 (256, 3522) were used for testing. Two raters manually checked images through an interactive labeling tool. Finally, MR-Class’ added value was evaluated via radiomics model performance for progression-free survival (PFS) prediction in C2, utilizing the concordance index (C-I). Results: Annotation error rates of approximately 10% were observed in each cohort between the DICOM series descriptions and the derived labels. MR-Class accuracy was 96.7% [95% CI: 95.8, 97.3] for C2 and 94.4% [95% CI: 93.6, 96.1] for C3. A total of 620 images were misclassified; manual assessment of those frequently showed motion artifacts or alterations of anatomy by large tumors. Implementation of MR-Class increased the PFS model C-I by 14.6% on average, compared to a model trained without MR-Class. Conclusions: We provide a DCNN-based method for the sequence classification of brain MR images and demonstrate its usability in two independent HGG datasets.
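The one-vs-all, open-set assignment described in the methods — take the class whose classifier is most confident, but fall back to "unknown" when no probability is high enough — can be sketched as follows (the threshold value and class names are assumptions, not MR-Class internals):

```python
def assign(probs: dict, threshold: float = 0.5) -> str:
    """Open-set assignment from one-vs-all classifier outputs.

    probs maps each class name to the probability produced by that
    class's one-vs-all classifier; images where no classifier clears
    the threshold are labeled 'unknown'.
    """
    best_class, best_p = max(probs.items(), key=lambda kv: kv[1])
    return best_class if best_p >= threshold else "unknown"

print(assign({"T1w": 0.10, "T1wCE": 0.92, "T2w": 0.20}))   # T1wCE
print(assign({"T1w": 0.30, "T1wCE": 0.20, "T2w": 0.25}))   # unknown
```

The "unknown" branch is what addresses the open-set recognition problem: scan types the networks were never trained on are rejected instead of being forced into one of the six classes.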
Affiliation(s)
- Patrick Salome
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - Heidelberg Medical Faculty, Heidelberg University, 69117 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Correspondence: (P.S.); (M.K.)
- Francesco Sforazzini
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - Heidelberg Medical Faculty, Heidelberg University, 69117 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
- Gianluca Grugnara
  - Department of Neuroradiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Andreas Kudak
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
  - Clinical Cooperation Unit Radiation Therapy, German Cancer Research Center, 69120 Heidelberg, Germany
- Matthias Dostal
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
  - Clinical Cooperation Unit Radiation Therapy, German Cancer Research Center, 69120 Heidelberg, Germany
- Christel Herold-Mende
  - Brain Tumour Group, European Organization for Research and Treatment of Cancer, 1200 Brussels, Belgium
  - Division of Neurosurgical Research, Department of Neurosurgery, University of Heidelberg, 69117 Heidelberg, Germany
- Sabine Heiland
  - Department of Neuroradiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Jürgen Debus
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Amir Abdollahi
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Maximilian Knoll
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
  - Correspondence: (P.S.); (M.K.)
9
Chakrabarty S, Abidi SA, Mousa M, Mokkarala M, Kelsey M, LaMontagne P, Sotiras A, Marcus DS. Deep learning-based end-to-end scan-type classification, pre-processing, and segmentation of clinical neuro-oncology studies. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2023; 12469:124690N. [PMID: 39263425 PMCID: PMC11389857 DOI: 10.1117/12.2647656] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/13/2024]
Abstract
Modern neuro-oncology workflows are driven by large collections of high-dimensional MRI data obtained using varying acquisition protocols. The concomitant heterogeneity of this data makes extensive manual curation and pre-processing imperative prior to algorithmic use. The limited efforts invested in automating this curation and processing are fragmented, do not encompass the entire workflow, or still require significant manual intervention. In this work, we propose an artificial intelligence-driven solution for transforming multi-modal raw neuro-oncology MRI Digital Imaging and Communications in Medicine (DICOM) data into quantitative tumor measurements. Our end-to-end framework classifies MRI scans into different structural sequence types, preprocesses the data, and uses convolutional neural networks to segment tumor tissue subtypes. Moreover, it adopts an expert-in-the-loop approach, where segmentation results may be manually refined by radiologists. This framework was implemented as Docker containers (for command-line usage and within the eXtensible Neuroimaging Archive Toolkit [XNAT]) and validated on a retrospective glioma dataset (n = 155) collected from the Washington University School of Medicine, comprising preoperative MRI scans from patients with histopathologically confirmed gliomas. Segmentation results were refined by a neuroradiologist, and performance was quantified using the Dice Similarity Coefficient to compare predicted and expert-refined tumor masks. The scan-type classifier yielded 99.71% accuracy across all sequence types. The segmentation model achieved a mean Dice score of 0.894 (± 0.225) for whole tumor segmentation. The proposed framework can automate tumor segmentation and characterization, thus streamlining workflows in a clinical setting as well as expediting standardized curation of large-scale neuro-oncology datasets in a research setting.
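The Dice Similarity Coefficient used here to compare predicted and expert-refined masks is DSC = 2|A∩B| / (|A| + |B|); a minimal sketch over binary masks follows (illustrative only, not the authors' implementation).

```python
# Dice Similarity Coefficient over binary masks, as used above to compare
# predicted and expert-refined tumor masks: DSC = 2|A∩B| / (|A| + |B|).
# Masks are flat 0/1 sequences here for simplicity; real masks are 3D volumes.

def dice(pred, truth):
    """Dice overlap of two equal-length binary masks, in [0, 1]."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Half of the predicted voxels overlap the reference mask:
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

A score of 1.0 means identical masks, 0.0 means no overlap; the reported 0.894 mean for whole-tumor segmentation sits in the range usually considered strong agreement with an expert reference.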
Affiliation(s)
- Satrajit Chakrabarty
  - Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Syed Amaan Abidi
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Mina Mousa
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Mahati Mokkarala
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Matthew Kelsey
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Pamela LaMontagne
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Aristeidis Sotiras
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
  - Institute for Informatics, Washington University School of Medicine, St. Louis, MO 63110, USA
- Daniel S Marcus
  - Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
10
Kasmanoff N, Lee MD, Razavian N, Lui YW. Deep multi-task learning and random forest for series classification by pulse sequence type and orientation. Neuroradiology 2023; 65:77-87. [PMID: 35906437 PMCID: PMC9361920 DOI: 10.1007/s00234-022-03023-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 07/19/2022] [Indexed: 01/11/2023]
Abstract
PURPOSE Increasingly complex MRI studies and variable series naming conventions reveal limitations of rule-based image routing, especially in health systems with multiple scanners and sites. Accurate methods to identify series based on image content would aid post-processing and PACS viewing. Recent deep/machine learning efforts classify 5-8 basic brain MR sequences. We present an ensemble model combining a convolutional neural network and a random forest classifier to differentiate 25 brain sequences and image orientation. METHODS Series were grouped by descriptions into 25 sequences and 4 orientations. Dataset A, obtained from our institution, was divided into training (16,828 studies; 48,512 series; 112,028 images), validation (4746 studies; 16,612 series; 26,222 images) and test sets (6348 studies; 58,705 series; 3,314,018 images). Dataset B, obtained from a separate hospital, was used for out-of-domain external validation (1252 studies; 2150 series; 234,944 images). We developed an ensemble model combining a 2D convolutional neural network with a custom multi-task learning architecture and random forest classifier trained on DICOM metadata to classify sequence and orientation by series. RESULTS The neural network, random forest, and ensemble achieved 95%, 97%, and 98% overall sequence accuracy on dataset A, and 98%, 99%, and 99% accuracy on dataset B, respectively. All models achieved > 99% orientation accuracy on both datasets. CONCLUSION The ensemble model for series identification accommodates the complexity of brain MRI studies in state-of-the-art clinical practice. Expanding on previous work demonstrating proof-of-concept, our approach is more comprehensive with greater sequence diversity and orientation classification.
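The abstract does not spell out how the CNN's image-based predictions and the random forest's metadata-based predictions are fused into the ensemble; equal-weight averaging of per-class probabilities is one common choice and is shown here purely as an assumption, with hypothetical function and class names.

```python
# Hypothetical sketch of fusing per-class probabilities from an image-based
# CNN and a DICOM-metadata-based random forest. The paper's exact fusion rule
# is not given in the abstract; weighted averaging is shown as one common choice.

def ensemble_predict(cnn_probs: dict, rf_probs: dict, w_cnn: float = 0.5) -> str:
    """Blend two probability dicts over the same classes, then take the argmax."""
    blended = {c: w_cnn * cnn_probs[c] + (1.0 - w_cnn) * rf_probs[c]
               for c in cnn_probs}
    return max(blended, key=blended.get)

# The random forest is confident in T2w-FLAIR; the CNN is unsure.
cnn = {"T1w": 0.40, "T2w": 0.35, "T2w-FLAIR": 0.25}
rf = {"T1w": 0.05, "T2w": 0.10, "T2w-FLAIR": 0.85}
print(ensemble_predict(cnn, rf))  # T2w-FLAIR
```

The appeal of this kind of ensemble is complementarity: image content is robust to mislabeled series descriptions, while DICOM metadata captures acquisition parameters the pixels alone may not distinguish.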
Affiliation(s)
- Noah Kasmanoff
  - Center for Data Science, New York University, New York, NY, USA
- Matthew D. Lee
  - Department of Radiology, NYU Grossman School of Medicine, New York University, New York, NY 10016, USA
- Narges Razavian
  - Center for Data Science, New York University, New York, NY, USA
  - Department of Radiology, NYU Grossman School of Medicine, New York University, New York, NY 10016, USA
  - Department of Population Health, NYU Grossman School of Medicine, New York University, New York, NY, USA
- Yvonne W. Lui
  - Department of Radiology, NYU Grossman School of Medicine, New York University, New York, NY 10016, USA
11
Henssen D, Meijer F, Verburg FA, Smits M. Challenges and opportunities for advanced neuroimaging of glioblastoma. Br J Radiol 2023; 96:20211232. [PMID: 36062962 PMCID: PMC10997013 DOI: 10.1259/bjr.20211232] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Revised: 08/10/2022] [Accepted: 08/25/2022] [Indexed: 11/05/2022] Open
Abstract
Glioblastoma is the most aggressive of glial tumours in adults. On conventional magnetic resonance (MR) imaging, these tumours are observed as irregular enhancing lesions with areas of infiltrating tumour and cortical expansion. More advanced imaging techniques, including diffusion-weighted MRI, perfusion-weighted MRI, MR spectroscopy and positron emission tomography (PET) imaging, have found widespread application to diagnostic challenges in the setting of first diagnosis, treatment planning and follow-up. This review aims to educate readers with regard to the strengths and weaknesses of the clinical application of these imaging techniques. For example, this review shows that (semi)quantitative analysis of the mentioned advanced imaging tools is useful for assessing tumour aggressiveness and tumour extent, and aids in the differentiation of tumour progression from treatment-related effects. Although these techniques may aid in the diagnostic work-up and (post-)treatment phase of glioblastoma, so far no unequivocal imaging strategy is available. Furthermore, the use and further development of artificial intelligence (AI)-based tools could greatly enhance neuroradiological practice by automating labour-intensive tasks such as tumour measurements, and by providing additional diagnostic information such as prediction of tumour genotype. Nevertheless, because advanced imaging and AI-based diagnostics are not yet part of response assessment criteria, there is no harmonised guidance on their use, and this lack of standardisation severely hampers the definition of uniform guidelines.
Affiliation(s)
- Dylan Henssen
  - Department of Medical Imaging, Radboud university medical center, Nijmegen, The Netherlands
- Frederick Meijer
  - Department of Medical Imaging, Radboud university medical center, Nijmegen, The Netherlands
- Frederik A. Verburg
  - Department of Medical Imaging, Radboud university medical center, Nijmegen, The Netherlands
- Marion Smits
  - Department of Medical Imaging, Radboud university medical center, Nijmegen, The Netherlands
12
Lim RP, Kachel S, Villa ADM, Kearney L, Bettencourt N, Young AA, Chiribiri A, Scannell CM. CardiSort: a convolutional neural network for cross vendor automated sorting of cardiac MR images. Eur Radiol 2022; 32:5907-5920. [PMID: 35368227 PMCID: PMC9381634 DOI: 10.1007/s00330-022-08724-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 01/22/2022] [Accepted: 03/05/2022] [Indexed: 01/19/2023]
Abstract
OBJECTIVES To develop an image-based automatic deep learning method to classify cardiac MR images by sequence type and imaging plane for improved clinical post-processing efficiency. METHODS Multivendor cardiac MRI studies were retrospectively collected from 4 centres and 3 vendors. A two-head convolutional neural network ('CardiSort') was trained to classify 35 sequences by imaging sequence (n = 17) and plane (n = 10). Single-vendor training (SVT) on single-centre images (n = 234 patients) and multivendor training (MVT) with multicentre images (n = 434 patients, 3 centres) were performed. Model accuracy and F1 scores on a hold-out test set were calculated, with ground-truth labels provided by an expert radiologist. External validation of MVT (MVTexternal) was performed on data from 3 previously unseen magnet systems from 2 vendors (n = 80 patients). RESULTS Model sequence/plane/overall accuracy and F1-scores were 85.2%/93.2%/81.8% and 0.82 for SVT, and 96.1%/97.9%/94.3% and 0.94 for MVT, on the hold-out test set. MVTexternal yielded sequence/plane/combined accuracy and F1-scores of 92.7%/93.0%/86.6% and 0.86. There was high accuracy for common sequences and conventional cardiac planes. Poor accuracy was observed for underrepresented classes and for sequences with greater variability in acquisition parameters across centres, such as perfusion imaging. CONCLUSIONS A deep learning network was developed on multivendor data to classify MRI studies into component sequences and planes, with external validation. With refinement, it has potential to improve workflow by enabling automated sequence selection, an important first step in completely automated post-processing pipelines. KEY POINTS
• Deep learning can be applied for consistent and efficient classification of cardiac MR image types.
• A multicentre, multivendor study using a deep learning algorithm (CardiSort) showed high classification accuracy on a hold-out test set with good generalisation to images from previously unseen magnet systems.
• CardiSort has potential to improve clinical workflows, as a vital first step in developing fully automated post-processing pipelines.
Affiliation(s)
- Ruth P Lim
  - Austin Health, Melbourne, Australia
  - Departments of Radiology, The University of Melbourne, Melbourne, Australia
  - Department of Surgery (Austin), The University of Melbourne, Melbourne, Australia
- Stefan Kachel
  - Austin Health, Melbourne, Australia
  - Departments of Radiology, The University of Melbourne, Melbourne, Australia
  - Department of Radiology, Columbia University, New York, USA
- Adriana D M Villa
  - School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Leighton Kearney
  - Austin Health, Melbourne, Australia
  - I-MED Radiology, Melbourne, Australia
- Alistair A Young
  - School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Amedeo Chiribiri
  - School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Cian M Scannell
  - School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
13
Reinertsen I, Collins DL, Drouin S. The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation. Front Oncol 2021; 10:619274. [PMID: 33604299 PMCID: PMC7884817 DOI: 10.3389/fonc.2020.619274] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 12/11/2020] [Indexed: 01/17/2023] Open
Abstract
With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures, combined with in-situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. In order to take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and external validity of the models, training data should be collected at several centers by different neurosurgeons, and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such an effort to collect and organize large-scale multi-center datasets should be based on common open-source software and databases. We first describe the development of existing open-source ultrasound-based neuronavigation systems and how these systems have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets "Brain Images of Tumors for Evaluation" (BITE) and "Retrospective evaluation of Cerebral Tumors" (RESECT), which include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through the use of a well-adapted and user-friendly open-source software platform that integrates both continually improved guidance and automated data collection functionalities.
Affiliation(s)
- Ingerid Reinertsen
  - Department of Health Research, SINTEF Digital, Trondheim, Norway
  - Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- D Louis Collins
  - NIST Laboratory, McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montréal, QC, Canada
- Simon Drouin
  - Laboratoire Multimédia, École de Technologie Supérieure, Montréal, QC, Canada