1. Wang K, George-Jones NA, Chen L, Hunter JB, Wang J. Joint Vestibular Schwannoma Enlargement Prediction and Segmentation Using a Deep Multi-task Model. Laryngoscope 2023; 133:2754-2760. [PMID: 36495306] [PMCID: PMC10256836] [DOI: 10.1002/lary.30516]
Abstract
OBJECTIVE To develop a deep-learning-based multi-task (DMT) model for joint tumor enlargement prediction (TEP) and automatic tumor segmentation (TS) in vestibular schwannoma (VS) patients using their initial diagnostic contrast-enhanced T1-weighted (ceT1) magnetic resonance images (MRIs). METHODS Initial ceT1 MRIs of VS patients meeting the inclusion/exclusion criteria of this study were retrospectively collected. VSs on the initial MRIs and their first follow-up scans were manually contoured. Tumor volume and enlargement ratio were measured from the expert contours. A DMT model was constructed for joint TS and TEP. The manually segmented VS volume on the initial scan and the tumor enlargement label (≥20% volumetric growth) were used as the ground truth for training and evaluating the TS and TEP modules, respectively. RESULTS We performed 5-fold cross-validation on the eligible patients (n = 103). The median segmentation Dice coefficient and the prediction sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were 84.20%, 0.68, 0.78, 0.72, and 0.77, respectively. The segmentation result is significantly better than that of a separate TS network (Dice coefficient of 83.13%, p = 0.03) and marginally lower than that of the state-of-the-art segmentation model nnU-Net (Dice coefficient of 86.45%, p = 0.16). The TEP performance is significantly better than that of a single-task prediction model (AUC = 0.60, p = 0.01) and marginally better than that of a radiomics-based prediction model (AUC = 0.70, p = 0.17). CONCLUSION The proposed DMT model has higher learning efficiency and achieves promising performance on TEP and TS. The proposed technology has the potential to improve VS patient management. LEVEL OF EVIDENCE: NA.
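For readers implementing similar pipelines, the two ground-truth quantities used in this study, the segmentation Dice coefficient and the ≥20% volumetric-growth enlargement label, can be sketched as follows (an illustrative example, not the authors' code; the function names are ours):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def enlargement_label(initial_volume: float, followup_volume: float) -> bool:
    """Ground-truth enlargement label: >= 20% volumetric growth."""
    return (followup_volume - initial_volume) / initial_volume >= 0.20
```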
Affiliation(s)
- Kai Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Nicholas A George-Jones: Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Liyuan Chen: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jacob B Hunter: Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
2. Petsiou DP, Martinos A, Spinos D. Applications of Artificial Intelligence in Temporal Bone Imaging: Advances and Future Challenges. Cureus 2023; 15:e44591. [PMID: 37795060] [PMCID: PMC10545916] [DOI: 10.7759/cureus.44591]
Abstract
The applications of artificial intelligence (AI) in temporal bone (TB) imaging have gained significant attention in recent years, revolutionizing the field of otolaryngology and radiology. Accurate interpretation of imaging features of TB conditions plays a crucial role in diagnosing and treating a range of ear-related pathologies, including middle and inner ear diseases, otosclerosis, and vestibular schwannomas. According to multiple clinical studies published in the literature, AI-powered algorithms have demonstrated exceptional proficiency in interpreting imaging findings, not only saving time for physicians but also enhancing diagnostic accuracy by reducing human error. Although several challenges remain in routinely relying on AI applications, the collaboration between AI and healthcare professionals holds the key to better patient outcomes and significantly improved patient care. This overview delivers a comprehensive update on the advances of AI in the field of TB imaging, summarizes recent evidence provided by clinical studies, and discusses future insights and challenges in the widespread integration of AI in clinical practice.
Affiliation(s)
- Dioni-Pinelopi Petsiou: Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Anastasios Martinos: Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Dimitrios Spinos: Otolaryngology-Head and Neck Surgery, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, GBR
3. Neves CA, Liu GS, El Chemaly T, Bernstein IA, Fu F, Blevins NH. Automated Radiomic Analysis of Vestibular Schwannomas and Inner Ears Using Contrast-Enhanced T1-Weighted and T2-Weighted Magnetic Resonance Imaging Sequences and Artificial Intelligence. Otol Neurotol 2023; 44:e602-e609. [PMID: 37464458] [DOI: 10.1097/mao.0000000000003959]
Abstract
OBJECTIVE To objectively evaluate vestibular schwannomas (VSs) and their spatial relationships with the ipsilateral inner ear (IE) in magnetic resonance imaging (MRI) using deep learning. STUDY DESIGN Cross-sectional study. PATIENTS A total of 490 adults with VS, high-resolution MRI scans, and no previous neurotologic surgery. INTERVENTIONS MRI studies of VS patients were split into training (390 patients) and test (100 patients) sets. A three-dimensional convolutional neural network model was trained to segment VS and IE structures using contrast-enhanced T1-weighted and T2-weighted sequences, respectively. Manual segmentations were used as ground truths. Model performance was evaluated on the test set and on an external set of 100 VS patients from a public data set (Vestibular-Schwannoma-SEG). MAIN OUTCOME MEASURES Dice score, relative volume error, average symmetric surface distance, 95th-percentile Hausdorff distance, and centroid locations. RESULTS Dice scores for VS and IE volume segmentations were 0.91 and 0.90, respectively. On the public data set, the model segmented VS tumors with a Dice score of 0.89 ± 0.06 (mean ± standard deviation), relative volume error of 9.8 ± 9.6%, average symmetric surface distance of 0.31 ± 0.22 mm, and 95th-percentile Hausdorff distance of 1.26 ± 0.76 mm. Predicted VS segmentations overlapped with ground truth segmentations in all test subjects. Mean errors of predicted VS volume, VS centroid location, and IE centroid location were 0.05 cm³, 0.52 mm, and 0.85 mm, respectively. CONCLUSIONS A deep learning system can segment VS and IE structures in high-resolution MRI scans with excellent accuracy. This technology offers promise to improve the clinical workflow for assessing VS radiomics and enhance the management of VS patients.
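Two of the outcome measures above, centroid location error and relative volume error, are straightforward to compute from binary masks once the voxel spacing is known; a minimal numpy sketch (ours, not the study's code):

```python
import numpy as np

def centroid_mm(mask: np.ndarray, spacing) -> np.ndarray:
    """Centroid of a binary mask in physical (mm) coordinates."""
    return np.argwhere(mask).mean(axis=0) * np.asarray(spacing, dtype=float)

def centroid_error_mm(pred: np.ndarray, truth: np.ndarray, spacing) -> float:
    """Euclidean distance between predicted and ground-truth centroids."""
    return float(np.linalg.norm(centroid_mm(pred, spacing) - centroid_mm(truth, spacing)))

def relative_volume_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """|V_pred - V_truth| / V_truth, from voxel counts."""
    return abs(int(pred.sum()) - int(truth.sum())) / int(truth.sum())
```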
Affiliation(s)
- George S Liu: Department of Otolaryngology-Head and Neck Surgery, Stanford University
- Isaac A Bernstein: Department of Otolaryngology-Head and Neck Surgery, Stanford University
- Fanrui Fu: Department of Otolaryngology-Head and Neck Surgery, Stanford University
- Nikolas H Blevins: Department of Otolaryngology-Head and Neck Surgery, Stanford University
4. Lee WK, Hong JS, Lin YH, Lu YF, Hsu YY, Lee CC, Yang HC, Wu CC, Lu CF, Sun MH, Pan HC, Wu HM, Chung WY, Guo WY, You WC, Wu YT. Federated Learning: A Cross-Institutional Feasibility Study of Deep Learning Based Intracranial Tumor Delineation Framework for Stereotactic Radiosurgery. J Magn Reson Imaging 2023. [PMID: 37572087] [DOI: 10.1002/jmri.28950]
Abstract
BACKGROUND Deep learning-based segmentation algorithms usually require large or multi-institutional data sets to improve performance and generalization. However, protecting patient privacy is a key concern in multi-institutional studies when conventional centralized learning (CL) is used. PURPOSE To explore the feasibility of a proposed lesion delineation scheme for stereotactic radiosurgery (SRS) under federated learning (FL), which addresses decentralization and privacy protection concerns. STUDY TYPE Retrospective. SUBJECTS 506 and 118 vestibular schwannoma patients aged 15-88 and 22-85 from two institutes, respectively; 1069 and 256 meningioma patients aged 12-91 and 23-85, respectively; 574 and 705 brain metastasis patients aged 26-92 and 28-89, respectively. FIELD STRENGTH/SEQUENCE 1.5T, spin-echo, and gradient-echo. ASSESSMENT The proposed lesion delineation method was integrated into an FL framework, and CL models were established as the baseline. The effect of image standardization strategies was also explored. The Dice coefficient was used to evaluate the agreement between the predicted delineation and the ground truth, which was manually delineated by neurosurgeons and a neuroradiologist. STATISTICAL TESTS The paired t-test was applied to compare the means of the evaluated Dice scores (p < 0.05). RESULTS FL achieved a mean Dice coefficient comparable to CL on the Taipei Veterans General Hospital testing set regardless of standardization and parameter choice; on the Taichung Veterans General Hospital data, CL significantly (p < 0.05) outperformed FL with bi-parametric input, but results were comparable with single-parametric input. On the non-SRS data, FL achieved applicability comparable to CL, with mean Dice 0.78 versus 0.78 (without standardization), and outperformed the baseline models of the two institutes. DATA CONCLUSION The proposed lesion delineation was successfully implemented in an FL framework. The FL models were applicable to the SRS data of each participating institute, and FL exhibited a mean Dice coefficient comparable to CL on the non-SRS data set. Standardization strategies are recommended when FL is used. LEVEL OF EVIDENCE: 4. TECHNICAL EFFICACY: Stage 1.
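The FL setup described here typically relies on federated averaging, in which each institution trains locally and only model weights, never patient images, are shared with the aggregator; a minimal sketch of the aggregation step (assumed FedAvg-style; the paper's actual implementation is not shown, and the site names below are hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-institution parameter lists by a size-weighted
    average (FedAvg): only weights leave each site, never images."""
    total = float(sum(client_sizes))
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two hypothetical institutions with 100 and 300 training cases.
site_a = [np.array([0.0, 2.0])]
site_b = [np.array([2.0, 2.0])]
merged = fedavg([site_a, site_b], [100, 300])
```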
Affiliation(s)
- Wei-Kai Lee: Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Jia-Sheng Hong: Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Yi-Hui Lin: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Yung-Fa Lu: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Ying-Yi Hsu: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Cheng-Chia Lee: Department of Neurosurgery, Taipei Veterans General Hospital, Taipei City, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Huai-Che Yang: Department of Neurosurgery, Taipei Veterans General Hospital, Taipei City, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Chih-Chun Wu: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Chia-Feng Lu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Ming-His Sun: Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan
- Hung-Chuan Pan: Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan
- Hsiu-Mei Wu: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Wen-Yuh Chung: Department of Neurosurgery, Taipei Veterans General Hospital, Taipei City, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Taipei Neuroscience Institute, Taipei Medical University, Shuang Ho Hospital, New Taipei City, Taiwan
- Wan-Yuo Guo: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Weir-Chiang You: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Yu-Te Wu: Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei City, Taiwan; College Medical Device Innovation and Translation Center, National Yang Ming Chiao Tung University, Taipei City, Taiwan
5. Koechli C, Zwahlen DR, Schucht P, Windisch P. Radiomics and machine learning for predicting the consistency of benign tumors of the central nervous system: A systematic review. Eur J Radiol 2023; 164:110866. [PMID: 37207398] [DOI: 10.1016/j.ejrad.2023.110866]
Abstract
PURPOSE Predicting the consistency of benign central nervous system (CNS) tumors prior to surgery helps to improve surgical outcomes. This review summarizes and analyzes the literature on using radiomics and/or machine learning (ML) for consistency prediction. METHOD The Medical Literature Analysis and Retrieval System Online (MEDLINE) database was screened for studies published in English from January 1, 2000. Data were extracted according to the PRISMA guidelines, and the quality of the studies was assessed in compliance with the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). RESULTS Eight publications were included, focusing on pituitary macroadenomas (n = 5), pituitary adenomas (n = 1), and meningiomas (n = 2), using retrospective (n = 6), prospective (n = 1), and unknown (n = 1) study designs with a total of 763 patients for consistency prediction. The studies reported an area under the curve (AUC) of 0.71-0.99 for their respective best-performing models for consistency prediction. Of all studies, four validated their models internally, whereas none validated their models externally. Two articles stated that data were available on request, while the remaining publications lacked information on data availability. CONCLUSIONS Research on consistency prediction of CNS tumors using radiomics and different ML techniques is still at an early stage. Best-practice procedures for radiomics and ML need to be followed more rigorously to facilitate comparison between publications and, accordingly, possible implementation into clinical practice in the future.
Affiliation(s)
- Carole Koechli: Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland; Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Daniel R Zwahlen: Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
- Philippe Schucht: Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Paul Windisch: Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
6. Wang MY, Jia CG, Xu HQ, Xu CS, Li X, Wei W, Chen JC. Development and Validation of a Deep Learning Predictive Model Combining Clinical and Radiomic Features for Short-Term Postoperative Facial Nerve Function in Acoustic Neuroma Patients. Curr Med Sci 2023; 43:336-343. [PMID: 37059936] [PMCID: PMC10103675] [DOI: 10.1007/s11596-023-2713-x]
Abstract
OBJECTIVE This study aims to construct and validate a predictive deep learning model combining clinical data and multi-sequence magnetic resonance imaging (MRI) for short-term postoperative facial nerve function in patients with acoustic neuroma. METHODS A total of 110 patients with acoustic neuroma who underwent surgery through the retrosigmoid sinus approach were included. Clinical data and raw features from four MRI sequences (T1-weighted, T2-weighted, T1-weighted contrast-enhanced, and T2-weighted FLAIR images) were analyzed. Spearman correlation analysis along with least absolute shrinkage and selection operator (LASSO) regression was used to screen combined clinical and radiomic features. Nomogram, machine learning, and convolutional neural network (CNN) models were constructed to predict the prognosis of facial nerve function on the seventh day after surgery. Receiver operating characteristic (ROC) curves and decision curve analysis (DCA) were used to evaluate model performance. A total of 1050 radiomic parameters were extracted, from which 13 radiomic and 3 clinical features were selected. RESULTS The CNN model performed best among all prediction models in the test set, with an area under the curve (AUC) of 0.89 (95% CI, 0.84-0.91). CONCLUSION CNN modeling that combines clinical and multi-sequence MRI radiomic features provides excellent performance for predicting short-term facial nerve function after surgery in patients with acoustic neuroma. As such, CNN modeling may serve as a potential decision-making tool for neurosurgery.
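The AUC reported for these models can be computed without plotting a ROC curve, via the rank-sum identity (the probability that a randomly chosen positive case scores higher than a randomly chosen negative case); an illustrative helper (ours, not the authors'):

```python
import numpy as np

def auc_score(labels, scores) -> float:
    """AUC via the Mann-Whitney identity: fraction of (positive, negative)
    pairs ranked correctly, counting ties as half-correct."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (pos.size * neg.size))
```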
Affiliation(s)
- Meng-Yang Wang: Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Chen-Guang Jia: Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Huan-Qing Xu: School of Medical Information Engineering, Anhui University of Chinese Medicine, Hefei, 230012, China
- Cheng-Shi Xu: Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Xiang Li: Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Wei Wei: Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
- Jin-Cao Chen: Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China
7. Lin YY, Guo WY, Lu CF, Peng SJ, Wu YT, Lee CC. Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. [PMID: 36635582] [DOI: 10.1007/s11060-022-04234-x]
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aims to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI applications in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Spontaneous tumor detection and automated lesion delineation or segmentation were two of the promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction using machine learning algorithms with radiomics-based analysis was another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and further investigations are summarized in this article.
8. Lee WK, Yang HC, Lee CC, Lu CF, Wu CC, Chung WY, Wu HM, Guo WY, Wu YT. Lesion delineation framework for vestibular schwannoma, meningioma and brain metastasis for gamma knife radiosurgery using stereotactic magnetic resonance images. Comput Methods Programs Biomed 2023; 229:107311. [PMID: 36577161] [DOI: 10.1016/j.cmpb.2022.107311]
Abstract
BACKGROUND AND OBJECTIVE Gamma Knife radiosurgery (GKRS) is an effective treatment for smaller intracranial tumors, with a high control rate and low risk of complications. Target delineation in medical MR images is essential for GKRS planning and follow-up. Deep learning-based algorithms can effectively segment targets from medical images and have been widely explored. However, state-of-the-art deep learning-based target delineation uses fixed image sizes and isotropic voxel sizes, which may not be suitable for stereotactic MR images that use different anisotropic voxel sizes and numbers of slices according to the lesion size and location for clinical GKRS planning. This study developed an automatic deep learning-based segmentation scheme for stereotactic MR images. METHODS We retrospectively collected stereotactic MR images from 506 patients with vestibular schwannoma (VS), 1069 patients with meningioma, and 574 patients with brain metastasis (BM) who had been treated using GKRS; the lesion contours and individual T1W+C and T2W MR images were extracted from the GammaPlan system. A three-dimensional patch-based training strategy and a dual-pathway architecture were used to manage inconsistent FOVs and anisotropic voxel sizes. Furthermore, we used two-parametric MR images as training input to segment regions with different image characteristics (e.g., cystic lesions) effectively. RESULTS Our results for VS and BM demonstrated that the model trained using two-parametric MR images significantly outperformed the model trained using single-parametric images, with median Dice coefficients of 0.91, 0.05 versus 0.90, 0.06, and 0.82, 0.23 versus 0.78, 0.34, respectively, whereas predicted delineations in meningiomas using the dual-pathway model were dominated by single-parametric images (median Dice coefficients 0.83, 0.17 versus 0.84, 0.22). Finally, we combined the three data sets to train the models, achieving comparable or even higher testing median Dice (VS: 0.91, 0.07; meningioma: 0.83, 0.22; BM: 0.84, 0.23) across the three diseases while using two-parametric input. CONCLUSIONS Our proposed deep learning-based tumor segmentation scheme was successfully applied to multiple types of intracranial tumors (VS, meningioma, and BM) treated using GKRS, segmenting tumors effectively from stereotactic MR image volumes for use in GKRS planning.
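The three-dimensional patch-based strategy for inconsistent FOVs amounts to cropping fixed-size training patches and zero-padding wherever a patch extends past the volume border; a minimal illustration (a hypothetical helper, not from the paper):

```python
import numpy as np

def extract_patch(volume: np.ndarray, center, size) -> np.ndarray:
    """Crop a patch of shape `size` centred on the `center` voxel,
    zero-padding where the patch extends beyond the volume, so volumes
    with different FOVs yield patches of one fixed shape."""
    patch = np.zeros(size, dtype=volume.dtype)
    src, dst = [], []
    for c, s, dim in zip(center, size, volume.shape):
        lo = c - s // 2
        lo_clip, hi_clip = max(lo, 0), min(lo + s, dim)
        src.append(slice(lo_clip, hi_clip))
        dst.append(slice(lo_clip - lo, lo_clip - lo + (hi_clip - lo_clip)))
    patch[tuple(dst)] = volume[tuple(src)]
    return patch
```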
Affiliation(s)
- Wei-Kai Lee: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Huai-Che Yang: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Cheng-Chia Lee: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Chia-Feng Lu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chih-Chun Wu: Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wen-Yuh Chung: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Hsiu-Mei Wu: Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wan-Yuo Guo: Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yu-Te Wu: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan; Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
9. Tsutsumi K, Soltanzadeh-Zarandi S, Khosravi P, Goshtasbi K, Djalilian HR, Abouzari M. Machine Learning in the Management of Lateral Skull Base Tumors: A Systematic Review. OHBM 2022; 3:7. [DOI: 10.3390/ohbm3040007]
Abstract
The application of machine learning (ML) techniques to otolaryngology remains a topic of interest and prevalence in the literature, though no previous article has summarized the current state of ML applied to the diagnosis and management of lateral skull base (LSB) tumors. We therefore present a systematic overview of previous applications of ML techniques to the management of LSB tumors. Independent searches were conducted on PubMed and Web of Science between August 2020 and February 2021 to identify literature pertaining to the use of ML techniques in LSB tumor surgery written in the English language. All articles were assessed with regard to their application task, ML methodology, and outcomes. A total of 32 articles were examined. The number of articles involving applications of ML techniques to LSB tumor surgery has significantly increased since the first article relevant to this field was published in 1994. The most commonly employed ML category was tree-based algorithms. Most articles fell into the category of surgical management (13; 40.6%), followed by disease classification (8; 25%). Overall, the application of ML techniques to the management of LSB tumors has evolved rapidly over the past two decades, and the anticipated growth in the future could significantly augment surgical outcomes and the management of LSB tumors.
10. Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348] [DOI: 10.1002/mp.15936]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms have empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Peng Cheng Laboratory, Shenzhen, 518066, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
11. Windisch P, Koechli C, Rogers S, Schröder C, Förster R, Zwahlen DR, Bodis S. Machine Learning for the Detection and Segmentation of Benign Tumors of the Central Nervous System: A Systematic Review. Cancers (Basel) 2022; 14:2676. [PMID: 35681655] [PMCID: PMC9179850] [DOI: 10.3390/cancers14112676]
Abstract
Simple Summary: Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases.
Abstract: Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question as to whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
|
12
|
Koechli C, Vu E, Sager P, Näf L, Fischer T, Putora PM, Ehret F, Fürweger C, Schröder C, Förster R, Zwahlen DR, Muacevic A, Windisch P. Convolutional Neural Networks to Detect Vestibular Schwannomas on Single MRI Slices: A Feasibility Study. Cancers (Basel) 2022; 14:2069. [PMID: 35565199] [DOI: 10.3390/cancers14092069]
Abstract
Simple Summary: Because they take inter-slice information into account, 3D- and 2.5D-convolutional neural networks (CNNs) potentially perform better in tumor detection tasks than 2D-CNNs. However, this potential benefit comes at the expense of increased computational power and the need for segmentations as an input. Therefore, in this study we aimed to detect vestibular schwannomas (VSs) in individual magnetic resonance imaging (MRI) slices by using a 2D-CNN. We retrained (539 patients) and internally validated (94 patients) a pretrained CNN using contrast-enhanced MRI slices from one institution, and externally validated the CNN using contrast-enhanced MRI slices from another institution. This resulted in an accuracy of 0.949 (95% CI 0.935–0.963) and 0.912 (95% CI 0.866–0.958) for the internal and external validation, respectively. Our findings indicate that 2D-CNNs might be a promising alternative to 2.5-/3D-CNNs for certain tasks thanks to the decreased requirement for computational power and the fact that there is no need for segmentations.

Abstract: In this study, we aimed to detect vestibular schwannomas (VSs) in individual magnetic resonance imaging (MRI) slices by using a 2D-CNN. A pretrained CNN (ResNet-34) was retrained and internally validated using contrast-enhanced T1-weighted (T1c) MRI slices from one institution. In a second step, the model was externally validated using T1c- and T1-weighted (T1) slices from a different institution. As a substitute, bisected slices with and without tumors, derived from whole transversal slices that contained part of the unilateral VS, were used. The model predictions were assessed based on the categorical accuracy and confusion matrices. A total of 539, 94, and 74 patients were included for training, internal validation, and external T1c validation, respectively. This resulted in an accuracy of 0.949 (95% CI 0.935–0.963) for the internal validation and 0.912 (95% CI 0.866–0.958) for the external T1c validation. We suggest that 2D-CNNs might be a promising alternative to 2.5-/3D-CNNs for certain tasks, given the decreased demand for computational power and the absence of a need for segmentations. However, further research is needed on the difference between 2D-CNNs and more complex architectures.
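The slice-level accuracies with 95% confidence intervals quoted above can be reproduced from raw prediction counts. A minimal Python sketch, assuming a Wald (normal-approximation) interval; the counts below are illustrative, not the study's data:

```python
import math

def accuracy_with_ci(n_correct: int, n_total: int, z: float = 1.96):
    """Categorical accuracy with a Wald 95% confidence interval."""
    acc = n_correct / n_total
    half_width = z * math.sqrt(acc * (1.0 - acc) / n_total)
    # Clamp the interval to the valid [0, 1] range.
    return acc, max(0.0, acc - half_width), min(1.0, acc + half_width)

# Illustrative counts only -- not taken from the study.
acc, ci_lo, ci_hi = accuracy_with_ci(912, 1000)
print(f"accuracy {acc:.3f} (95% CI {ci_lo:.3f}-{ci_hi:.3f})")
```

Published papers sometimes use Wilson or bootstrap intervals instead; which variant the authors used is not stated in the abstract.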
|
13
|
Abdel Razek AAK, Alksas A, Shehata M, AbdelKhalek A, Abdel Baky K, El-Baz A, Helmy E. Clinical applications of artificial intelligence and radiomics in neuro-oncology imaging. Insights Imaging 2021; 12:152. [PMID: 34676470] [PMCID: PMC8531173] [DOI: 10.1186/s13244-021-01102-6]
Abstract
This article is a comprehensive review of the basic background, techniques, and clinical applications of artificial intelligence (AI) and radiomics in the field of neuro-oncology. A variety of AI and radiomics approaches use conventional and advanced imaging techniques to differentiate brain tumors from non-neoplastic lesions such as inflammatory and demyelinating brain lesions. AI is used in the diagnosis of gliomas and the discrimination of gliomas from lymphomas and metastases. Semiautomated and automated tumor segmentation has also been developed for radiotherapy planning and follow-up, and AI has a role in the grading, prediction of treatment response, and prognosis of gliomas. Radiogenomics allows the imaging phenotype of a tumor to be connected to its molecular environment. In addition, AI is applied to the assessment of extra-axial brain tumors and pediatric tumors, with high performance in tumor detection, classification, and stratification of patients' prognoses.
Affiliation(s)
- Ahmed Alksas
  - BioImaging Lab, Department of Bioengineering, University of Louisville, Louisville, KY, 40292, USA
- Mohamed Shehata
  - BioImaging Lab, Department of Bioengineering, University of Louisville, Louisville, KY, 40292, USA
- Amr AbdelKhalek
  - Internship at Mansoura University Hospital, Mansoura Faculty of Medicine, Mansoura, Egypt
- Khaled Abdel Baky
  - Department of Diagnostic Radiology, Faculty of Medicine, Port Said University, Port Said, Egypt
- Ayman El-Baz
  - BioImaging Lab, Department of Bioengineering, University of Louisville, Louisville, KY, 40292, USA
- Eman Helmy
  - Department of Diagnostic Radiology, Faculty of Medicine, Mansoura University, Elgomheryia Street, Mansoura, 3512, Egypt
|
14
|
Sager P, Näf L, Vu E, Fischer T, Putora PM, Ehret F, Fürweger C, Schröder C, Förster R, Zwahlen DR, Muacevic A, Windisch P. Convolutional Neural Networks for Classifying Laterality of Vestibular Schwannomas on Single MRI Slices-A Feasibility Study. Diagnostics (Basel) 2021; 11:1676. [PMID: 34574017] [PMCID: PMC8465488] [DOI: 10.3390/diagnostics11091676]
Abstract
Introduction: Many proposed algorithms for tumor detection rely on 2.5/3D convolutional neural networks (CNNs) and the input of segmentations for training. The purpose of this study is therefore to assess the performance of tumor detection on single MRI slices containing vestibular schwannomas (VS) as a computationally inexpensive alternative that does not require the creation of segmentations. Methods: A total of 2992 T1-weighted contrast-enhanced axial slices containing VS from the MRIs of 633 patients were labeled according to tumor location, of which 2538 slices from 539 patients were used for training a CNN (ResNet-34) to classify them according to the side of the tumor as a surrogate for detection and 454 slices from 94 patients were used for internal validation. The model was then externally validated on contrast-enhanced and non-contrast-enhanced slices from a different institution. Categorical accuracy was noted, and the results of the predictions for the validation set are provided with confusion matrices. Results: The model achieved an accuracy of 0.928 (95% CI: 0.869-0.987) on contrast-enhanced slices and 0.795 (95% CI: 0.702-0.888) on non-contrast-enhanced slices from the external validation cohorts. The implementation of Gradient-weighted Class Activation Mapping (Grad-CAM) revealed that the focus of the model was not limited to the contrast-enhancing tumor but to a larger area of the cerebellum and the cerebellopontine angle. Conclusions: Single-slice predictions might constitute a computationally inexpensive alternative to training 2.5/3D-CNNs for certain detection tasks in medical imaging even without the use of segmentations. Head-to-head comparisons between 2D and more sophisticated architectures could help to determine the difference in accuracy, especially for more difficult tasks.
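The categorical accuracy and confusion matrices used above can be computed directly from paired label vectors. A minimal numpy sketch; the labels below are made up for illustration (0 = left-sided tumor, 1 = right-sided tumor, as an assumed encoding):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Illustrative slice labels only -- not the study's data.
y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
cm = confusion_matrix(y_true, y_pred, n_classes=2)
accuracy = np.trace(cm) / cm.sum()  # fraction of correctly classified slices
print(cm)
print(f"categorical accuracy: {accuracy:.3f}")
```

In practice a library routine such as scikit-learn's `confusion_matrix` would typically be used; the explicit loop is shown only to make the definition concrete.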
Affiliation(s)
- Philipp Sager
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland
- Lukas Näf
  - Department of Radiology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland
- Erwin Vu
  - Department of Radiation Oncology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland
- Tim Fischer
  - Department of Radiology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland
- Paul M. Putora
  - Department of Radiation Oncology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland
  - Department of Radiation Oncology, University of Bern, 3010 Bern, Switzerland
- Felix Ehret
  - Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, 13353 Berlin, Germany
  - European Cyberknife Center, 81377 Munich, Germany
- Christoph Fürweger
  - European Cyberknife Center, 81377 Munich, Germany
  - Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, 50937 Cologne, Germany
- Christina Schröder
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland
- Robert Förster
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland
  - Faculty of Medicine, University of Zurich, 8006 Zurich, Switzerland
- Daniel R. Zwahlen
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland
  - Faculty of Medicine, University of Zurich, 8006 Zurich, Switzerland
- Paul Windisch
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland
  - European Cyberknife Center, 81377 Munich, Germany
|
15
|
Huang T, Lee W, Wu C, Lee C, Lu C, Yang H, Lin C, Chung W, Wang P, Chen Y, Wu H, Guo W, Wu Y. Detection of Vestibular Schwannoma on Triple-parametric Magnetic Resonance Images Using Convolutional Neural Networks. J Med Biol Eng. [DOI: 10.1007/s40846-021-00638-8]
Abstract
Purpose
The first step in typical treatment of vestibular schwannoma (VS) is to localize the tumor region, which is time-consuming and subjective because it relies on repeatedly reviewing different parametric magnetic resonance (MR) images. A reliable, automatic VS detection method can streamline the process.
Methods
A convolutional neural network architecture, namely YOLO-v2 with a residual network as a backbone, was used to detect VS tumors on MR images. To improve performance, T1-weighted contrast-enhanced, T2-weighted, and T1-weighted images were combined into triple-channel images for feature learning. The triple-channel images were cropped into three sizes to serve as input images for YOLO-v2. VS detection performance was evaluated for two backbone residual networks that downsample the inputs by factors of 16 and 32.
Results
The results demonstrated the VS detection capability of YOLO-v2 with a residual-network backbone. The average precision was 0.7953 for a model with 416 × 416-pixel input images and a downsampling factor of 16, with both the confidence-score and intersection-over-union thresholds set to 0.5. In addition, under an appropriate confidence-score threshold, a higher average precision of 0.8171 was attained by a model with 448 × 448-pixel input images and a downsampling factor of 16.
Conclusion
We demonstrated successful VS tumor detection using YOLO-v2 with a residual-network backbone on resized triple-parametric MR images. The results indicated the influence of image size, downsampling strategy, and confidence-score threshold on VS tumor detection.
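In detection evaluations like the one above, a predicted box is typically counted as a true positive when its intersection-over-union (IoU) with the ground-truth box clears a threshold such as 0.5. A minimal sketch of the IoU computation; the boxes are illustrative (x1, y1, x2, y2) pixel coordinates, not values from the study:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to 0 when boxes do not overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Illustrative predicted vs. ground-truth tumor boxes.
pred, truth = (100, 100, 140, 140), (110, 110, 150, 150)
score = iou(pred, truth)
print(f"IoU = {score:.3f}; true positive at threshold 0.5: {score >= 0.5}")
```

Average precision then summarizes precision over recall after ranking detections by confidence score, which is why both thresholds appear in the results above.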
|
16
|
Lee CC, Lee WK, Wu CC, Lu CF, Yang HC, Chen YW, Chung WY, Hu YS, Wu HM, Wu YT, Guo WY. Applying artificial intelligence to longitudinal imaging analysis of vestibular schwannoma following radiosurgery. Sci Rep 2021; 11:3106. [PMID: 33542422] [PMCID: PMC7862268] [DOI: 10.1038/s41598-021-82665-8]
Abstract
Artificial intelligence (AI) has been applied with considerable success in the fields of radiology, pathology, and neurosurgery. It is expected that AI will soon be used to optimize strategies for the clinical management of patients based on intensive imaging follow-up. Our objective in this study was to establish an algorithm by which to automate the volumetric measurement of vestibular schwannoma (VS) using a series of parametric MR images following radiosurgery. Based on a sample of 861 consecutive patients who underwent Gamma Knife radiosurgery (GKRS) between 1993 and 2008, the proposed end-to-end deep-learning scheme with an automated pre-processing pipeline was applied to a series of 1290 MR examinations (T1W+C and T2W parametric MR images), all of which were performed under consistent imaging acquisition protocols. The relative volume differences (RVD) between AI-based volumetric measurements and clinical measurements performed by expert radiologists were +1.74%, -0.31%, -0.44%, -0.19%, -0.01%, and +0.26% at the successive follow-up time points, regardless of the state of the tumor (progressed, pseudo-progressed, or regressed). This study outlines an approach to the evaluation of treatment responses via a novel volumetric measurement algorithm that can be used longitudinally following GKRS for VS. The proposed deep-learning AI scheme is applicable to longitudinal follow-up assessments following a variety of therapeutic interventions.
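The relative volume difference (RVD) reported above compares automated against expert volumes as a signed percentage. A minimal sketch under that assumption; the volumes below are illustrative, not the study's measurements:

```python
def relative_volume_difference(v_auto: float, v_expert: float) -> float:
    """Signed RVD in percent: positive when the automated volume is larger."""
    return (v_auto - v_expert) / v_expert * 100.0

# Illustrative tumor volumes in mm^3.
rvd = relative_volume_difference(v_auto=1017.4, v_expert=1000.0)
print(f"RVD = {rvd:+.2f}%")
```

A signed metric like this distinguishes systematic over- from under-segmentation across follow-up time points, which an absolute or Dice-style overlap metric would not.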
Affiliation(s)
- Cheng-Chia Lee
  - School of Medicine, National Yang-Ming University, Taipei, Taiwan
  - Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
  - Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Wei-Kai Lee
  - Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Chih-Chun Wu
  - Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan
  - School of Medicine, National Yang-Ming University, Taipei, Taiwan
- Chia-Feng Lu
  - Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Huai-Che Yang
  - School of Medicine, National Yang-Ming University, Taipei, Taiwan
  - Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
- Yu-Wei Chen
  - Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
- Wen-Yuh Chung
  - School of Medicine, National Yang-Ming University, Taipei, Taiwan
  - Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
- Yong-Sin Hu
  - Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan
  - School of Medicine, National Yang-Ming University, Taipei, Taiwan
- Hsiu-Mei Wu
  - Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan
  - School of Medicine, National Yang-Ming University, Taipei, Taiwan
- Yu-Te Wu
  - Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
  - Institute of Biophotonics, National Yang-Ming University, Taipei, Taiwan
  - Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Wan-Yuo Guo
  - Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan
  - School of Medicine, National Yang-Ming University, Taipei, Taiwan
|