1. Bathla G, Mehta PM, Soni N, Johnson M, Benson JC, Messina SA, Farnsworth P, Agarwal A, Carlson ML, Lane JI. Evaluation of Vestibular Schwannoma Size across Time: How Well Do the Experts Perform and What Can Be Improved? AJNR Am J Neuroradiol 2025:ajnr.A8614. PMID: 39638352. DOI: 10.3174/ajnr.a8614.
Abstract
BACKGROUND AND PURPOSE 2D linear measurements are often used in routine clinical practice during vestibular schwannoma (VS) follow-up, primarily because of their wider availability and ease of use. We sought to determine radiologists' performance compared with 3D volumetry, along with the impact of the number of linear measurements, slice thickness, and tumor volume on these parameters. MATERIALS AND METHODS Specificity and accuracy estimates and 95% confidence intervals were calculated for the entire cohort and for subgroups based on tumor volume (<400, 400-800, >800 mm3), slice thickness (≤1.5 mm or >1.5 mm), and the number of linear dimensions measured in the radiology report (0-1 or 2-3). RESULTS There was weak agreement between the radiologists' inference and VS volumetry (0.45; 95% CI, 0.41-0.53). Agreement was lower when 0-1 tumor dimensions were measured (0.29; 95% CI, 0.21-0.42), for smaller tumors of <400 mm3 (0.37; 95% CI, 0.28-0.45), and for thick-section imaging of >1.5 mm (0.36; 95% CI, 0.25-0.46). Reader sensitivity was modest (0.49-0.54), while accuracy for detecting interval change of ≤±25% was weak (0.32-0.38). Reader performance trended toward improvement with thin-section imaging, measurement of 2-3 VS dimensions, and larger tumors. CONCLUSIONS In routine practice, radiologists show poor agreement with volumetric results, poor sensitivity for detecting interval change, and poor overall accuracy for volumetric changes of ≤±25%. In the absence of volumetric measurements, radiologists need to be more diligent when evaluating interval changes in VS.
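The cube relationship between a linear dimension and volume helps explain why 2D measurements miss modest volumetric changes like the ±25% threshold above. A minimal sketch of that arithmetic (our own illustration under a spherical approximation, not data from the paper):

```python
# Illustrative arithmetic (not from the paper): under a spherical
# approximation V = (pi/6) * d**3, a modest change in a single linear
# diameter maps to a much larger relative change in volume, which is
# one reason 2D measurements can miss >=25% volume changes.
from math import pi

def sphere_volume(d_mm: float) -> float:
    """Volume (mm^3) of a sphere with diameter d_mm."""
    return (pi / 6.0) * d_mm ** 3

def relative_volume_change(d0_mm: float, d1_mm: float) -> float:
    """Fractional volume change implied by a diameter change."""
    return sphere_volume(d1_mm) / sphere_volume(d0_mm) - 1.0

# A 10 mm -> 10.8 mm diameter change (8%, near measurement noise)
# already implies a ~26% volume increase.
change = relative_volume_change(10.0, 10.8)
```

Because volume scales with the cube of diameter, a diameter change within typical measurement variability can cross the ±25% volume threshold.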
Affiliation(s)
- Girish Bathla: Division of Neuroradiology, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Parv M Mehta: Department of Radiology, UT Health San Antonio, San Antonio, Texas
- Neetu Soni: Division of Neuroradiology, Department of Radiology, Mayo Clinic, Jacksonville, Florida
- Mathew Johnson: Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
- John C Benson: Division of Neuroradiology, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Steven A Messina: Division of Neuroradiology, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Paul Farnsworth: Division of Neuroradiology, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Amit Agarwal: Division of Neuroradiology, Department of Radiology, Mayo Clinic, Jacksonville, Florida
- Matthew L Carlson: Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- John I Lane: Division of Neuroradiology, Department of Radiology, Mayo Clinic, Rochester, Minnesota
2. Cornelissen S, Schouten SM, Langenhuizen PPJH, Kunst HPM, Verheul JB, De With PHN. Towards clinical implementation of automated segmentation of vestibular schwannomas: a reliability study comparing AI and human performance. Neuroradiology 2025;67:1049-1059. PMID: 40183966. PMCID: PMC12040986. DOI: 10.1007/s00234-025-03611-3.
Abstract
PURPOSE To evaluate the clinimetric reliability of automated vestibular schwannoma (VS) segmentations by comparison with human inter-observer variability on T1-weighted contrast-enhanced MRI scans. METHODS This retrospective study employed MR images, including follow-up, from 1,015 patients (median age: 59; 511 men), resulting in 1,856 unique scans. Two nnU-Net models were trained using fivefold cross-validation: a single-center segmentation model, along with a multi-center model using additional publicly available data. Geometric segmentation metrics (e.g., the Dice score) were used to evaluate model performance. To quantitatively assess the clinimetric reliability of the models, automated tumor volumes from a separate test set were compared with human inter-observer variability using the limits of agreement with the mean (LOAM) procedure. Additionally, new agreement limits that include the automated annotations were calculated. RESULTS Both models performed comparably to current state-of-the-art VS segmentation models, with median Dice scores of 91.6% and 91.9% for the single-center and multi-center models, respectively. There was a stark difference in clinimetric performance between the models: automated tumor volumes from the multi-center model fell within the human agreement limits in 73% of cases, compared with 44% for the single-center model. Newly calculated agreement limits that included the single-center model were very wide, whereas for the multi-center model the new agreement limits were comparable to human inter-observer variability. CONCLUSION Models with excellent geometric metrics do not necessarily have high clinimetric reliability, demonstrating the need to evaluate models clinimetrically as part of the clinical implementation process. The multi-center model displayed high reliability, warranting its possible future use in clinical practice. However, caution should be exercised when employing the model for small tumors, as reliability was found to be volume-dependent.
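For reference, the Dice score used above as the geometric metric can be sketched as a simple overlap ratio over binary masks (a generic implementation; the toy mask shapes and values are ours, not the study's):

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between a predicted and a ground-truth binary mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 2D example: 4-voxel prediction vs 6-voxel ground truth, 4 shared.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
score = dice_score(a, b)  # 2*4 / (4+6) = 0.8
```

The study's point is precisely that a high value of this geometric quantity does not by itself guarantee that derived tumor volumes fall within human agreement limits.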
Affiliation(s)
- Stefan Cornelissen: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Sammy M Schouten: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, Netherlands; Department of Otolaryngology, Radboud University Medical Center, Nijmegen, Netherlands; Department of Otolaryngology, Maastricht University Medical Centre+, Maastricht, Netherlands
- Patrick P J H Langenhuizen: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Henricus P M Kunst: Department of Otolaryngology, Radboud University Medical Center, Nijmegen, Netherlands; Department of Otolaryngology, Maastricht University Medical Centre+, Maastricht, Netherlands
- Jeroen B Verheul: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
- Peter H N De With: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
3. Spinos D, Martinos A, Petsiou DP, Mistry N, Garas G. Artificial Intelligence in Temporal Bone Imaging: A Systematic Review. Laryngoscope 2025;135:973-981. PMID: 39352072. DOI: 10.1002/lary.31809.
Abstract
OBJECTIVE The human temporal bone comprises more than 30 identifiable anatomical components. With the demand for precise image interpretation in this complex region, the use of artificial intelligence (AI) applications is steadily increasing. This systematic review aims to highlight the current role of AI in temporal bone imaging. DATA SOURCES A systematic review of English-language publications, searching MEDLINE (PubMed), the Cochrane Library, and EMBASE. REVIEW METHODS The search algorithm combined key terms such as 'artificial intelligence,' 'machine learning,' 'deep learning,' 'neural network,' 'temporal bone,' and 'vestibular schwannoma.' Additionally, manual retrieval was conducted to capture any studies potentially missed in the initial search. All abstracts and full texts were screened against the inclusion and exclusion criteria. RESULTS A total of 72 studies were included; 95.8% were retrospective and 88.9% were based on internal databases. Approximately two-thirds involved an AI-to-human comparison. Computed tomography (CT) was the imaging modality in 54.2% of the studies, with vestibular schwannoma (VS) being the most frequently studied entity (37.5%). Fifty-eight of the 72 articles employed neural networks, with 72.2% using various types of convolutional neural network models. Quality assessment of the included publications yielded a mean score of 13.6 ± 2.5 on a 20-point scale based on the CONSORT-AI extension. CONCLUSION Current research highlights AI's potential to enhance diagnostic accuracy, deliver faster results, and reduce performance errors compared with clinicians, thus improving patient care. However, the shortcomings of the existing research, often marked by heterogeneity and variable quality, underscore the need for more standardized methodological approaches to ensure the consistency and reliability of future data. LEVEL OF EVIDENCE NA Laryngoscope, 135:973-981, 2025.
Affiliation(s)
- Dimitrios Spinos: South Warwickshire NHS Foundation Trust, Warwick, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anastasios Martinos: National and Kapodistrian University of Athens School of Medicine, Athens, Greece
- Nina Mistry: Gloucestershire Hospitals NHS Foundation Trust, ENT, Head and Neck Surgery, Gloucester, UK
- George Garas: Surgical Innovation Centre, Department of Surgery and Cancer, Imperial College London, St. Mary's Hospital, London, UK; Athens Medical Center, Marousi & Psychiko Clinic, Athens, Greece
4. Jing B, Wang K, Schmitz E, Tang S, Li Y, Zhang Y, Wang J. Prediction of pathological complete response to chemotherapy for breast cancer using deep neural network with uncertainty quantification. Med Phys 2024;51:9385-9393. PMID: 39369684. DOI: 10.1002/mp.17451.
Abstract
BACKGROUND The I-SPY 2 trial is a nationwide, multi-institutional clinical trial designed to evaluate multiple new therapeutic drugs for high-risk breast cancer. Previous studies suggest that pathological complete response (pCR) is a viable indicator of the long-term outcomes of neoadjuvant chemotherapy for high-risk breast cancer. While pCR can be assessed during surgery after the chemotherapy, early prediction of pCR before the completion of chemotherapy may facilitate personalized treatment management to achieve an improved outcome. Notably, the acquisition of dynamic contrast-enhanced magnetic resonance (DCEMR) images at multiple time points during the I-SPY 2 trial opens up the possibility of achieving early pCR prediction. PURPOSE In this study, we investigated the feasibility of early prediction of pCR to neoadjuvant chemotherapy using multi-time-point DCEMR images and clinical data acquired in the I-SPY 2 trial. The prediction uncertainty was also quantified to allow physicians to make patient-specific decisions on treatment plans based on the level of associated uncertainty. METHODS The dataset used in our study included 624 patients with DCEMR images acquired at 3 time points before the completion of chemotherapy: pretreatment (T0), after 3 cycles of treatment (T1), and after 12 cycles of treatment (T2). A convolutional long short-term memory (LSTM) network-based deep learning model, which integrated multi-time-point deep image representations with clinical data, including tumor subtypes, was developed to predict pCR. The performance of the model was evaluated via nested 5-fold cross-validation. Moreover, we also quantified the prediction uncertainty for each patient through test-time augmentation. To investigate the relationship between predictive performance and uncertainty, the area under the receiver operating characteristic curve (AUROC) was assessed on subgroups of patients stratified by the uncertainty score. RESULTS By integrating clinical data and DCEMR images obtained at three time points before treatment completion, the AUROC reached 0.833 with a sensitivity of 0.723 and a specificity of 0.800. This performance was significantly superior (p < 0.01) to that of models using only images (AUROC = 0.706) or only clinical data (AUROC = 0.746). After stratifying the patients into eight subgroups based on the uncertainty score, we found that group #1, with the lowest uncertainty, had a superior AUROC of 0.873. The AUROC decreased to 0.637 for group #8, which had the highest uncertainty. CONCLUSIONS The results indicate that our convolutional LSTM network-based deep learning model can be used to predict pCR before the completion of chemotherapy. By combining clinical data and multi-time-point deep image representations, our model outperforms models built solely on clinical or image data. Estimating prediction uncertainty may enable physicians to prioritize or disregard predictions based on their associated uncertainties. This approach could potentially enhance the personalization of breast cancer therapy.
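The test-time augmentation (TTA) idea the abstract describes can be sketched generically: run the model on several randomly augmented copies of the input and take the spread of predictions as an uncertainty score. The toy "model" and flip/noise augmentations below are stand-ins for illustration, not the paper's network or pipeline:

```python
# Sketch of test-time-augmentation uncertainty: predict on augmented
# copies of an input and use the standard deviation of the predictions
# as a per-case uncertainty score (toy model, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x: np.ndarray) -> float:
    """Stand-in classifier: a probability from the mean intensity."""
    return float(1.0 / (1.0 + np.exp(-x.mean())))

def tta_predict(x: np.ndarray, n_aug: int = 8):
    """Mean prediction and std-dev uncertainty over augmented inputs."""
    preds = []
    for _ in range(n_aug):
        aug = np.flip(x, axis=int(rng.integers(x.ndim)))     # random flip
        aug = aug + rng.normal(0.0, 0.01, size=x.shape)      # mild noise
        preds.append(toy_model(aug))
    preds = np.asarray(preds)
    return preds.mean(), preds.std()

image = rng.normal(0.2, 1.0, size=(8, 8))
p_mean, p_unc = tta_predict(image)
```

Stratifying cases by such an uncertainty score is what allows the per-subgroup AUROC analysis reported above.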
Affiliation(s)
- Bowen Jing: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Advanced Imaging and Informatics for Radiation Therapy (AIRT) Lab, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Kai Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Advanced Imaging and Informatics for Radiation Therapy (AIRT) Lab, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Erich Schmitz: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Advanced Imaging and Informatics for Radiation Therapy (AIRT) Lab, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Shanshan Tang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Advanced Imaging and Informatics for Radiation Therapy (AIRT) Lab, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yunxiang Li: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- You Zhang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Advanced Imaging and Informatics for Radiation Therapy (AIRT) Lab, University of Texas Southwestern Medical Center, Dallas, Texas, USA
5. Bourdillon AT. Computer Vision-Radiomics & Pathognomics. Otolaryngol Clin North Am 2024;57:719-751. PMID: 38910065. DOI: 10.1016/j.otc.2024.05.003.
Abstract
The role of computer vision in extracting radiographic (radiomics) and histopathologic (pathognomics) features is an extension of the molecular biomarkers that have been foundational to our understanding across the spectrum of head and neck disorders. Within head and neck cancers especially, machine learning and deep learning applications have yielded advances in the characterization of tumor features, nodal features, and various outcomes. This review surveys the landscape of radiomic and pathognomic applications to inform future work addressing current gaps. Novel methodologies will be needed to integrate multidimensional data inputs for examining disease features, guiding prognosis comprehensively, and ultimately informing clinical management.
Affiliation(s)
- Alexandra T Bourdillon: Department of Otolaryngology-Head & Neck Surgery, University of California-San Francisco, San Francisco, CA 94115, USA
6. Nernekli K, Persad AR, Hori YS, Yener U, Celtikci E, Sahin MC, Sozer A, Sozer B, Park DJ, Chang SD. Automatic Segmentation of Vestibular Schwannomas: A Systematic Review. World Neurosurg 2024;188:35-44. PMID: 38685346. DOI: 10.1016/j.wneu.2024.04.145.
Abstract
BACKGROUND Vestibular schwannomas (VSs) are benign tumors often monitored over time, with measurement techniques for assessing growth rates subject to significant interobserver variability. Automatic segmentation of these tumors could provide a more reliable and efficient means of tracking their progression, especially given the irregular shape and growth patterns of VS. METHODS Various studies and segmentation techniques employing different convolutional neural network architectures and models, such as U-Net and convolutional-attention transformer segmentation, were analyzed. Models were evaluated based on their performance across diverse datasets, and challenges, including domain shift and data sharing, were scrutinized. RESULTS Automatic segmentation methods offer a promising alternative to conventional measurement techniques, with potential benefits in precision and efficiency. However, these methods are not without challenges, notably the "domain shift" that occurs when models trained on specific datasets underperform when applied to different datasets. Techniques such as domain adaptation, domain generalization, and data diversity were discussed as potential solutions. CONCLUSIONS Accurate measurement of VS growth is a complex process, with volumetric analysis currently appearing more reliable than linear measurements. Automatic segmentation, despite its challenges, offers a promising avenue for future investigation. Robust, well-generalized models could improve the efficiency of tracking tumor growth, thereby augmenting clinical decision-making. Further work is needed to develop more robust models, address domain shift, and enable secure data sharing for wider applicability.
Affiliation(s)
- Kerem Nernekli: Department of Radiology, Stanford University School of Medicine, Stanford, California, USA
- Amit R Persad: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Yusuke S Hori: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Ulas Yener: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Emrah Celtikci: Department of Neurosurgery, Gazi University, Ankara, Turkey
- Alperen Sozer: Department of Neurosurgery, Gazi University, Ankara, Turkey
- Batuhan Sozer: Department of Neurosurgery, Gazi University, Ankara, Turkey
- David J Park: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Steven D Chang: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
7. Chen M, Wang K, Wang J. Advancing Head and Neck Cancer Survival Prediction via Multi-Label Learning and Deep Model Interpretation. arXiv 2024:arXiv:2405.05488v1. PMID: 38764586. PMCID: PMC11100915.
Abstract
A comprehensive and reliable survival prediction model is of great importance in assisting the personalized management of Head and Neck Cancer (HNC) patients treated with curative Radiation Therapy (RT). In this work, we propose IMLSP, an Interpretable Multi-Label multi-modal deep Survival Prediction framework that predicts multiple HNC survival outcomes simultaneously and provides time-event-specific visual explanation of the deep prediction process. We adopt Multi-Task Logistic Regression (MTLR) layers to convert survival prediction from a regression problem into a multi-time-point classification task, and to enable prediction of multiple relevant survival outcomes at the same time. We also present Grad-Team, a Gradient-weighted Time-event activation mapping approach developed specifically for visual explanation of deep survival models, to generate patient-specific time-to-event activation maps. We evaluate our method on the publicly available RADCURE HNC dataset, where it outperforms the corresponding single-modal and single-label models on all survival outcomes. The generated activation maps show that the model focuses primarily on the tumor and nodal volumes when making its decision, and that the volume of interest varies between high- and low-risk patients. We demonstrate that the multi-label learning strategy can improve learning efficiency and prognostic performance, while the interpretable survival prediction model is promising for understanding the decision-making process of AI and facilitating personalized treatment. The project website can be found at https://github.com/***.
Affiliation(s)
- Meixu Chen: University of Texas Southwestern Medical Center, Dallas, TX
- Kai Wang: University of Texas Southwestern Medical Center, Dallas, TX; University of Maryland Medical Center, Baltimore, MD
- Jing Wang: University of Texas Southwestern Medical Center, Dallas, TX
8. Kocak B, Keles A, Akinci D'Antonoli T. Self-reporting with checklists in artificial intelligence research on medical imaging: a systematic review based on citations of CLAIM. Eur Radiol 2024;34:2805-2815. PMID: 37740080. DOI: 10.1007/s00330-023-10243-9.
Abstract
OBJECTIVE To evaluate the use of a well-known and widely adopted checklist, the Checklist for Artificial Intelligence in Medical Imaging (CLAIM), for self-reporting, through a systematic analysis of its citations. METHODS Google Scholar, Web of Science, and Scopus were used to search for citations (date, 29 April 2023). CLAIM's use for self-reporting with proof (i.e., a filled-out checklist) and other potential use cases were systematically assessed in research papers. Eligible papers were evaluated independently by two readers, with the help of automatic annotation. Item-by-item confirmation analysis on papers with checklist proof was subsequently performed. RESULTS A total of 391 unique citations were identified from the three databases. Of the 118 papers included in this study, 12 (10%) provided proof of a self-reported CLAIM checklist. More than half (70; 59%) only mentioned some sort of adherence to CLAIM without providing any proof in the form of a checklist. Approximately one-third (36; 31%) cited CLAIM for reasons unrelated to their reporting or methodological adherence. Overall, the claims on 57% to 93% of the items per publication were confirmed in the item-by-item analysis, with a mean and standard deviation of 81% and 10%, respectively. CONCLUSION Only a small proportion of the publications used CLAIM as a checklist and supplied filled-out documentation; moreover, the self-reported checklists may contain errors and should be approached cautiously. We hope that this systematic citation analysis will motivate the artificial intelligence community about the importance of proper self-reporting, and encourage researchers, journals, editors, and reviewers to take action to ensure the proper usage of checklists. CLINICAL RELEVANCE STATEMENT Only a small percentage of the publications used CLAIM for self-reporting with proof (i.e., a filled-out checklist). However, the filled-out checklist proofs may contain errors, e.g., false claims of adherence, and should be approached cautiously. These may indicate inappropriate usage of checklists and necessitate further action by authorities. KEY POINTS • Of 118 eligible papers, only 12 (10%) followed the CLAIM checklist for self-reporting with proof (i.e., a filled-out checklist). More than half (70; 59%) only mentioned some kind of adherence without providing any proof. • Overall, claims on 57% to 93% of the items were valid in the item-by-item confirmation analysis, with a mean and standard deviation of 81% and 10%, respectively. • Even with checklist proof, the items declared may contain errors and should be approached cautiously.
Affiliation(s)
- Burak Kocak: Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Istanbul, Turkey
- Ali Keles: Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Istanbul, Turkey
- Tugba Akinci D'Antonoli: Institute of Radiology and Nuclear Medicine, Cantonal Hospital Baselland, Liestal, Switzerland
9. Chen M, Wang K, Wang J. Vision Transformer-Based Multilabel Survival Prediction for Oropharynx Cancer After Radiation Therapy. Int J Radiat Oncol Biol Phys 2024;118:1123-1134. PMID: 37939732. PMCID: PMC11161220. DOI: 10.1016/j.ijrobp.2023.10.022.
Abstract
PURPOSE A reliable and comprehensive cancer prognosis model for oropharyngeal cancer (OPC) could better assist in personalizing treatment. In this work, we developed a vision transformer-based (ViT-based) multilabel model with multimodal input to learn complementary information from available pretreatment data and predict multiple associated endpoints of radiation therapy for patients with OPC. METHODS AND MATERIALS A publicly available data set of 512 patients with OPC was used for both model training and evaluation. Planning computed tomography images, primary gross tumor volume masks, and 16 clinical variables representing patient demographics, diagnosis, and treatment were used as inputs. To extract deep image features with global attention, we used a ViT module. Clinical variables were concatenated with the learned image features and fed into fully connected layers to incorporate cross-modality features. To learn the mapping between the features and the correlated survival outcomes, including overall survival, local failure-free survival, regional failure-free survival, and distant failure-free survival, we employed 4 multitask logistic regression layers. The proposed model was optimized by combining the multitask logistic regression negative log-likelihood losses of the different prediction targets. RESULTS We employed the C-index and area under the curve metrics to assess the performance of our model for time-to-event prediction and time-specific binary prediction, respectively. Our proposed model outperformed the corresponding single-modality and single-label models on all prediction labels, achieving C-indices of 0.773, 0.765, 0.776, and 0.773 for overall survival, local failure-free survival, regional failure-free survival, and distant failure-free survival, respectively. The area under the curve values ranged between 0.799 and 0.844 for the different tasks at different time points. Using the medians of the predicted risks as thresholds to separate high-risk and low-risk patient groups, we performed the log-rank test, which showed significant separation between the groups across the different event-free survival outcomes. CONCLUSION We developed the first model capable of predicting multiple labels for OPC simultaneously. Our model demonstrated better prognostic ability for all prediction targets compared with the corresponding single-modality and single-label models.
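The C-index reported above is the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival times. A minimal sketch (toy values, uncensored data for brevity; handling censoring requires restricting which pairs are comparable):

```python
# Concordance index (C-index) over uncensored toy data: a pair is
# concordant when the patient with the shorter survival time has the
# higher predicted risk; tied risks count as half.
import itertools

def c_index(times, risks):
    """Concordant pairs / comparable pairs; ties in time are skipped."""
    concordant, comparable = 0.0, 0
    for (t_i, r_i), (t_j, r_j) in itertools.combinations(zip(times, risks), 2):
        if t_i == t_j:
            continue
        comparable += 1
        if r_i == r_j:
            concordant += 0.5
        elif (t_i < t_j) == (r_i > r_j):
            concordant += 1.0
    return concordant / comparable

times = [5.0, 10.0, 15.0, 20.0]   # observed survival times
risks = [0.9, 0.7, 0.8, 0.2]      # predicted risk scores
score = c_index(times, risks)     # 5 of 6 pairs concordant
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported C-indices of ~0.77 in context.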
Affiliation(s)
- Meixu Chen: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas
- Kai Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas
- Jing Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas
10. Alsaleh H. The impact of artificial intelligence in the diagnosis and management of acoustic neuroma: A systematic review. Technol Health Care 2024;32:3801-3813. PMID: 39093085. PMCID: PMC11612958. DOI: 10.3233/thc-232043.
Abstract
BACKGROUND Schwann cell sheaths are the source of benign, slowly expanding tumours known as acoustic neuromas (AN). Diagnostic and treatment approaches for AN must be patient-centered, taking individual factors and preferences into account. OBJECTIVE The purpose of this study is to investigate how machine learning and artificial intelligence (AI) can revolutionise AN management and diagnostic procedures. METHODS A thorough systematic review of peer-reviewed material from public databases was carried out, covering publications on AN, AI, and deep learning up to December 2023. RESULTS Based on our analysis, AI models for volume estimation, segmentation, tumour type differentiation, and separation from healthy tissues have been developed successfully. Developments in computational biology imply that AI can be used effectively in a variety of fields, including quality-of-life evaluations, monitoring, robotic-assisted surgery, feature extraction, radiomics, image analysis, clinical decision support systems, and treatment planning. CONCLUSION Better AN diagnosis and treatment across a variety of imaging modalities requires the development of strong, flexible AI models that can handle heterogeneous imaging data. Subsequent investigations ought to concentrate on reproducing findings in order to standardise AI approaches, which could transform their use in medical environments.
Affiliation(s)
- Hadeel Alsaleh: Department of Communication Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
11. Patel RV, Groff KJ, Bi WL. Applications and Integration of Radiomics for Skull Base Oncology. Adv Exp Med Biol 2024;1462:285-305. PMID: 39523272. DOI: 10.1007/978-3-031-64892-2_17.
Abstract
Radiomics, a quantitative approach to extracting features from medical images, represents a new frontier in skull base oncology. Novel image analysis approaches have enabled us to capture patterns in images that are imperceptible to the human eye. This rich source of data can be combined with a range of clinical features and holds the potential to be a noninvasive source of biomarkers. Applications of radiomics in skull base pathologies have centered on three common tumor classes: meningiomas, sellar/parasellar tumors, and vestibular schwannomas. Radiomic investigations can be categorized into five domains: tumor detection/segmentation, classification between tumor types, tumor grading, detection of tumor features, and prognostication. Various computational architectures have been employed across these domains, with deep-learning methods becoming more common than classical machine learning. Across radiomic applications, contrast-enhanced T1-weighted MRI remains the most utilized sequence for model development. Efforts to standardize radiomic features and connect them to tumor biology have facilitated more clinically applicable radiomic models. Despite advances in model performance, several challenges continue to hinder translatability, including small sample sizes and model training on homogeneous single-institution data. To realize the potential of radiomics for skull base oncology, prospective, multi-institutional collaboration will be the cornerstone of a validated radiomic technology.
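As a concrete illustration of what "first-order" radiomic features are, the toy sketch below computes a few of them with NumPy over a region of interest. This is not the standardized (IBSI-style) pipeline used in the studies reviewed, and all names and parameters are illustrative:

```python
import numpy as np

def first_order_features(image, mask, bins=32):
    """Compute a few first-order radiomic features over a region of interest."""
    roi = image[mask > 0].astype(float)
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()          # discretized intensity probabilities
    p = p[p > 0]                   # drop empty bins before taking the log
    return {
        "mean": roi.mean(),
        "variance": roi.var(),
        "skewness": ((roi - roi.mean()) ** 3).mean() / (roi.std() ** 3 + 1e-12),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Synthetic image and cubic ROI standing in for an MRI volume and tumour mask.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 15.0, size=(32, 32, 32))
msk = np.zeros_like(img)
msk[8:24, 8:24, 8:24] = 1
feats = first_order_features(img, msk)
```

Real radiomic pipelines add shape and texture (e.g. gray-level co-occurrence) features and fix the discretization scheme, which is exactly the standardization effort the abstract refers to.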
Affiliation(s)
- Ruchit V Patel
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Karenna J Groff
- New York University Grossman School of Medicine, New York, NY, USA
- Wenya Linda Bi
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
12
Suresh K, Elkahwagi MA, Garcia A, Naples JG, Corrales CE, Crowson MG. Development of a Predictive Model for Persistent Dizziness Following Vestibular Schwannoma Surgery. Laryngoscope 2023; 133:3534-3539. [PMID: 37092316 PMCID: PMC10593906 DOI: 10.1002/lary.30708] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2023] [Revised: 04/03/2023] [Accepted: 04/11/2023] [Indexed: 04/25/2023]
Abstract
OBJECTIVE In an era of vestibular schwannoma (VS) surgery where functional preservation is increasingly emphasized, persistent postoperative dizziness is a relatively understudied functional outcome. The primary objective was to develop a predictive model to identify patients at risk of persistent postoperative dizziness after VS resection. METHODS Retrospective review of patients who underwent VS surgery at our institution with a minimum of 12 months of postoperative follow-up. Demographic, tumor-specific, preoperative, and immediate postoperative features were collected as predictors. The primary outcome was self-reported dizziness at 3-, 6-, and 12-month follow-up. Binary and multiclass machine learning classification models were developed using these features. RESULTS A total of 1,137 cases were used for modeling. The median age was 67 years, and 54% of patients were female. Median tumor size was 2 cm, and the most common approach was suboccipital (85%). Overall, 63% of patients did not report postoperative dizziness at any timepoint; dizziness was reported by 11% at 3-month follow-up, 9% at 6 months, and 17% at 12 months. Both binary and multiclass models achieved high performance, with AUCs of 0.89 and 0.86, respectively. Features important to model predictions were preoperative headache, need for physical therapy on discharge, vitamin D deficiency, and systemic comorbidities. CONCLUSION We demonstrate the feasibility of a machine learning approach to predict persistent dizziness following vestibular schwannoma surgery with high accuracy. These models could be used to provide quantitative estimates of risk, helping counsel patients on what to expect after surgery and to manage patients proactively in the postoperative setting. LEVEL OF EVIDENCE 4 Laryngoscope, 133:3534-3539, 2023.
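The pipeline this abstract describes, fitting a classifier on clinical predictors and scoring it by AUC on held-out data, can be sketched as follows on synthetic data. The feature names echo the predictors the authors report as important, but the data, model choice, and coefficients are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Hypothetical predictors (binary flags plus a comorbidity count).
X = np.column_stack([
    rng.integers(0, 2, n),  # preoperative headache
    rng.integers(0, 2, n),  # physical therapy needed on discharge
    rng.integers(0, 2, n),  # vitamin D deficiency
    rng.integers(0, 4, n),  # number of systemic comorbidities
])
# Synthetic outcome: persistent dizziness, loosely driven by the predictors.
logit = -2.0 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Hold out a test split and evaluate discrimination, as the study does via AUC.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

The study's reported AUCs of 0.89 and 0.86 come from its real cohort and models; the sketch only shows the shape of the workflow.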
Affiliation(s)
- Krish Suresh
- Department of Otolaryngology–Head & Neck Surgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Mohamed A. Elkahwagi
- Department of Otolaryngology–Head & Neck Surgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Division of Otolaryngology–Head & Neck Surgery, Mansoura University, Egypt
- Alejandro Garcia
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- James G. Naples
- Department of Otolaryngology–Head & Neck Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- C. Eduardo Corrales
- Department of Otolaryngology–Head & Neck Surgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Matthew G. Crowson
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
13
Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023; 12:6973. [PMID: 38002588 PMCID: PMC10672270 DOI: 10.3390/jcm12226973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2023] [Revised: 11/02/2023] [Accepted: 11/06/2023] [Indexed: 11/26/2023] Open
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging across its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative for augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, utilizing the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', and additionally searched the reference lists of the included articles by hand. Our search culminated in the retrieval of 121 related articles, which were subsequently subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored both for experienced specialists and for residents interested in the application of deep learning algorithms to imaging in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
- Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece; (M.A.); (N.S.M.)
- Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
- Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece; (M.A.); (N.S.M.)
- Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
- Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
- Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
- Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
14
Marinelli JP, Schnurman Z, Killeen DE, Nassiri AM, Hunter JB, Lees KA, Lohse CM, Roland JT, Golfinos JG, Kondziolka D, Link MJ, Carlson ML. Stratifying Risk of Future Growth Among Sporadic Vestibular Schwannomas. Otol Neurotol 2023; Publish Ahead of Print:00129492-990000000-00318. [PMID: 37367632 DOI: 10.1097/mao.0000000000003934] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/28/2023]
Abstract
OBJECTIVE In certain cases, clinicians may consider continued observation of a vestibular schwannoma after initial growth is detected. The aim of the current work was to determine if patients with growing sporadic vestibular schwannomas could be stratified by the likelihood of subsequent growth based on initial growth behavior. STUDY DESIGN Slice-by-slice volumetric tumor measurements from 3,505 serial magnetic resonance imaging studies were analyzed from 952 consecutively treated patients. SETTING Three tertiary-referral centers. PATIENTS Adults with sporadic vestibular schwannoma. INTERVENTIONS Wait-and-scan. MAIN OUTCOME MEASURES Composite end point of subsequent growth- or treatment-free survival rates, where growth is defined as an additional increase of at least 20% in tumor volume from the volume at the time of initial growth. RESULTS Among 405 patients who elected continued observation despite documented growth, stratification of volumetric growth rate into less than 25% (reference: n = 107), 25 to less than 50% (hazard ratio [HR], 1.39; p = 0.06; n = 96), 50 to less than 100% (HR, 1.71; p = 0.002; n = 112), and at least 100% (HR, 2.01; p < 0.001; n = 90) change per year predicted the likelihood of future growth or treatment. Subsequent growth- or treatment-free survival rates (95% confidence interval) at year 5 after detection of initial growth were 31% (21-44%) for those with less than 25% growth per year, 18% (10-32%) for those with 25 to less than 50%, 15% (9-26%) for those with 50 to less than 100%, and 6% (2-16%) for those with at least 100%. Neither patient age (p = 0.15) nor tumor volume at diagnosis (p = 0.95) significantly differed across stratification groups. CONCLUSIONS At the time of diagnosis, clinical features cannot consistently predict which tumors will ultimately display aggressive behavior. Stratification by volumetric growth rate at the time of initial growth yields a stepwise progression in the likelihood of subsequent growth. When considering continued observation after initial growth detection, almost 95% of patients whose tumors double in volume between diagnosis and the first detection of growth demonstrate further tumor growth or undergo treatment if observed to 5 years.
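The study's key definitions, the annualized percent change in volume and the endpoint of a further 20% increase beyond the volume at initial growth, can be written down directly. A minimal sketch with hypothetical helper names:

```python
def annualized_growth_pct(vol_initial_mm3, vol_current_mm3, years):
    """Volumetric growth rate as linear percent change per year."""
    return (vol_current_mm3 - vol_initial_mm3) / vol_initial_mm3 * 100.0 / years

def growth_stratum(pct_per_year):
    """Bin an annualized growth rate into the study's four strata."""
    if pct_per_year < 25:
        return "<25%/yr"
    if pct_per_year < 50:
        return "25-<50%/yr"
    if pct_per_year < 100:
        return "50-<100%/yr"
    return ">=100%/yr"

def meets_growth_endpoint(vol_at_initial_growth_mm3, vol_now_mm3):
    """Subsequent growth: at least a further 20% volume increase over the
    volume measured when growth was first detected."""
    return vol_now_mm3 >= 1.20 * vol_at_initial_growth_mm3

# Example: 300 -> 450 mm^3 over 1.5 years is 33.3%/yr, i.e. the 25-<50% stratum.
stratum = growth_stratum(annualized_growth_pct(300.0, 450.0, 1.5))
```

Whether the study computed the rate linearly as above or by another convention is not stated in the abstract, so the formula here is an assumption for illustration.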
Affiliation(s)
- John P Marinelli
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Zane Schnurman
- Department of Neurosurgery, NYU Langone Medical Center, New York, New York
- Daniel E Killeen
- Department of Otolaryngology-Head and Neck Surgery, University Hospitals Cleveland Medical Center, Case Western Reserve University Medical School, Cleveland, Ohio
- Ashley M Nassiri
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Jacob B Hunter
- Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas
- Katherine A Lees
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Christine M Lohse
- Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota
- J Thomas Roland
- Department of Otolaryngology-Head and Neck Surgery, NYU Langone Health, New York, New York
- John G Golfinos
- Department of Neurosurgery, NYU Langone Medical Center, New York, New York
- Douglas Kondziolka
- Department of Neurosurgery, NYU Langone Medical Center, New York, New York