1
Xu X, Xi L, Zhu J, Feng C, Zhou P, Liu K, Shang Z, Shao Z. Intelligent Diagnosis of Cervical Lymph Node Metastasis Using a CNN Model. J Dent Res 2025:220345251322508. PMID: 40271993. DOI: 10.1177/00220345251322508.
Abstract
Lymph node (LN) metastasis is a prevalent cause of recurrence in oral squamous cell carcinoma (OSCC). However, accurately identifying metastatic LNs (LNs+) remains challenging. This prospective clinical study aims to test the effectiveness of our convolutional neural network (CNN) model for identifying OSCC cervical LN+ on contrast-enhanced computed tomography (CECT) in clinical practice. A CNN model was developed and trained using a dataset of 8,380 CECT images from previous OSCC patients. It was then prospectively validated on 17,777 preoperative CECT images from 354 OSCC patients between October 17, 2023, and August 31, 2024. The model's predicted LN results were provided to the surgical team without influencing surgical or treatment plans. During surgery, the predicted LN+ were identified and sent for separate pathological examination. The accuracy of the model's predictions was compared with that of human experts and verified against pathology reports. The capacity of the model to assist radiologists in LN+ diagnosis was also assessed. The CNN model was trained over 40 epochs and successfully validated after each epoch. Compared with human experts (2 radiologists, 2 surgeons, and 2 students), the CNN model achieved higher sensitivity (81.89% vs. 81.48%, 46.91%, 50.62%), specificity (99.31% vs. 99.15%, 98.36%, 96.27%), LN+ accuracy (76.19% vs. 75.43%, P = 0.854; 40.64%, P < 0.001; 37.44%, P < 0.001), and clinical accuracy (86.16% vs. 83%, 61%, 56%). With the model's assistance, the radiologists surpassed both their own previous predictions made without the model's support and the model's performance alone. The CNN model demonstrated an accuracy comparable to that of radiologists in identifying, locating, and predicting cervical LN+ in OSCC patients. Furthermore, the model has the potential to assist radiologists in making more accurate diagnoses.
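For orientation, the sensitivity, specificity, and accuracy figures quoted in this abstract follow directly from per-node confusion-matrix counts. The minimal sketch below is illustrative only (it is not the authors' code; the function and variable names are assumed) and shows how such metrics are typically computed.

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Compute sensitivity, specificity, and accuracy from binary labels.

    y_true / y_pred: arrays of 0 (non-metastatic LN) and 1 (metastatic LN+).
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # LN+ correctly flagged
    tn = np.sum(~y_true & ~y_pred)  # negative nodes correctly cleared
    fp = np.sum(~y_true & y_pred)   # false alarms
    fn = np.sum(y_true & ~y_pred)   # missed metastases
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Toy example: 10 nodes, 4 truly metastatic
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(diagnostic_metrics(y_true, y_pred))  # (0.75, 0.833..., 0.8)
```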
Affiliation(s)
- X Xu
- The State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Day Surgery Center, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- L Xi
- School of Computer Science, Wuhan University, Wuhan, China
- J Zhu
- The State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Geriatric Dentistry, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- C Feng
- The State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- P Zhou
- Department of Radiology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- K Liu
- The State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Oral and Maxillofacial Head Neck Surgery, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Z Shang
- The State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Z Shao
- The State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Day Surgery Center, School & Hospital of Stomatology, Wuhan University, Wuhan, China
2
Warin K, Limprasert W, Paipongna T, Chaowchuen S, Vicharueang S. Deep convolutional neural network for automatic segmentation and classification of jaw tumors in contrast-enhanced computed tomography images. Int J Oral Maxillofac Surg 2025; 54:374-382. PMID: 39414518. DOI: 10.1016/j.ijom.2024.10.004.
Abstract
The purpose of this study was to evaluate the performance of convolutional neural network (CNN)-based image segmentation models for segmentation and classification of benign and malignant jaw tumors in contrast-enhanced computed tomography (CT) images. A dataset comprising 3416 CT images (1163 showing benign jaw tumors, 1253 showing malignant jaw tumors, and 1000 without pathological lesions) was obtained retrospectively from a cancer hospital and two regional hospitals in Thailand; the images were from 150 patients presenting with jaw tumors between 2016 and 2020. U-Net and Mask R-CNN image segmentation models were adopted. U-Net and Mask R-CNN were trained to distinguish between benign and malignant jaw tumors and to segment jaw tumors to identify their boundaries in CT images. The performance of each model in segmenting the jaw tumors in the CT images was evaluated on a test dataset. All models yielded high accuracy, with a Dice coefficient of 0.90-0.98 and Jaccard index of 0.82-0.97 for segmentation, and an area under the precision-recall curve of 0.63-0.85 for the classification of benign and malignant jaw tumors. In conclusion, CNN-based segmentation models demonstrated high potential for automated segmentation and classification of jaw tumors in contrast-enhanced CT images.
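The Dice coefficient and Jaccard index reported for these segmentation models are standard overlap measures between a predicted mask and a ground-truth mask. A minimal sketch, assuming simple NumPy binary masks rather than the authors' pipeline:

```python
import numpy as np

def dice_and_jaccard(pred_mask, gt_mask):
    """Overlap metrics between a predicted and a ground-truth binary mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum())  # 2|A∩B| / (|A|+|B|)
    jaccard = intersection / union                       # |A∩B| / |A∪B|
    return dice, jaccard

# Toy 4x4 masks: predicted tumor region vs. annotated region
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_and_jaccard(pred, gt))  # (0.857..., 0.75)
```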
Affiliation(s)
- K Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand.
- W Limprasert
- College of Interdisciplinary Studies, Thammasat University, Pathum Thani, Thailand.
- T Paipongna
- Sakon Nakhon Hospital, Mueang Sakon Nakhon, Sakon Nakhon, Thailand.
- S Chaowchuen
- Udonthani Cancer Hospital, Mueang Udon Thani, Udon Thani, Thailand.
- S Vicharueang
- StoreMesh, Thailand Science Park, Pathum Thani, Thailand.
3
Rusu-Both R, Socaci MC, Palagos AI, Buzoianu C, Avram C, Vălean H, Chira RI. A Deep Learning-Based Detection and Segmentation System for Multimodal Ultrasound Images in the Evaluation of Superficial Lymph Node Metastases. J Clin Med 2025; 14:1828. PMID: 40142635. PMCID: PMC11942978. DOI: 10.3390/jcm14061828.
Abstract
Background/Objectives: Even with today's advancements, cancer still represents a major cause of mortality worldwide. One important aspect of cancer progression that has a big impact on diagnosis, prognosis, and treatment plans is accurate lymph node metastasis evaluation. However, regardless of the imaging method used, this process is challenging and time-consuming. This research aimed to develop and validate an automatic detection and segmentation system for superficial lymph node evaluation based on multimodal ultrasound images, such as traditional B-mode, Doppler, and elastography, using deep learning techniques. Methods: The suggested approach incorporated a Mask R-CNN architecture designed specifically for the detection and segmentation of lymph nodes. The pipeline first involved noise reduction preprocessing, after which morphological and textural feature segmentation and analysis were performed. Vascularity and stiffness parameters were further examined in Doppler and elastography images. Metrics, including accuracy, mean average precision (mAP), and the Dice coefficient, were used to assess the system's performance during training and validation on a carefully selected dataset of annotated ultrasound images. Results: During testing, the Mask R-CNN model showed an accuracy of 92.56%, a COCO AP score of 60.7, and a validation score of 64. Furthermore, to improve diagnostic capabilities, Doppler and elastography data were added. This allowed for improved performance across several types of ultrasound images and provided thorough insights into the morphology, vascularity, and stiffness of lymph nodes. Conclusions: This paper offers a novel use of deep learning for automated lymph node assessment in ultrasound imaging. This system offers a dependable tool for doctors to evaluate lymph node metastases efficiently by fusing sophisticated segmentation techniques with multimodal image processing. It has the potential to greatly enhance patient outcomes and diagnostic accuracy.
Affiliation(s)
- Roxana Rusu-Both
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Adrian-Ionuț Palagos
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- AIMed Soft Solution S.R.L., 400505 Cluj-Napoca, Romania
- Corina Buzoianu
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Camelia Avram
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Honoriu Vălean
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Romeo-Ioan Chira
- Department of Internal Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
- Gastroenterology Department, Emergency Clinical County Hospital Cluj-Napoca, 400347 Cluj-Napoca, Romania
4
Sillmann YM, Monteiro JLGC, Eber P, Baggio AMP, Peacock ZS, Guastaldi FPS. Empowering surgeons: will artificial intelligence change oral and maxillofacial surgery? Int J Oral Maxillofac Surg 2025; 54:179-190. PMID: 39341693. DOI: 10.1016/j.ijom.2024.09.004.
Abstract
Artificial Intelligence (AI) can enhance the precision and efficiency of diagnostics and treatments in oral and maxillofacial surgery (OMS), leveraging advanced computational technologies to mimic intelligent human behaviors. The study aimed to examine the current state of AI in the OMS literature and highlight the urgent need for further research to optimize AI integration in clinical practice and enhance patient outcomes. A scoping review of journals related to OMS was conducted, focusing on OMS-related AI applications. PubMed was searched using the terms "artificial intelligence", "convolutional networks", "neural networks", "machine learning", "deep learning", and "automation". Ninety articles were analyzed and classified into the following subcategories: pathology, orthognathic surgery, facial trauma, temporomandibular joint disorders, dentoalveolar surgery, dental implants, craniofacial deformities, reconstructive surgery, aesthetic surgery, and complications. There was a significant increase in AI-related studies published after 2019, which accounted for 95.6% of the total reviewed. This surge in research reflects growing interest in AI and its potential in OMS. Among the studies, the primary uses of AI in OMS were in pathology (e.g., lesion detection, lymph node metastasis detection) and orthognathic surgery (e.g., surgical planning through facial bone segmentation). The studies predominantly employed convolutional neural networks (CNNs) and artificial neural networks (ANNs) for classification tasks, potentially improving clinical outcomes.
Affiliation(s)
- Y M Sillmann
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- J L G C Monteiro
- Wellman Center for Photomedicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- P Eber
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- A M P Baggio
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- Z S Peacock
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- F P S Guastaldi
- Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
5
Alapati R, Renslo B, Wagoner SF, Karadaghy O, Serpedin A, Kim YE, Feucht M, Wang N, Ramesh U, Bon Nieves A, Lawrence A, Virgen C, Sawaf T, Rameau A, Bur AM. Assessing the Reporting Quality of Machine Learning Algorithms in Head and Neck Oncology. Laryngoscope 2025; 135:687-694. PMID: 39258420. DOI: 10.1002/lary.31756.
Abstract
OBJECTIVE This study aimed to assess reporting quality of machine learning (ML) algorithms in the head and neck oncology literature using the TRIPOD-AI criteria. DATA SOURCES A comprehensive search was conducted using PubMed, Scopus, Embase, and Cochrane Database of Systematic Reviews, incorporating search terms related to "artificial intelligence," "machine learning," "deep learning," "neural network," and various head and neck neoplasms. REVIEW METHODS Two independent reviewers analyzed each published study for adherence to the 65-point TRIPOD-AI criteria. Items were classified as "Yes," "No," or "NA" for each publication. The proportion of studies satisfying each TRIPOD-AI criterion was calculated. Additionally, the evidence level for each study was evaluated independently by two reviewers using the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence. Discrepancies were reconciled through discussion until consensus was reached. RESULTS The study highlights the need for improvements in ML algorithm reporting in head and neck oncology. This includes more comprehensive descriptions of datasets, standardization of model performance reporting, and increased sharing of ML models, data, and code with the research community. Adoption of TRIPOD-AI is necessary for achieving standardized ML research reporting in head and neck oncology. CONCLUSION Current reporting of ML algorithms hinders clinical application, reproducibility, and understanding of the data used for model training. To overcome these limitations and improve patient and clinician trust, ML developers should provide open access to models, code, and source data, fostering iterative progress through community critique, thus enhancing model accuracy and mitigating biases. LEVEL OF EVIDENCE NA Laryngoscope, 135:687-694, 2025.
Affiliation(s)
- Rahul Alapati
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Bryan Renslo
- Department of Otolaryngology-Head & Neck Surgery, Thomas Jefferson University, Philadelphia, Pennsylvania, U.S.A
- Sarah F Wagoner
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Omar Karadaghy
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Aisha Serpedin
- Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A
- Yeo Eun Kim
- Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A
- Maria Feucht
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Naomi Wang
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Uma Ramesh
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Antonio Bon Nieves
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Amelia Lawrence
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Celina Virgen
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Tuleen Sawaf
- Department of Otolaryngology-Head & Neck Surgery, University of Maryland, Baltimore, Maryland, U.S.A
- Anaïs Rameau
- Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A
- Andrés M Bur
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
6
Al Hasan MM, Ghazimoghadam S, Tunlayadechanont P, Mostafiz MT, Gupta M, Roy A, Peters K, Hochhegger B, Mancuso A, Asadizanjani N, Forghani R. Automated Segmentation of Lymph Nodes on Neck CT Scans Using Deep Learning. J Imaging Inform Med 2024; 37:2955-2966. PMID: 38937342. PMCID: PMC11612088. DOI: 10.1007/s10278-024-01114-w.
Abstract
Early and accurate detection of cervical lymph nodes is essential for the optimal management and staging of patients with head and neck malignancies. Pilot studies have demonstrated the potential for radiomic and artificial intelligence (AI) approaches in increasing diagnostic accuracy for the detection and classification of lymph nodes, but implementation of many of these approaches in real-world clinical settings would necessitate an automated lymph node segmentation pipeline as a first step. In this study, we aim to develop a non-invasive deep learning (DL) algorithm for detecting and automatically segmenting cervical lymph nodes in 25,119 CT slices from 221 normal neck contrast-enhanced CT scans from patients without head and neck cancer. We focused on the most challenging task of segmentation of small lymph nodes, evaluated multiple architectures, and employed U-Net and our adapted spatial context network to detect and segment small lymph nodes measuring 5-10 mm. The developed algorithm achieved a Dice score of 0.8084, indicating its effectiveness in detecting and segmenting cervical lymph nodes despite their small size. A segmentation framework successful in this task could represent an essential initial block for future algorithms aiming to evaluate small objects such as lymph nodes in different body parts, including small lymph nodes looking normal to the naked human eye but harboring early nodal metastases.
Affiliation(s)
- Md Mahfuz Al Hasan
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Saba Ghazimoghadam
- Augmented Intelligence and Precision Health Laboratory, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Padcha Tunlayadechanont
- Augmented Intelligence and Precision Health Laboratory, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Mohammed Tahsin Mostafiz
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Manas Gupta
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Antika Roy
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Keith Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Navid Asadizanjani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Division of Medical Physics, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Neurology, Division of Movement Disorders, University of Florida College of Medicine, Gainesville, FL, USA
- Augmented Intelligence and Precision Health Laboratory, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
7
Reinders FC, Savenije MH, de Ridder M, Maspero M, Doornaert PA, Terhaard CH, Raaijmakers CP, Zakeri K, Lee NY, Aliotta E, Rangnekar A, Veeraraghavan H, Philippens ME. Automatic segmentation for magnetic resonance imaging guided individual elective lymph node irradiation in head and neck cancer patients. Phys Imaging Radiat Oncol 2024; 32:100655. PMID: 39502445. PMCID: PMC11536060. DOI: 10.1016/j.phro.2024.100655.
Abstract
Background and purpose: In head and neck squamous cell carcinoma (HNSCC) patients, the radiation dose to nearby organs at risk can be reduced by restricting elective neck irradiation from lymph node levels to individual lymph nodes. However, manual delineation of every individual lymph node is time-consuming and error-prone. Therefore, automatic magnetic resonance imaging (MRI) segmentation of individual lymph nodes was developed and tested using a convolutional neural network (CNN). Materials and methods: In 50 HNSCC patients (UMC-Utrecht), individual lymph nodes located in lymph node levels Ib-II-III-IV-V were manually segmented on MRI by consensus of two experts, obtaining ground truth segmentations. A 3D CNN (nnU-Net) was trained on 40 patients and tested on 10. Evaluation metrics were the Dice similarity coefficient (DSC), recall, precision, and F1-score. The segmentations of the CNN were compared with those of two observers. Transfer learning was used with 20 additional patients to re-train and test the CNN in another medical center. Results: nnU-Net produced automatic segmentations of elective lymph nodes with a median DSC of 0.72, recall of 0.76, precision of 0.78, and F1-score of 0.78. The CNN had higher recall compared with both observers (p = 0.002). No difference in evaluation scores of the networks between the two medical centers was found after re-training with 5 or 10 patients. Conclusion: nnU-Net was able to automatically segment individual lymph nodes on MRI. The detection rate of lymph nodes using nnU-Net was higher than that of manual segmentation. Re-training nnU-Net was required to successfully transfer the network to the other medical center.
Affiliation(s)
- Mark H.F. Savenije
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Computational Imaging Group for MR Therapy and Diagnostics, Cancer and Imaging Division, University Medical Center Utrecht, Utrecht, the Netherlands
- Mischa de Ridder
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Matteo Maspero
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Computational Imaging Group for MR Therapy and Diagnostics, Cancer and Imaging Division, University Medical Center Utrecht, Utrecht, the Netherlands
- Chris H.J. Terhaard
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Kaveh Zakeri
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Nancy Y. Lee
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Eric Aliotta
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Aneesh Rangnekar
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Harini Veeraraghavan
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
8
Deng C, Hu J, Tang P, Xu T, He L, Zeng Z, Sheng J. Application of CT and MRI images based on artificial intelligence to predict lymph node metastases in patients with oral squamous cell carcinoma: a subgroup meta-analysis. Front Oncol 2024; 14:1395159. PMID: 38957322. PMCID: PMC11217320. DOI: 10.3389/fonc.2024.1395159.
Abstract
Background: The performance of artificial intelligence (AI) in the prediction of lymph node (LN) metastasis in patients with oral squamous cell carcinoma (OSCC) has not been quantitatively evaluated. The purpose of this study was to conduct a systematic review and meta-analysis of published data on the diagnostic performance of CT and MRI based on AI algorithms for predicting LN metastases in patients with OSCC. Methods: We searched the Embase, PubMed (Medline), Web of Science, and Cochrane databases for studies on the use of AI in predicting LN metastasis in OSCC. Binary diagnostic accuracy data were extracted to obtain the outcomes of interest, namely the area under the curve (AUC), sensitivity, and specificity, and the diagnostic performance of AI was compared with that of radiologists. Subgroup analyses were performed with regard to different types of AI algorithms and imaging modalities. Results: Fourteen eligible studies were included in the meta-analysis. The AUC, sensitivity, and specificity of the AI models for the diagnosis of LN metastases were 0.92 (95% CI 0.89-0.94), 0.79 (95% CI 0.72-0.85), and 0.90 (95% CI 0.86-0.93), respectively. Promising diagnostic performance was observed in the subgroup analyses based on algorithm type [machine learning (ML) or deep learning (DL)] and imaging modality (CT vs. MRI). The pooled diagnostic performance of AI was significantly better than that of experienced radiologists. Discussion: In conclusion, AI based on CT and MRI has good diagnostic accuracy in predicting LN metastasis in patients with OSCC and thus has the potential for clinical application. Systematic Review Registration: PROSPERO (No. CRD42024506159), https://www.crd.york.ac.uk/PROSPERO/#recordDetails.
Affiliation(s)
- Jianfeng Sheng
- Department of Thyroid, Head, Neck and Maxillofacial Surgery, the Third Hospital of Mianyang & Sichuan Mental Health Center, Mianyang, Sichuan, China
9
Warin K, Suebnukarn S. Deep learning in oral cancer - a systematic review. BMC Oral Health 2024; 24:212. PMID: 38341571. PMCID: PMC10859022. DOI: 10.1186/s12903-024-03993-5.
Abstract
BACKGROUND: Oral cancer is a life-threatening malignancy, which affects the survival rate and quality of life of patients. The aim of this systematic review was to review deep learning (DL) studies in the diagnosis and prognostic prediction of oral cancer. METHODS: This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies published from January 2000 to June 2023. RESULTS: Fifty-four studies qualified for inclusion, comprising diagnostic (n = 51) and prognostic prediction (n = 3) studies. Thirteen studies showed a low risk of bias in all domains, and 40 studies showed low concern regarding applicability. The reported performance of the DL models included an accuracy of 85.0-100% for classification, an F1-score of 79.31-89.0% for object detection, a Dice coefficient of 76.0-96.3% for segmentation, and a concordance index of 0.78-0.95 for prognostic prediction. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for the classification studies. CONCLUSIONS: The number of DL studies in oral cancer is increasing, with a diverse range of architectures. The reported accuracy showed promising DL performance in studies of oral cancer and appeared to have potential utility in improving informed clinical decision-making in oral cancer.
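The pooled diagnostic odds ratio quoted above follows the standard definition for a 2x2 diagnostic table; the expression below simply restates that conventional formula and is not reproduced from the review itself.

```latex
% Diagnostic odds ratio (DOR) from true/false positives and negatives
\mathrm{DOR}
  = \frac{TP/FN}{FP/TN}
  = \frac{TP \cdot TN}{FP \cdot FN}
  = \frac{\text{sensitivity}/(1-\text{sensitivity})}{(1-\text{specificity})/\text{specificity}}
```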
Affiliation(s)
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand.
10
Holland L, Hernandez Torres SI, Snider EJ. Using AI Segmentation Models to Improve Foreign Body Detection and Triage from Ultrasound Images. Bioengineering (Basel) 2024; 11:128. PMID: 38391614. PMCID: PMC10886314. DOI: 10.3390/bioengineering11020128.
Abstract
Medical imaging can be a critical tool for triaging casualties in trauma situations. In remote or military medicine scenarios, triage is essential for identifying how to use limited resources or prioritize evacuation for the most serious cases. Ultrasound imaging, while portable and often available near the point of injury, can only be used for triage if images are properly acquired, interpreted, and objectively triage scored. Here, we detail how AI segmentation models can be used for improving image interpretation and objective triage evaluation for a medical application focused on foreign bodies embedded in tissues at variable distances from critical neurovascular features. Ultrasound images previously collected in a tissue phantom with or without neurovascular features were labeled with ground truth masks. These image sets were used to train two different segmentation AI frameworks: YOLOv7 and U-Net segmentation models. Overall, both approaches were successful in identifying shrapnel in the image set, with U-Net outperforming YOLOv7 for single-class segmentation. Both segmentation models were also evaluated with a more complex image set containing shrapnel, artery, vein, and nerve features. YOLOv7 obtained higher precision scores across multiple classes whereas U-Net achieved higher recall scores. Using each AI model, a triage distance metric was adapted to measure the proximity of shrapnel to the nearest neurovascular feature, with U-Net more closely mirroring the triage distances measured from ground truth labels. Overall, the segmentation AI models were successful in detecting shrapnel in ultrasound images and could allow for improved injury triage in emergency medicine scenarios.
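The triage distance metric described here measures how close segmented shrapnel lies to the nearest neurovascular feature. A hedged sketch of one common way to compute such a distance from two binary masks, using a Euclidean distance transform (the function name, pixel spacing, and toy geometry are assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_feature_distance(shrapnel_mask, neurovascular_mask, pixel_spacing_mm=1.0):
    """Shortest distance (in mm) from any shrapnel pixel to the nearest
    neurovascular pixel, given two binary segmentation masks."""
    shrapnel = shrapnel_mask.astype(bool)
    feature = neurovascular_mask.astype(bool)
    if not shrapnel.any() or not feature.any():
        return np.inf  # nothing detected; no distance defined
    # Distance from every pixel to the nearest neurovascular pixel
    dist_to_feature = distance_transform_edt(~feature)
    # Restrict to pixels that the segmentation model labeled as shrapnel
    return float(dist_to_feature[shrapnel].min() * pixel_spacing_mm)

# Toy example: shrapnel 4 pixels away from a vessel, 0.2 mm pixels
vessel = np.zeros((16, 16), dtype=bool); vessel[8, 2:5] = True
shrapnel = np.zeros((16, 16), dtype=bool); shrapnel[8, 8] = True
print(min_feature_distance(shrapnel, vessel, pixel_spacing_mm=0.2))  # 0.8
```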
Affiliation(s)
- Eric J. Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
11
Eida S, Fukuda M, Katayama I, Takagi Y, Sasaki M, Mori H, Kawakami M, Nishino T, Ariji Y, Sumi M. Metastatic Lymph Node Detection on Ultrasound Images Using YOLOv7 in Patients with Head and Neck Squamous Cell Carcinoma. Cancers (Basel) 2024; 16:274. PMID: 38254765. PMCID: PMC10813890. DOI: 10.3390/cancers16020274.
Abstract
Ultrasonography is the preferred modality for detailed evaluation of enlarged lymph nodes (LNs) identified on computed tomography and/or magnetic resonance imaging, owing to its high spatial resolution. However, the diagnostic performance of ultrasonography depends on the examiner's expertise. To support the ultrasonographic diagnosis, we developed YOLOv7-based deep learning models for metastatic LN detection on ultrasonography and compared their detection performance with that of highly experienced radiologists and less experienced residents. We enrolled 462 B- and D-mode ultrasound images of 261 metastatic and 279 non-metastatic histopathologically confirmed LNs from 126 patients with head and neck squamous cell carcinoma. The YOLOv7-based B- and D-mode models were optimized using B- and D-mode training and validation images and their detection performance for metastatic LNs was evaluated using B- and D-mode testing images, respectively. The D-mode model's performance was comparable to that of radiologists and superior to that of residents' reading of D-mode images, whereas the B-mode model's performance was higher than that of residents but lower than that of radiologists on B-mode images. Thus, YOLOv7-based B- and D-mode models can assist less experienced residents in ultrasonographic diagnoses. The D-mode model could raise the diagnostic performance of residents to the same level as experienced radiologists.
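Detection performance for models such as the YOLOv7 networks described above is commonly scored by matching predicted boxes to annotated lymph nodes at an intersection-over-union (IoU) threshold. The following sketch illustrates that generic evaluation step only; it is not the authors' protocol, and the threshold and example boxes are assumed values.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_detections(pred_boxes, gt_boxes, iou_thr=0.5):
    """Greedy matching: each ground-truth node may be claimed by one detection."""
    unmatched_gt = list(gt_boxes)
    tp = 0
    for p in pred_boxes:
        best = max(unmatched_gt, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= iou_thr:
            unmatched_gt.remove(best)
            tp += 1
    fp = len(pred_boxes) - tp   # detections with no matching node
    fn = len(unmatched_gt)      # annotated nodes that were missed
    return tp, fp, fn

# Toy example: two annotated nodes, two detections (one good, one spurious)
gt = [(10, 10, 30, 30), (50, 50, 70, 70)]
pred = [(12, 11, 31, 29), (80, 80, 95, 95)]
print(match_detections(pred, gt))  # (1, 1, 1)
```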
Affiliation(s)
- Sato Eida
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Motoki Fukuda
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan
- Ikuo Katayama
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Yukinori Takagi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Miho Sasaki
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Hiroki Mori
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Maki Kawakami
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Tatsuyoshi Nishino
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan
- Misa Sumi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
12
Can S, Türk Ö, Ayral M, Kozan G, Arı H, Akdağ M, Baylan MY. Can deep learning replace histopathological examinations in the differential diagnosis of cervical lymphadenopathy? Eur Arch Otorhinolaryngol 2024; 281:359-367. PMID: 37578497. DOI: 10.1007/s00405-023-08181-9.
Abstract
INTRODUCTION: We aimed to develop a diagnostic deep learning model using contrast-enhanced CT images and to investigate whether cervical lymphadenopathies can be diagnosed with these deep learning methods without radiologist interpretations and histopathological examinations. MATERIALS AND METHODS: A total of 400 patients who underwent surgery for lymphadenopathy in the neck between 2010 and 2022 were retrospectively analyzed. They were examined in four groups of 100 patients: the granulomatous diseases group, the lymphoma group, the squamous cell tumor group, and the reactive hyperplasia group. The diagnoses of the patients were confirmed histopathologically. Two CT images from all the patients in each group were used in the study. The CT images were classified using the ResNet50, NASNetMobile, and DenseNet121 architectures. RESULTS: The classification accuracies obtained with ResNet50, DenseNet121, and NASNetMobile were 92.5%, 90.62%, and 87.5%, respectively. CONCLUSION: Deep learning is a useful diagnostic tool in diagnosing cervical lymphadenopathy. In the near future, many diseases could be diagnosed with deep learning models without radiologist interpretations and invasive examinations such as histopathological examinations. However, further studies with much larger case series are needed to develop accurate deep-learning models.
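The study classifies CT images with ImageNet-pretrained backbones such as ResNet50. A minimal transfer-learning sketch in Keras of that general approach (the input size, optimizer, and training settings are assumptions, not the study's reported configuration):

```python
import tensorflow as tf

NUM_CLASSES = 4  # granulomatous disease, lymphoma, squamous cell tumor, reactive hyperplasia

def build_classifier(input_shape=(224, 224, 3)):
    # ImageNet-pretrained ResNet50 backbone, frozen; small classification head on top
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    backbone.trainable = False
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.applications.resnet50.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets of labeled CT slices
```

Swapping the backbone for DenseNet121 or NASNetMobile follows the same pattern via their respective `tf.keras.applications` constructors.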
Affiliation(s)
- Sermin Can
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey.
- Ömer Türk
- Department of Computer Programming, Mardin Artuklu University Vocational School, Mardin, Turkey
- Muhammed Ayral
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Günay Kozan
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Hamza Arı
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Mehmet Akdağ
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Müzeyyen Yıldırım Baylan
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
13
Rahim A, Khatoon R, Khan TA, Syed K, Khan I, Khalid T, Khalid B. Artificial intelligence-powered dentistry: Probing the potential, challenges, and ethicality of artificial intelligence in dentistry. Digit Health 2024; 10:20552076241291345. PMID: 39539720. PMCID: PMC11558748. DOI: 10.1177/20552076241291345.
Abstract
Introduction: Improvements in healthcare closely track technological advancement. In the recent era of automation, the consolidation of artificial intelligence (AI) in dentistry has transformed oral healthcare from a hardware-centric to a software-centric approach, leading to enhanced efficiency and improved educational and clinical outcomes. Objectives: The aim of this narrative overview is to summarize the major events and innovations that led to modern-day AI and dentistry, and the applicability of the former in the latter. This article also prompts oral healthcare workers to adopt a responsible and optimal approach for the effective incorporation of AI technology into their practice to promote oral health by exploring the potentials, constraints, and ethical considerations of AI in dentistry. Methods: A comprehensive search of the white and grey literature was carried out to collect and assess the data on AI, its use in dentistry, and the associated challenges and ethical concerns. Results: AI in dentistry is still in its evolving phase, with wide-ranging applications relevant to risk prediction, diagnosis, decision-making, prognosis, tailored treatment plans, patient management, and academia, as well as associated challenges and ethical concerns in its implementation. Conclusion: Rapid advancements in AI have resulted in transformations and promising outcomes across all domains of dentistry. In the future, AI may be capable of executing a multitude of tasks in oral healthcare at, or beyond, the level of human ability. However, AI will be of significant benefit to oral health only if it is used responsibly, ethically, and universally.
Affiliation(s)
- Abid Rahim
- Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Rabia Khatoon
- Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Tahir Ali Khan
- Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Kawish Syed
- Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Ibrahim Khan
- Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Tamsal Khalid
- Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Balaj Khalid
- Syed Babar Ali School of Science and Engineering, Lahore University of Management Sciences, Lahore, Pakistan
14
Wang W, Liu Y, Wu J. Early diagnosis of oral cancer using a hybrid arrangement of deep belief network and combined group teaching algorithm. Sci Rep 2023; 13:22073. PMID: 38086888. PMCID: PMC10716144. DOI: 10.1038/s41598-023-49438-x.
Abstract
Oral cancer can occur in different parts of the mouth, including the lips, palate, gums, and inside the cheeks. If not treated in time, it can be life-threatening. Notably, computer-aided diagnosis (CAD) systems can be helpful for early detection and treatment of this disease. In this study, a new deep learning-based methodology is proposed for optimal oral cancer diagnosis from images. In this method, after some preprocessing steps, a new deep belief network (DBN) is proposed as the main part of the diagnosis system. The main contribution of the proposed DBN is its combination with a developed version of a metaheuristic technique, known as the Combined Group Teaching Optimization algorithm, to provide an efficient diagnosis system. The presented method was then applied to the "Oral Cancer (Lips and Tongue) images" dataset, and the results were compared with those of other methods, including ANN, Bayesian, CNN, GSO-NN, and end-to-end NN approaches, to show the efficacy of the technique. The results showed that the DBN-CGTO method achieved a precision of 97.71%, a sensitivity of 92.37%, a Matthews correlation coefficient of 94.65%, and an F1-score of 94.65%, the highest among the compared methods, indicating that it accurately classifies positive samples while maintaining correct classification of negative samples.
Affiliation(s)
- Wenjing Wang
- Department of Stomatology, The First Affiliated Hospital of Yangtze University, Jingzhou, 434000, Hubei, China
- Yi Liu
- Department of Stomatology, The First Affiliated Hospital of Yangtze University, Jingzhou, 434000, Hubei, China
- Jianan Wu
- Experimental and Practical Teaching Center, Hubei College of Chinese Medicine, Jingzhou, 434000, Hubei, China.
15
Katsumata A. Deep learning and artificial intelligence in dental diagnostic imaging. Jpn Dent Sci Rev 2023; 59:329-333. PMID: 37811196. PMCID: PMC10551806. DOI: 10.1016/j.jdsr.2023.09.004.
Abstract
The application of artificial intelligence (AI) based on deep learning in dental diagnostic imaging is increasing. Several popular deep learning tasks have been applied to dental diagnostic images. Classification tasks are used to classify images with and without positive abnormal findings or to evaluate the progress of a lesion based on imaging findings. Region (object) detection and segmentation tasks have been used for tooth identification in panoramic radiographs. This technique is useful for automatically creating a patient's dental chart. Deep learning methods can also be used for detecting and evaluating anatomical structures of interest from images. Furthermore, generative AI based on natural language processing can automatically create written reports from the findings of diagnostic imaging.
16
Adeoye J, Hui L, Su YX. Data-centric artificial intelligence in oncology: a systematic review assessing data quality in machine learning models for head and neck cancer. J Big Data 2023; 10:28. DOI: 10.1186/s40537-023-00703-w.
Abstract
Machine learning models have been increasingly considered to model head and neck cancer outcomes for improved screening, diagnosis, treatment, and prognostication of the disease. As the concept of data-centric artificial intelligence is still incipient in healthcare systems, little is known about the data quality of the models proposed for clinical utility. This is important as it supports the generalizability of the models and data standardization. Therefore, this study overviews the quality of structured and unstructured data used for machine learning model construction in head and neck cancer. Relevant studies reporting on the use of machine learning models based on structured and unstructured custom datasets between January 2016 and June 2022 were sourced from PubMed, EMBASE, Scopus, and Web of Science electronic databases. Prediction model Risk of Bias Assessment (PROBAST) tool was used to assess the quality of individual studies before comprehensive data quality parameters were assessed according to the type of dataset used for model construction. A total of 159 studies were included in the review; 106 utilized structured datasets while 53 utilized unstructured datasets. Data quality assessments were deliberately performed for 14.2% of structured datasets and 11.3% of unstructured datasets before model construction. Class imbalance and data fairness were the most common limitations in data quality for both types of datasets while outlier detection and lack of representative outcome classes were common in structured and unstructured datasets respectively. Furthermore, this review found that class imbalance reduced the discriminatory performance for models based on structured datasets while higher image resolution and good class overlap resulted in better model performance using unstructured datasets during internal validation. Overall, data quality was infrequently assessed before the construction of ML models in head and neck cancer irrespective of the use of structured or unstructured datasets. To improve model generalizability, the assessments discussed in this study should be introduced during model construction to achieve data-centric intelligent systems for head and neck cancer management.
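Class imbalance, flagged above as the most common data-quality limitation, is often mitigated by re-weighting the loss toward the minority class. A small illustrative sketch with scikit-learn (the 9:1 label vector is hypothetical):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical outcome labels for a head and neck cancer cohort (1 = event, 0 = no event)
y = np.array([0] * 900 + [1] * 100)  # 9:1 imbalance

classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
class_weight = {int(c): float(w) for c, w in zip(classes, weights)}
print(class_weight)  # {0: 0.555..., 1: 5.0} -- minority class up-weighted

# These weights can then be passed to most classifiers, e.g.
# LogisticRegression(class_weight=class_weight) or model.fit(..., class_weight=class_weight)
```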
17
Hung KF, Yeung AWK, Bornstein MM, Schwendicke F. Personalized dental medicine, artificial intelligence, and their relevance for dentomaxillofacial imaging. Dentomaxillofac Radiol 2023; 52:20220335. PMID: 36472627. PMCID: PMC9793453. DOI: 10.1259/dmfr.20220335.
Abstract
Personalized medicine refers to the tailoring of diagnostics and therapeutics to individuals based on one's biological, social, and behavioral characteristics. While personalized dental medicine is still far from being a reality, advanced artificial intelligence (AI) technologies with improved data analytic approaches are expected to integrate diverse data from the individual, setting, and system levels, which may facilitate a deeper understanding of the interaction of these multilevel data and therefore bring us closer to more personalized, predictive, preventive, and participatory dentistry, also known as P4 dentistry. In the field of dentomaxillofacial imaging, a wide range of AI applications, including several commercially available software options, have been proposed to assist dentists in the diagnosis and treatment planning of various dentomaxillofacial diseases, with performance similar or even superior to that of specialists. Notably, the impact of these dental AI applications on treatment decision, clinical and patient-reported outcomes, and cost-effectiveness has so far been assessed sparsely. Such information should be further investigated in future studies to provide patients, providers, and healthcare organizers a clearer picture of the true usefulness of AI in daily dental practice.
Affiliation(s)
- Kuo Feng Hung
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Division of Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Michael M. Bornstein
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, Basel, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
18
Hung KF, Ai QYH, Wong LM, Yeung AWK, Li DTS, Leung YY. Current Applications of Deep Learning and Radiomics on CT and CBCT for Maxillofacial Diseases. Diagnostics (Basel) 2022; 13:110. PMID: 36611402. PMCID: PMC9818323. DOI: 10.3390/diagnostics13010110.
Abstract
The increasing use of computed tomography (CT) and cone beam computed tomography (CBCT) in oral and maxillofacial imaging has driven the development of deep learning and radiomics applications to assist clinicians in early diagnosis, accurate prognosis prediction, and efficient treatment planning of maxillofacial diseases. This narrative review aimed to provide an up-to-date overview of the current applications of deep learning and radiomics on CT and CBCT for the diagnosis and management of maxillofacial diseases. Based on current evidence, a wide range of deep learning models on CT/CBCT images have been developed for automatic diagnosis, segmentation, and classification of jaw cysts and tumors, cervical lymph node metastasis, salivary gland diseases, temporomandibular joint (TMJ) disorders, maxillary sinus pathologies, mandibular fractures, and dentomaxillofacial deformities, while CT-/CBCT-derived radiomics applications mainly focused on occult lymph node metastasis in patients with oral cancer, malignant salivary gland tumors, and TMJ osteoarthritis. Most of these models showed high performance, and some of them even outperformed human experts. The models with performance on par with human experts have the potential to serve as clinically practicable tools to achieve the earliest possible diagnosis and treatment, leading to a more precise and personalized approach for the management of maxillofacial diseases. Challenges and issues, including the lack of generalizability and explainability of deep learning models and the uncertainty in the reproducibility and stability of radiomic features, should be overcome to gain the trust of patients, providers, and healthcare organizers for daily clinical use of these models.
Affiliation(s)
- Kuo Feng Hung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Qi Yong H. Ai
- Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Lun M. Wong
- Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Dion Tik Shun Li
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Yiu Yan Leung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China