1
Zhou Z, Xue J, Wu Y, Mao J, Li C, Yu X, Ma C, Zhao G. Automated detection of metastatic lymph nodes in head and neck malignant tumors on high-resolution MRI images using an improved convolutional neural network. Int J Med Inform 2025; 200:105904. [PMID: 40220628] [DOI: 10.1016/j.ijmedinf.2025.105904]
Abstract
PURPOSE To develop an AI-based diagnostic model for assessing cervical lymph nodes in head and neck malignant tumors using MRI, enabling non-invasive pre-surgical metastasis diagnosis. MATERIALS AND METHODS Fifty-three cases of head and neck malignant tumors were retrospectively analyzed, including 157 metastatic lymph nodes and 2,406 MRI images. The dataset was split into training, validation, and test sets. A convolutional neural network (CNN) model was optimized through ablation and comparative experiments, and its diagnostic performance was evaluated using metrics such as average precision (AP), average recall (AR), and mean average precision (mAP). A clinical evaluation compared the model's diagnostic efficiency to that of senior and junior physicians, assessing accuracy, sensitivity, specificity, predictive values, and area under the curve (AUC). RESULTS The model achieved detection and segmentation metrics of APdet 74.88%, APseg 74.12%, ARdet 63.11%, ARseg 62.28%, mAPdet 74.64%, and mAPseg 74.04%. Diagnostic accuracy was 83.6%, with sensitivity 81.3%, specificity 85.9%, and an AUC of 0.834. The model processed the test set in 400 s (under 1 s per image), outperforming senior (AUC 0.706) and junior physicians (AUC 0.650), who required 1368 and 2276 s, respectively (p < 0.001). CONCLUSION The LNMS Net model enhances diagnostic accuracy and efficiency for head and neck malignant tumors, supporting precise treatment planning and reducing overtreatment risks. It also offers a foundation for extending AI-based lymph node metastasis diagnosis to other clinical areas.
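The diagnostic metrics reported above (accuracy, sensitivity, specificity, predictive values) all derive from a 2x2 confusion matrix. A minimal sketch, using illustrative counts rather than the study's actual data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate on metastatic nodes
    specificity = tn / (tn + fp)   # true-negative rate on benign nodes
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return accuracy, sensitivity, specificity, ppv, npv

# Illustrative counts only, not taken from the study
acc, sens, spec, ppv, npv = binary_metrics(tp=81, fp=14, tn=86, fn=19)
```

AUC additionally requires the model's continuous confidence scores, since it sweeps the decision threshold rather than fixing one.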
Affiliation(s)
- Zhongwei Zhou
- Department of Oral and Maxillofacial Surgery, General Hospital of Ningxia Medical University, No. 804, Sheng Li South Road, Yinchuan, Ningxia 750004, P.R. China.
- Jiawen Xue
- Department of Stomatology, People's Hospital of Ningxia Hui Autonomous Region, Ningxia Medical University, No. 301, Zhengyuan North Street, Jinfeng District, Yinchuan, Ningxia 750001, P.R. China.
- Yue Wu
- Ningxia Medical University, No. 1160, Sheng Li South Road, Yinchuan, Ningxia 750004, P.R. China.
- Jingjing Mao
- Ningxia Medical University, No. 1160, Sheng Li South Road, Yinchuan, Ningxia 750004, P.R. China.
- Cheng Li
- Department of Stomatology, People's Hospital of Ningxia Hui Autonomous Region, Ningxia Medical University, No. 301, Zhengyuan North Street, Jinfeng District, Yinchuan, Ningxia 750001, P.R. China.
- Xianghai Yu
- Department of Stomatology, People's Hospital of Ningxia Hui Autonomous Region, Ningxia Medical University, No. 301, Zhengyuan North Street, Jinfeng District, Yinchuan, Ningxia 750001, P.R. China.
- Changping Ma
- Department of Oral and Maxillofacial Surgery, General Hospital of Ningxia Medical University, No. 804, Sheng Li South Road, Yinchuan, Ningxia 750004, P.R. China.
- Guizhi Zhao
- Department of Stomatology, People's Hospital of Ningxia Hui Autonomous Region, Ningxia Medical University, No. 301, Zhengyuan North Street, Jinfeng District, Yinchuan, Ningxia 750001, P.R. China.
2
Rusu-Both R, Socaci MC, Palagos AI, Buzoianu C, Avram C, Vălean H, Chira RI. A Deep Learning-Based Detection and Segmentation System for Multimodal Ultrasound Images in the Evaluation of Superficial Lymph Node Metastases. J Clin Med 2025; 14:1828. [PMID: 40142635] [PMCID: PMC11942978] [DOI: 10.3390/jcm14061828]
Abstract
Background/Objectives: Even with today's advancements, cancer still represents a major cause of mortality worldwide. One important aspect of cancer progression that has a major impact on diagnosis, prognosis, and treatment planning is accurate lymph node metastasis evaluation. However, regardless of the imaging method used, this process is challenging and time-consuming. This research aimed to develop and validate an automatic detection and segmentation system for superficial lymph node evaluation based on multimodal ultrasound images, such as traditional B-mode, Doppler, and elastography, using deep learning techniques. Methods: The suggested approach incorporated a Mask R-CNN architecture designed specifically for the detection and segmentation of lymph nodes. The pipeline first involved noise reduction preprocessing, after which morphological and textural feature segmentation and analysis were performed. Vascularity and stiffness parameters were further examined in Doppler and elastography images. Metrics including accuracy, mean average precision (mAP), and Dice coefficient were used to assess the system's performance during training and validation on a carefully selected dataset of annotated ultrasound images. Results: During testing, the Mask R-CNN model showed an accuracy of 92.56%, a COCO AP score of 60.7, and a validation score of 64. Further, to improve diagnostic capabilities, Doppler and elastography data were added. This allowed for improved performance across several types of ultrasound images and provided thorough insights into the morphology, vascularity, and stiffness of lymph nodes. Conclusions: This paper offers a novel use of deep learning for automated lymph node assessment in ultrasound imaging. By fusing sophisticated segmentation techniques with multimodal image processing, the system offers clinicians a dependable tool for efficiently evaluating lymph node metastases, with the potential to greatly enhance diagnostic accuracy and patient outcomes.
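The Dice coefficient used above to score segmentation overlap has a compact definition; a sketch on toy binary masks (NumPy assumed, not the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity of two binary masks: 2*intersection / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: a 2x2 square vs. an overlapping 2x3 rectangle
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
score = dice_coefficient(a, b)   # 2*4 / (4 + 6) = 0.8
```

The small `eps` keeps the ratio defined when both masks are empty.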
Affiliation(s)
- Roxana Rusu-Both
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Adrian-Ionuț Palagos
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- AIMed Soft Solution S.R.L., 400505 Cluj-Napoca, Romania
- Corina Buzoianu
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Camelia Avram
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Honoriu Vălean
- Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Romeo-Ioan Chira
- Department of Internal Medicine, "Iuliu Hatieganu" University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
- Gastroenterology Department, Emergency Clinical County Hospital Cluj-Napoca, 400347 Cluj-Napoca, Romania
3
Tarakçı EA, Çeliker M, Birinci M, Yemiş T, Gül O, Oğuz EF, Solak M, Kaba E, Çeliker FB, Özergin Coşkun Z, Alkan A, Erdivanlı ÖÇ. Novel Preprocessing-Based Sequence for Comparative MR Cervical Lymph Node Segmentation. J Clin Med 2025; 14:1802. [PMID: 40142614] [PMCID: PMC11943128] [DOI: 10.3390/jcm14061802]
Abstract
Background and Objective: This study aims to utilize deep learning methods for the automatic segmentation of cervical lymph nodes in magnetic resonance images (MRIs), enhancing the speed and accuracy of diagnosing pathological masses in the neck and improving patient treatment processes. Materials and Methods: This study included 1346 MRI slices from 64 patients undergoing cervical lymph node dissection, biopsy, and preoperative contrast-enhanced neck MRI. A preprocessing model was used to crop and highlight lymph nodes, along with a method for automatic re-cropping. Two datasets were created from the cropped images - one with augmentation and one without - divided into 90% training and 10% validation sets. After preprocessing, the images were automatically segmented using a DeepLabv3+ architecture with a ResNet-50 encoder. Results: According to the results of the validation set, the mean IoU values for the DWI, T2, T1, T1+C, and ADC sequences in the dataset without augmentation created for cervical lymph node segmentation were 0.89, 0.88, 0.81, 0.85, and 0.80, respectively. In the augmented dataset, the average IoU values for all sequences were 0.91, 0.89, 0.85, 0.88, and 0.84. The DWI sequence showed the highest performance in the datasets with and without augmentation. Conclusions: Our preprocessing-based deep learning architectures successfully segmented cervical lymph nodes with high accuracy. This study is the first to explore automatic segmentation of the cervical lymph nodes using comprehensive neck MRI sequences. The proposed model can streamline the detection process, reducing the need for radiology expertise. Additionally, it offers a promising alternative to manual segmentation in radiotherapy, potentially enhancing treatment effectiveness.
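The IoU values reported above measure overlap between predicted and ground-truth masks; a minimal NumPy sketch (illustrative, not the study's implementation):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union of two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy masks: 4 overlapping pixels, 6 in the union -> IoU = 4/6
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
score = iou(a, b)
```

Mean IoU per sequence, as reported, would average this score over all validation slices.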
Affiliation(s)
- Elif Ayten Tarakçı
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Metin Çeliker
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Mehmet Birinci
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Tuğba Yemiş
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Oğuz Gül
- Department of Otorhinolaryngology, Akçaabat Haçkalı Baba State Hospital, Trabzon 61310, Turkey
- Enes Faruk Oğuz
- Department of Biomedical Device Technology, Hassa Vocational School, Hatay Mustafa Kemal University, Hatay 31000, Turkey
- Merve Solak
- Department of Radiology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Esat Kaba
- Department of Radiology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Fatma Beyazal Çeliker
- Department of Radiology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Zerrin Özergin Coşkun
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
- Ahmet Alkan
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş 46000, Turkey
- Özlem Çelebi Erdivanlı
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey
4
Liao W, Luo X, Li L, Xu J, He Y, Huang H, Zhang S. Automatic cervical lymph nodes detection and segmentation in heterogeneous computed tomography images using deep transfer learning. Sci Rep 2025; 15:4250. [PMID: 39905029] [PMCID: PMC11794882] [DOI: 10.1038/s41598-024-84804-3]
Abstract
To develop a deep learning model using transfer learning for automatic detection and segmentation of neck lymph nodes (LNs) in computed tomography (CT) images, the study included 11,013 annotated LNs with a short-axis diameter ≥ 3 mm from 626 head and neck cancer patients across four hospitals. The nnUNet model was used as a baseline, pre-trained on a large-scale head and neck dataset, and then fine-tuned with 4,729 LNs from hospital A for detection and segmentation. Validation was conducted on an internal testing cohort (ITC A) and three external testing cohorts (ETCs B, C, and D), with 1684 and 4600 LNs, respectively. Detection was evaluated via sensitivity, positive predictive value (PPV), and false positive rate per case (FP/vol), while segmentation was assessed using the Dice similarity coefficient (DSC) and Hausdorff distance (HD95). For detection, the sensitivity, PPV, and FP/vol in ITC A were 54.6%, 69.0%, and 3.4, respectively. In ETCs, the sensitivity ranged from 45.7% at 3.9 FP/vol to 63.5% at 5.8 FP/vol. Segmentation achieved a mean DSC of 0.72 in ITC A and 0.72 to 0.74 in ETCs, as well as a mean HD95 of 3.78 mm in ITC A and 2.73 mm to 2.85 mm in ETCs. No significant sensitivity difference was found between contrast-enhanced and unenhanced CT images (p = 0.502) or repeated CT images (p = 0.815) during adaptive radiotherapy. The model's segmentation accuracy was comparable to that of experienced oncologists. The model shows promise in automatically detecting and segmenting neck LNs in CT images, potentially reducing oncologists' segmentation workload.
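Sensitivity, PPV, and false positives per case (FP/vol) summarize detection performance once each predicted node has been matched against the ground truth; a sketch with hypothetical per-volume counts (the matching rule itself, e.g. an overlap threshold, is not specified here):

```python
def detection_metrics(cases):
    """Aggregate per-volume counts of true-positive (tp), false-positive (fp),
    and missed (fn) lymph-node detections into sensitivity, PPV, and FP/vol."""
    tp = sum(c["tp"] for c in cases)
    fp = sum(c["fp"] for c in cases)
    fn = sum(c["fn"] for c in cases)
    sensitivity = tp / (tp + fn)    # fraction of real nodes found
    ppv = tp / (tp + fp)            # fraction of detections that are real
    fp_per_vol = fp / len(cases)    # false alarms per CT volume
    return sensitivity, ppv, fp_per_vol

# Hypothetical counts for two CT volumes, not the study's data
cases = [{"tp": 6, "fp": 3, "fn": 4}, {"tp": 5, "fp": 4, "fn": 5}]
sens, ppv, fp_vol = detection_metrics(cases)
```

Reporting sensitivity "at N FP/vol", as above, fixes the operating point on this trade-off curve.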
Affiliation(s)
- Wenjun Liao
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
- Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lu Li
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
- Jinfeng Xu
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Yuan He
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 23000, Anhui, China
- Hui Huang
- Cancer Center, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 610072, China
- Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China.
5
Alapati R, Renslo B, Wagoner SF, Karadaghy O, Serpedin A, Kim YE, Feucht M, Wang N, Ramesh U, Bon Nieves A, Lawrence A, Virgen C, Sawaf T, Rameau A, Bur AM. Assessing the Reporting Quality of Machine Learning Algorithms in Head and Neck Oncology. Laryngoscope 2025; 135:687-694. [PMID: 39258420] [DOI: 10.1002/lary.31756]
Abstract
OBJECTIVE This study aimed to assess the reporting quality of machine learning (ML) algorithms in the head and neck oncology literature using the TRIPOD-AI criteria. DATA SOURCES A comprehensive search was conducted using PubMed, Scopus, Embase, and the Cochrane Database of Systematic Reviews, incorporating search terms related to "artificial intelligence," "machine learning," "deep learning," "neural network," and various head and neck neoplasms. REVIEW METHODS Two independent reviewers analyzed each published study for adherence to the 65-point TRIPOD-AI criteria. Items were classified as "Yes," "No," or "NA" for each publication. The proportion of studies satisfying each TRIPOD-AI criterion was calculated. Additionally, the evidence level for each study was evaluated independently by two reviewers using the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence. Discrepancies were reconciled through discussion until consensus was reached. RESULTS The study highlights the need for improvements in ML algorithm reporting in head and neck oncology, including more comprehensive descriptions of datasets, standardization of model performance reporting, and increased sharing of ML models, data, and code with the research community. Adoption of TRIPOD-AI is necessary for achieving standardized ML research reporting in head and neck oncology. CONCLUSION Current reporting of ML algorithms hinders clinical application, reproducibility, and understanding of the data used for model training. To overcome these limitations and improve patient and clinician trust, ML developers should provide open access to models, code, and source data, fostering iterative progress through community critique, thus enhancing model accuracy and mitigating biases. LEVEL OF EVIDENCE NA.
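The per-criterion adherence computed in such reviews reduces to the proportion of applicable studies rated "Yes"; a sketch under the rating scheme described above ("Yes"/"No"/"NA" per TRIPOD-AI item):

```python
from collections import Counter

def criterion_adherence(ratings):
    """Proportion of studies satisfying one TRIPOD-AI item,
    counting only applicable ('Yes'/'No') ratings."""
    counts = Counter(ratings)
    applicable = counts["Yes"] + counts["No"]
    return counts["Yes"] / applicable if applicable else float("nan")

# Hypothetical ratings for one criterion across four studies
share = criterion_adherence(["Yes", "No", "NA", "Yes"])   # 2 of 3 applicable
```

Whether "NA" ratings should be excluded from the denominator is a methodological choice; the exclusion here is an assumption, not the review's stated rule.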
Affiliation(s)
- Rahul Alapati
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Bryan Renslo
- Department of Otolaryngology-Head & Neck Surgery, Thomas Jefferson University, Philadelphia, Pennsylvania, U.S.A
- Sarah F Wagoner
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Omar Karadaghy
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Aisha Serpedin
- Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A
- Yeo Eun Kim
- Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A
- Maria Feucht
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Naomi Wang
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Uma Ramesh
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Antonio Bon Nieves
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Amelia Lawrence
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Celina Virgen
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Tuleen Sawaf
- Department of Otolaryngology-Head & Neck Surgery, University of Maryland, Baltimore, Maryland, U.S.A
- Anaïs Rameau
- Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A
- Andrés M Bur
- Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
6
Albuquerque C, Henriques R, Castelli M. Deep learning-based object detection algorithms in medical imaging: Systematic review. Heliyon 2025; 11:e41137. [PMID: 39758372] [PMCID: PMC11699422] [DOI: 10.1016/j.heliyon.2024.e41137]
Abstract
Over the past decade, Deep Learning (DL) techniques have demonstrated remarkable advancements across various domains, driving their widespread adoption. In medical image analysis in particular, DL has received considerable attention for tasks like image segmentation, object detection, and classification. This paper provides an overview of DL-based object recognition in medical images, exploring recent methods and emphasizing different imaging techniques and anatomical applications. Utilizing a meticulous quantitative and qualitative analysis following PRISMA guidelines, we examined publications based on citation rates to explore the utilization of DL-based object detectors across imaging modalities and anatomical domains. Our findings reveal a consistent rise in the utilization of DL-based object detection models, indicating unexploited potential in medical image analysis. Predominantly within the Medicine and Computer Science domains, research in this area is most active in the US, China, and Japan. Notably, DL-based object detection methods have attracted significant interest across diverse medical imaging modalities and anatomical domains. These methods have been applied to a range of techniques including CR scans, pathology images, and endoscopic imaging, showcasing their adaptability. Moreover, diverse anatomical applications, particularly in digital pathology and microscopy, have been explored. The analysis underscores the presence of varied datasets, often with significant discrepancies in size, with a notable percentage labeled as private or internal, and with prospective studies in this field remaining scarce. Our review of existing trends in DL-based object detection in medical images offers insights for future research directions. The continuous evolution of DL algorithms highlighted in the literature underscores the dynamic nature of this field, emphasizing the need for ongoing research and optimization tailored to specific applications.
7
Meer M, Khan MA, Jabeen K, Alzahrani AI, Alalwan N, Shabaz M, Khan F. Deep convolutional neural networks information fusion and improved whale optimization algorithm based smart oral squamous cell carcinoma classification framework using histopathological images. Expert Systems 2025; 42. [DOI: 10.1111/exsy.13536]
Abstract
The most prevalent type of cancer worldwide is mouth cancer. Around 2.5% of deaths reported in 2023 were attributed to oral cancer. Early diagnosis of oral squamous cell carcinoma (OSCC), a prevalent oral cavity cancer, is essential for treatment and patient recovery. A few computerized techniques exist but are focused on traditional machine learning methods, such as handcrafted features. In this work, we proposed a fully automated architecture based on self-attention convolutional neural network and residual network information fusion and optimization. In the proposed framework, the augmentation process is performed on the training and testing samples, and then two developed deep models are trained. A self-attention MobileNet-V2 model is developed and trained using an augmented dataset. In parallel, a self-attention DarkNet-19 model is trained on the same dataset, with its hyperparameters initialized using the whale optimization algorithm (WOA). Features are extracted from the deeper layers of both models and fused using a canonical correlation analysis (CCA) approach. The CCA approach is further optimized using an improved WOA version, named Quantum WOA, that removes irrelevant features and selects only important ones. The final selected features are classified using neural networks such as wide neural networks. The experimental process is performed on the augmented dataset, which includes two sets: 100× and 400×. Using both sets, the proposed method obtained accuracies of 98.7% and 96.3%, respectively. Comparison with a few state-of-the-art (SOTA) techniques shows a significant improvement in accuracy and precision rate.
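Feature fusion via canonical correlation analysis (CCA), as used above, projects the two networks' feature matrices onto maximally correlated directions and concatenates the projections. A NumPy-only sketch of that step (a standard CCA formulation, not the paper's code; feature dimensions are illustrative):

```python
import numpy as np

def _inv_sqrt(S, eps=1e-8):
    """Inverse square root of a symmetric positive semi-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

def cca_fuse(X, Y, k):
    """Project X and Y onto their top-k canonical directions and concatenate."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    Wx, Wy = _inv_sqrt(Sxx), _inv_sqrt(Syy)
    U, _, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    Xc = X @ (Wx @ U[:, :k])      # canonical variates of X
    Yc = Y @ (Wy @ Vt.T[:, :k])   # canonical variates of Y
    return np.concatenate([Xc, Yc], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 16))   # stand-in for MobileNet-V2 deep features
Y = rng.normal(size=(50, 16))   # stand-in for DarkNet-19 deep features
fused = cca_fuse(X, Y, k=8)     # one fused descriptor per sample
```

The Quantum WOA selection step described in the abstract would then prune columns of the fused matrix before classification.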
Affiliation(s)
- Momina Meer
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Pakistan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
- Kiran Jabeen
- Department of Computer Science, HITEC University, Pakistan
- Nasser Alalwan
- Computer Science Department, Community College, King Saud University, Riyadh, Saudi Arabia
- Faheem Khan
- Department of Computer Engineering, Gachon University, Seongnam-si, South Korea
8
Chen W, Dhawan M, Liu J, Ing D, Mehta K, Tran D, Lawrence D, Ganhewa M, Cirillo N. Mapping the Use of Artificial Intelligence-Based Image Analysis for Clinical Decision-Making in Dentistry: A Scoping Review. Clin Exp Dent Res 2024; 10:e70035. [PMID: 39600121] [PMCID: PMC11599430] [DOI: 10.1002/cre2.70035]
Abstract
OBJECTIVES Artificial intelligence (AI) is an emerging field in dentistry. AI is gradually being integrated into dentistry to improve clinical dental practice. The aims of this scoping review were to investigate the application of AI in image analysis for decision-making in clinical dentistry and identify trends and research gaps in the current literature. MATERIAL AND METHODS This review followed the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). An electronic literature search was performed through PubMed and Scopus. After removing duplicates, a preliminary screening based on titles and abstracts was performed. A full-text review and analysis were performed according to predefined inclusion criteria, and data were extracted from eligible articles. RESULTS Of the 1334 articles returned, 276 met the inclusion criteria (comprising 601,122 images in total) and were included in the qualitative synthesis. Most of the included studies utilized convolutional neural networks (CNNs) on dental radiographs such as orthopantomograms (OPGs) and intraoral radiographs (bitewings and periapicals). AI was applied across all fields of dentistry - particularly oral medicine, oral surgery, and orthodontics - for direct clinical inference and segmentation. AI-based image analysis was used in several components of the clinical decision-making process, including diagnosis, detection or classification, prediction, and management. CONCLUSIONS A variety of machine learning and deep learning techniques are being used for dental image analysis to assist clinicians in making accurate diagnoses and choosing appropriate interventions in a timely manner.
Affiliation(s)
- Wei Chen
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Monisha Dhawan
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Jonathan Liu
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Damie Ing
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Kruti Mehta
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Daniel Tran
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Max Ganhewa
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
- Nicola Cirillo
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
9
Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:641-655. [PMID: 38637235] [DOI: 10.1016/j.oooo.2023.12.790]
Abstract
BACKGROUND Artificial intelligence (AI) technology has been increasingly developed in oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms in different dentomaxillofacial imaging modalities. STUDY DESIGN A systematic search of the PubMed and Scopus databases was performed. The search strategy was set as a combination of the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any mismatch was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, a total of 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. CONCLUSION Despite the AI models' limitations, they showed promising results. Further studies are needed to explore specific applications and real-world scenarios before confidently integrating these models into dental practice.
Affiliation(s)
- Serlie Hartoonian
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Matine Hosseini
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Iman Yousefi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mina Mahdian
- Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook University, Stony Brook, NY, USA
- Mitra Ghazizadeh Ahsaie
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
10
Yuan Y, Pan B, Mo H, Wu X, Long Z, Yang Z, Zhu J, Ming J, Qiu L, Sun Y, Yin S, Zhang F. Deep learning-based computer-aided diagnosis system for the automatic detection and classification of lateral cervical lymph nodes on original ultrasound images of papillary thyroid carcinoma: a prospective diagnostic study. Endocrine 2024; 85:1289-1299. [PMID: 38570388] [DOI: 10.1007/s12020-024-03808-1]
Abstract
PURPOSE This study aims to develop a deep learning-based computer-aided diagnosis (CAD) system for the automatic detection and classification of lateral cervical lymph nodes (LNs) on original ultrasound images of papillary thyroid carcinoma (PTC) patients. METHODS A retrospective data set of 1801 cervical LN ultrasound images from 1675 patients with PTC and a prospective test set including 185 images from 160 patients were collected. Four different deep learning models were trained and validated on the retrospective data set. The best model was selected for CAD system development and compared with three sonographers in the retrospective and prospective test sets. RESULTS The Deformable Detection Transformer (DETR) model showed the highest diagnostic efficacy, with a mean average precision score of 86.3% in the retrospective test set, and was therefore used in constructing the CAD system. The detection performance of the CAD system was superior to that of the junior and intermediate sonographers, with accuracies of 86.3% and 92.4% in the retrospective and prospective test sets, respectively. The classification performance of the CAD system was better than that of all sonographers, with areas under the curve (AUCs) of 94.4% and 95.2% in the retrospective and prospective test sets, respectively. CONCLUSIONS This study developed a Deformable DETR model-based CAD system for automatically detecting and classifying lateral cervical LNs on original ultrasound images, which showed excellent diagnostic efficacy and clinical utility. It can be an important tool for assisting sonographers in the diagnosis process.
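The mean average precision used above to select the detection model averages, over classes, the area under each precision-recall curve. A minimal sketch of per-class average precision from ranked detections (illustrative inputs, not the study's data):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """All-point AP: rank detections by confidence and accumulate precision
    at every recall step. is_tp flags detections matched to a ground truth."""
    order = np.argsort(scores)[::-1]             # highest confidence first
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    return float(np.sum(precision * tp) / n_gt)  # sum precision at each TP

# Four detections against three ground-truth nodes (illustrative)
ap = average_precision(scores=[0.9, 0.8, 0.7, 0.6], is_tp=[1, 0, 1, 1], n_gt=3)
```

mAP is then the mean of this AP over all classes (or, in COCO style, also over IoU thresholds).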
Affiliation(s)
- Yuquan Yuan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Bin Pan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Hongbiao Mo
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Xing Wu
- College of Computer Science, Chongqing University, Chongqing, China
- Zhaoxin Long
- College of Computer Science, Chongqing University, Chongqing, China
- Zeyu Yang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Junping Zhu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Jing Ming
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Lin Qiu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Yiceng Sun
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Supeng Yin
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China.
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China.
- Fan Zhang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China.
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China.
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China.
11
Li C, Chen X, Chen C, Gong Z, Pataer P, Liu X, Lv X. Application of deep learning radiomics in oral squamous cell carcinoma-Extracting more information from medical images using advanced feature analysis. J Stomatol Oral Maxillofac Surg 2024; 125:101840. [PMID: 38548062 DOI: 10.1016/j.jormas.2024.101840] [Received: 02/15/2024] [Revised: 03/07/2024] [Accepted: 03/20/2024] [Indexed: 04/02/2024]
Abstract
OBJECTIVE To conduct a systematic review with meta-analyses assessing the recent scientific literature on the application of deep learning radiomics in oral squamous cell carcinoma (OSCC). MATERIALS AND METHODS Electronic and manual literature retrieval was performed using the PubMed, Web of Science, EMbase, Ovid-MEDLINE, and IEEE databases from 2012 to 2023. The ROBINS-I tool was used for quality evaluation, a random-effects model was applied, and results were reported according to the PRISMA statement. RESULTS A total of 26 studies involving 64,731 medical images were included in the quantitative synthesis. The meta-analysis showed pooled sensitivity and specificity of 0.88 (95% CI: 0.87-0.88) and 0.80 (95% CI: 0.80-0.81), respectively. Deeks' asymmetry test revealed slight publication bias (P = 0.03). CONCLUSIONS The advances in the application of radiomics combined with deep learning algorithms in OSCC were reviewed, including diagnosis and differential diagnosis of OSCC, efficacy assessment, and prognosis prediction. The current limitations of deep learning radiomics and its future development directions for medical imaging diagnosis were also summarized and analyzed.
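For context, a pooled sensitivity/specificity pair like the one above maps directly to a diagnostic odds ratio. A minimal sketch (our illustration, not the review's own computation):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = LR+ / LR- = (sens / (1 - spec)) / ((1 - sens) / spec)."""
    positive_lr = sensitivity / (1.0 - specificity)
    negative_lr = (1.0 - sensitivity) / specificity
    return positive_lr / negative_lr

# pooled estimates reported above: sensitivity 0.88, specificity 0.80
dor = diagnostic_odds_ratio(0.88, 0.80)  # ≈ 29.3
```

A DOR of ~29 means the odds of a positive result are about 29 times higher in diseased than in non-diseased cases under the pooled estimates.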
Affiliation(s)
- Chenxi Li
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School / Hospital of Stomatology. Urumqi 830054, PR China; Stomatological Research Institute of Xinjiang Uygur Autonomous Region. Urumqi 830054, PR China; Hubei Province Key Laboratory of Oral and Maxillofacial Development and Regeneration, School of Stomatology, Tongji Medical College, Union Hospital, Huazhong University of Science and Technology, Wuhan 430022, PR China.
- Xinya Chen
- College of Information Science and Engineering, Xinjiang University. Urumqi 830008, PR China
- Cheng Chen
- College of Software, Xinjiang University. Urumqi 830046, PR China
- Zhongcheng Gong
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School / Hospital of Stomatology. Urumqi 830054, PR China; Stomatological Research Institute of Xinjiang Uygur Autonomous Region. Urumqi 830054, PR China.
- Parekejiang Pataer
- Oncological Department of Oral and Maxillofacial Surgery, the First Affiliated Hospital of Xinjiang Medical University, School / Hospital of Stomatology. Urumqi 830054, PR China
- Xu Liu
- Department of Maxillofacial Surgery, Hospital of Stomatology, Key Laboratory of Dental-Maxillofacial Reconstruction and Biological Intelligence Manufacturing of Gansu Province, Faculty of Dentistry, Lanzhou University. Lanzhou 730013, PR China
- Xiaoyi Lv
- College of Information Science and Engineering, Xinjiang University. Urumqi 830008, PR China; College of Software, Xinjiang University. Urumqi 830046, PR China
12
Wang Y, Rahman A, Duggar WN, Thomas TV, Roberts PR, Vijayakumar S, Jiao Z, Bian L, Wang H. A gradient mapping guided explainable deep neural network for extracapsular extension identification in 3D head and neck cancer computed tomography images. Med Phys 2024; 51:2007-2019. [PMID: 37643447 DOI: 10.1002/mp.16680] [Received: 11/09/2022] [Revised: 07/13/2023] [Accepted: 08/03/2023] [Indexed: 08/31/2023]
Abstract
BACKGROUND Diagnosis and treatment management for head and neck squamous cell carcinoma (HNSCC) is guided by routine diagnostic head and neck computed tomography (CT) scans to identify tumor and lymph node features. Extracapsular extension (ECE) is a strong predictor of survival outcomes in patients with HNSCC. It is essential to detect the occurrence of ECE, as it changes staging and treatment planning for patients. Current clinical ECE detection relies on visual identification and pathologic confirmation conducted by clinicians. However, manual annotation of the lymph node region is a required data preprocessing step in most current machine learning-based ECE diagnosis studies. PURPOSE In this paper, we propose a Gradient Mapping Guided Explainable Network (GMGENet) framework to perform ECE identification automatically without requiring annotated lymph node region information. METHODS The gradient-weighted class activation mapping (Grad-CAM) technique is applied to guide the deep learning algorithm to focus on the regions that are highly related to ECE. The proposed framework includes an extractor and a classifier. In a joint training process, informative volumes of interest (VOIs) are extracted by the extractor without labeled lymph node region information, and the classifier learns the pattern to classify the extracted VOIs into ECE positive and negative. RESULTS The proposed models were trained and tested using cross-validation. GMGENet achieved a test accuracy and area under the curve (AUC) of 92.2% and 89.3%, respectively. GMGENetV2 achieved 90.3% accuracy and 91.7% AUC in the test. The results were compared with different existing models and further confirmed and explained by generating ECE probability heatmaps via the Grad-CAM technique. The presence or absence of ECE was analyzed and correlated with ground-truth histopathological findings.
CONCLUSIONS The proposed deep network can learn meaningful patterns to identify ECE without lymph node contours being provided. The introduced ECE heatmaps will contribute to the clinical implementation of the proposed model and reveal unknown features to radiologists. The outcome of this study is expected to promote the adoption of explainable artificial intelligence-assisted ECE detection.
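The Grad-CAM weighting that this framework builds on reduces, at its core, to pooling the gradients of the class score over each feature map and using them as weights. A minimal NumPy sketch (array shapes are hypothetical; real implementations hook into a trained network's backward pass):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM: weight each feature map by its pooled gradient.

    activations: (K, H, W) conv feature maps for one input slice/volume
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    Returns an (H, W) heatmap, ReLU-ed and scaled to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over the K maps
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for visualization
    return cam
```

Thresholding such a heatmap is one way to localize a volume of interest without manual lymph node contours, which is the spirit of the extractor described above.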
Affiliation(s)
- Yibin Wang
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
- Abdur Rahman
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
- William Neil Duggar
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
- Toms V Thomas
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
- Paul Russell Roberts
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
- Srinivasan Vijayakumar
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
- Zhicheng Jiao
- Warren Alpert Medical School, Brown University, Providence, Rhode Island, USA
- Linkan Bian
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
- Haifeng Wang
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
13
Warin K, Suebnukarn S. Deep learning in oral cancer- a systematic review. BMC Oral Health 2024; 24:212. [PMID: 38341571 PMCID: PMC10859022 DOI: 10.1186/s12903-024-03993-5] [Received: 10/27/2023] [Accepted: 02/06/2024] [Indexed: 02/12/2024]
Abstract
BACKGROUND Oral cancer is a life-threatening malignancy that affects the survival rate and quality of life of patients. The aim of this systematic review was to review deep learning (DL) studies in the diagnosis and prognostic prediction of oral cancer. METHODS This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies from January 2000 to June 2023. RESULTS Fifty-four studies qualified for inclusion, covering diagnosis (n = 51) and prognostic prediction (n = 3). Thirteen studies showed a low risk of bias in all domains, and 40 studies showed low risk of concerns regarding applicability. Reported DL performance included an accuracy of 85.0-100%, F1-score of 79.31-89.0%, Dice coefficient index of 76.0-96.3%, and concordance index of 0.78-0.95 for classification, object detection, segmentation, and prognostic prediction, respectively. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for classification studies. CONCLUSIONS The number of DL studies in oral cancer is increasing, with diverse architectures. The reported accuracy showed promising DL performance in studies of oral cancer, with potential utility in improving informed clinical decision-making.
Affiliation(s)
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand.
14
Eida S, Fukuda M, Katayama I, Takagi Y, Sasaki M, Mori H, Kawakami M, Nishino T, Ariji Y, Sumi M. Metastatic Lymph Node Detection on Ultrasound Images Using YOLOv7 in Patients with Head and Neck Squamous Cell Carcinoma. Cancers (Basel) 2024; 16:274. [PMID: 38254765 PMCID: PMC10813890 DOI: 10.3390/cancers16020274] [Received: 11/23/2023] [Revised: 12/28/2023] [Accepted: 01/04/2024] [Indexed: 01/24/2024]
Abstract
Ultrasonography is the preferred modality for detailed evaluation of enlarged lymph nodes (LNs) identified on computed tomography and/or magnetic resonance imaging, owing to its high spatial resolution. However, the diagnostic performance of ultrasonography depends on the examiner's expertise. To support the ultrasonographic diagnosis, we developed YOLOv7-based deep learning models for metastatic LN detection on ultrasonography and compared their detection performance with that of highly experienced radiologists and less experienced residents. We enrolled 462 B- and D-mode ultrasound images of 261 metastatic and 279 non-metastatic histopathologically confirmed LNs from 126 patients with head and neck squamous cell carcinoma. The YOLOv7-based B- and D-mode models were optimized using B- and D-mode training and validation images and their detection performance for metastatic LNs was evaluated using B- and D-mode testing images, respectively. The D-mode model's performance was comparable to that of radiologists and superior to that of residents' reading of D-mode images, whereas the B-mode model's performance was higher than that of residents but lower than that of radiologists on B-mode images. Thus, YOLOv7-based B- and D-mode models can assist less experienced residents in ultrasonographic diagnoses. The D-mode model could raise the diagnostic performance of residents to the same level as experienced radiologists.
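Detection models such as YOLOv7 typically count a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold, commonly 0.5. A minimal sketch with hypothetical corner-format boxes (our illustration, not the study's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# hypothetical predicted vs. ground-truth node boxes
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))  # 50 / 150
matched = overlap >= 0.5  # would NOT count as a true positive at IoU 0.5
```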
Affiliation(s)
- Sato Eida
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Motoki Fukuda
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan; (M.F.); (Y.A.)
- Ikuo Katayama
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Yukinori Takagi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Miho Sasaki
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Hiroki Mori
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Maki Kawakami
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Tatsuyoshi Nishino
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
- Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan; (M.F.); (Y.A.)
- Misa Sumi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan; (S.E.); (I.K.); (Y.T.); (M.S.); (H.M.); (M.K.); (T.N.)
15
Esce AR, Baca AL, Redemann JP, Rebbe RW, Schultz F, Agarwal S, Hanson JA, Olson GT, Martin DR, Boyd NH. Predicting nodal metastases in squamous cell carcinoma of the oral tongue using artificial intelligence. Am J Otolaryngol 2024; 45:104102. [PMID: 37948827 DOI: 10.1016/j.amjoto.2023.104102] [Received: 08/02/2023] [Accepted: 10/29/2023] [Indexed: 11/12/2023]
Abstract
OBJECTIVE The presence of occult nodal metastases in patients with squamous cell carcinoma (SCC) of the oral tongue has implications for treatment. Upwards of 30% of patients will have occult nodal metastases, yet a significant number of patients undergo unnecessary neck dissection to confirm nodal status. This study sought to predict the presence of nodal metastases in patients with SCC of the oral tongue using a convolutional neural network (CNN) that analyzed visual histopathology from the primary tumor alone. METHODS Cases of SCC of the oral tongue were identified from the records of a single institution. Only patients with complete pathology data were included in the study. The primary tumors were randomized into 2 groups for training and testing, which were performed at 2 different levels of supervision. Board-certified pathologists annotated each slide. The HALO-AI convolutional neural network and image software was used to perform training and testing. Receiver operating characteristic (ROC) curves and the Youden J statistic were used for primary analysis. RESULTS Eighty-nine cases of SCC of the oral tongue were included in the study. The best performing algorithm had a high level of supervision, a sensitivity of 65%, and a specificity of 86% when identifying nodal metastases. The area under the curve (AUC) of the ROC curve for this algorithm was 0.729. CONCLUSION A CNN can produce an algorithm that is able to predict nodal metastases in patients with squamous cell carcinoma of the oral tongue by analyzing the visual histopathology of the primary tumor alone.
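The Youden J statistic used for primary analysis selects the ROC operating point that maximizes sensitivity + specificity - 1. A minimal sketch with hypothetical scores and labels (our illustration, not the study's analysis code):

```python
import numpy as np

def best_threshold_by_youden(scores, labels):
    """Return (threshold, J) maximizing J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):          # candidate cutoffs at observed scores
        pred = scores >= t
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# hypothetical model scores and node labels (1 = metastatic)
t, j = best_threshold_by_youden([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```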
Affiliation(s)
- Antoinette R Esce
- Department of Surgery, Division of Otolaryngology Head and Neck Surgery, 1 University of New Mexico, MSC10 5610, Albuquerque, NM, 87131, USA.
- Andrewe L Baca
- The University of New Mexico School of Medicine, 1 University of New Mexico, MSC08 4720, Albuquerque, NM 87131, USA
- Jordan P Redemann
- Department of Pathology, 1 University of New Mexico, MSC08 4640, Albuquerque, NM, 87131, USA.
- Ryan W Rebbe
- Department of Pathology, 1 University of New Mexico, MSC08 4640, Albuquerque, NM, 87131, USA.
- Fred Schultz
- Department of Pathology, 1 University of New Mexico, MSC08 4640, Albuquerque, NM, 87131, USA.
- Shweta Agarwal
- Department of Pathology, 1 University of New Mexico, MSC08 4640, Albuquerque, NM, 87131, USA.
- Joshua A Hanson
- Department of Pathology, 1 University of New Mexico, MSC08 4640, Albuquerque, NM, 87131, USA.
- Garth T Olson
- Department of Surgery, Division of Otolaryngology Head and Neck Surgery, 1 University of New Mexico, MSC10 5610, Albuquerque, NM, 87131, USA.
- David R Martin
- Department of Surgery, Division of Otolaryngology Head and Neck Surgery, 1 University of New Mexico, MSC10 5610, Albuquerque, NM, 87131, USA.
- Nathan H Boyd
- Department of Surgery, Division of Otolaryngology Head and Neck Surgery, 1 University of New Mexico, MSC10 5610, Albuquerque, NM, 87131, USA.
16
Rokhshad R, Salehi SN, Yavari A, Shobeiri P, Esmaeili M, Manila N, Motamedian SR, Mohammad-Rahimi H. Deep learning for diagnosis of head and neck cancers through radiographic data: a systematic review and meta-analysis. Oral Radiol 2024; 40:1-20. [PMID: 37855976 DOI: 10.1007/s11282-023-00715-5] [Received: 05/09/2023] [Accepted: 09/23/2023] [Indexed: 10/20/2023]
Abstract
PURPOSE This study aims to review deep learning applications for detecting head and neck cancer (HNC) using magnetic resonance imaging (MRI) and radiographic data. METHODS Through January 2023, searches of PubMed, Scopus, Embase, Google Scholar, IEEE, and arXiv were carried out. The inclusion criteria were studies implementing head and neck medical images (computed tomography (CT), positron emission tomography (PET), MRI, planar scans, and panoramic X-ray) of human subjects with segmentation, object detection, or classification deep learning models for head and neck cancers. The risk of bias was rated with the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. For the meta-analysis, the diagnostic odds ratio (DOR) was calculated. Deeks' funnel plot was used to assess publication bias. The MIDAS and Metandi packages were used to analyze diagnostic test accuracy in STATA. RESULTS From 1967 studies, 32 were found eligible after the search and screening procedures. According to the QUADAS-2 tool, 7 included studies had a low risk of bias for all domains. Across the included studies, accuracy varied from 82.6 to 100%, specificity ranged from 66.6 to 90.1%, and sensitivity from 74 to 99.68%. Fourteen studies that provided sufficient data were included in the meta-analysis. The pooled sensitivity was 90% (95% CI 0.82-0.94), and the pooled specificity was 92% (95% CI 0.87-0.96). The DOR was 103 (27-251). Publication bias was not detected based on a p-value of 0.75 in the meta-analysis. CONCLUSION Deep learning models can enhance head and neck cancer screening processes with high specificity and sensitivity.
Affiliation(s)
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Seyyede Niloufar Salehi
- Executive Secretary of Research Committee, Board Director of Scientific Society, Dental Faculty, Azad University, Tehran, Iran
- Amirmohammad Yavari
- Student Research Committee, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Parnian Shobeiri
- School of Medicine, Tehran University of Medical Science, Tehran, Iran
- Mahdieh Esmaeili
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Nisha Manila
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Department of Diagnostic Sciences, Louisiana State University Health Science Center School of Dentistry, Louisiana, USA
- Saeed Reza Motamedian
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany.
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjou Blvd, Tehran, Iran.
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
17
Chen Z, Yu Y, Liu S, Du W, Hu L, Wang C, Li J, Liu J, Zhang W, Peng X. A deep learning and radiomics fusion model based on contrast-enhanced computer tomography improves preoperative identification of cervical lymph node metastasis of oral squamous cell carcinoma. Clin Oral Investig 2023; 28:39. [PMID: 38151672 DOI: 10.1007/s00784-023-05423-2] [Received: 08/09/2023] [Accepted: 11/21/2023] [Indexed: 12/29/2023]
Abstract
OBJECTIVES In this study, we constructed and validated models based on deep learning and radiomics to facilitate preoperative diagnosis of cervical lymph node metastasis (LNM) in oral squamous cell carcinoma (OSCC) using contrast-enhanced computed tomography (CECT). MATERIALS AND METHODS CECT scans of 100 patients with OSCC (217 metastatic and 1973 non-metastatic cervical lymph nodes; development set, 76 patients; internally independent test set, 24 patients) who received treatment at the Peking University School and Hospital of Stomatology between 2012 and 2016 were retrospectively collected. Clinical diagnoses and pathological findings were used to establish the gold standard for metastatic cervical LNs. A reader study with two clinicians was also performed to evaluate lymph node status in the test set. The performance of the proposed models and the clinicians was evaluated and compared using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). RESULTS A fusion model combining deep learning with radiomics showed the best performance (ACC, 89.2%; SEN, 92.0%; SPE, 88.9%; and AUC, 0.950 [95% confidence interval: 0.908-0.993, P < 0.001]) in the test set. In comparison with the clinicians, the fusion model showed higher sensitivity (92.0 vs. 72.0% and 60.0%) but lower specificity (88.9 vs. 97.5% and 98.8%). CONCLUSION A fusion model combining radiomics and deep learning approaches outperformed single-technique models and showed great potential to accurately predict cervical LNM in patients with OSCC. CLINICAL RELEVANCE The fusion model can complement clinicians' preoperative identification of LNM in OSCC.
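Fusion models of this kind are commonly built by concatenating learned deep features with handcrafted radiomics features before a downstream classifier. A minimal sketch with hypothetical feature dimensions (not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
deep_features = rng.normal(size=(8, 128))      # e.g. pooled CNN embeddings per node
radiomics_features = rng.normal(size=(8, 30))  # e.g. shape/texture descriptors per node

def zscore(x):
    """Standardize each feature column so neither block dominates by scale."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# late fusion: concatenate the standardized blocks, then feed any classifier
fused = np.concatenate([zscore(deep_features), zscore(radiomics_features)], axis=1)
```

Standardizing before concatenation is one common design choice; learned fusion layers are another.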
Affiliation(s)
- Zhen Chen
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Yao Yu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Shuo Liu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Wen Du
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Leihao Hu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Congwei Wang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jiaqi Li
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jianbo Liu
- Huafang Hanying Medical Technology Co., Ltd, No.19, West Bridge Road, Miyun District, Beijing, 101520, People's Republic of China
- Wenbo Zhang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Xin Peng
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China.
18
Katsumata A. Deep learning and artificial intelligence in dental diagnostic imaging. Jpn Dent Sci Rev 2023; 59:329-333. [PMID: 37811196 PMCID: PMC10551806 DOI: 10.1016/j.jdsr.2023.09.004] [Received: 07/09/2023] [Revised: 09/04/2023] [Accepted: 09/25/2023] [Indexed: 10/10/2023]
Abstract
The application of artificial intelligence (AI) based on deep learning in dental diagnostic imaging is increasing. Several popular deep learning tasks have been applied to dental diagnostic images. Classification tasks are used to classify images with and without positive abnormal findings or to evaluate the progress of a lesion based on imaging findings. Region (object) detection and segmentation tasks have been used for tooth identification in panoramic radiographs. This technique is useful for automatically creating a patient's dental chart. Deep learning methods can also be used for detecting and evaluating anatomical structures of interest from images. Furthermore, generative AI based on natural language processing can automatically create written reports from the findings of diagnostic imaging.
19
Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023; 12:6973. [PMID: 38002588 PMCID: PMC10672270 DOI: 10.3390/jcm12226973] [Received: 10/14/2023] [Revised: 11/02/2023] [Accepted: 11/06/2023] [Indexed: 11/26/2023]
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging in its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative in augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, using the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', and also manually searched the reference sections of the included articles. Our search culminated in the retrieval of 121 related articles, which were subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored for both experienced specialists and aspiring residents in the domain of deep learning algorithms in imaging techniques in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos: 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Michail Athanasopoulos: Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Riccardo di Micco: Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis: 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Nicholas S. Mastronikolis: Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Francesk Mulita: Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Georgios-Ioannis Verras: Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Ioannis Maroulis: Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Evangelos Giotakis: 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
20
Achararit P, Manaspon C, Jongwannasiri C, Phattarataratip E, Osathanon T, Sappayatosok K. Artificial Intelligence-Based Diagnosis of Oral Lichen Planus Using Deep Convolutional Neural Networks. Eur J Dent 2023; 17:1275-1282. [PMID: 36669652] [PMCID: PMC10756816] [DOI: 10.1055/s-0042-1760300]
Abstract
OBJECTIVE The aim of this study was to employ artificial intelligence (AI) via convolutional neural networks (CNNs) to separate oral lichen planus (OLP) from non-OLP in biopsy-proven clinical cases. MATERIALS AND METHODS Data comprised clinical photographs of 609 OLP and 480 non-OLP lesions whose diagnoses had been confirmed histopathologically. Fifty-five photographs from the OLP and non-OLP groups were randomly selected for use as the test dataset, while the remainder were used as training and validation datasets. Data augmentation was performed on the training dataset to increase the number and variation of photographs. Performance metrics for the CNN models included accuracy, positive predictive value, negative predictive value, sensitivity, specificity, and F1-score. Gradient-weighted class activation mapping was also used to visualize the important regions associated with the discriminative clinical features on which the model relies. RESULTS All the selected CNN models were able to diagnose OLP and non-OLP lesions from photographs. The performance of the Xception model was significantly higher than that of the other models in terms of overall accuracy and F1-score. CONCLUSIONS CNN models can achieve an accuracy of 82 to 88%; the Xception model performed best on both accuracy and F1-score.
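The Gradient-weighted class activation mapping (Grad-CAM) visualization mentioned in this abstract combines a convolutional layer's activation maps, weighted by the pooled gradients of the class score. A minimal sketch of that combination step (an illustrative reconstruction, not the authors' code; the array shapes and function name are assumptions):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Combine K activation maps (K, H, W) into a (H, W) class heatmap.

    gradients: d(class score)/d(activations), same shape as activations.
    """
    # Channel weights: global average pooling of the gradients.
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Gradient-weighted sum over channels, then ReLU.
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for overlaying on the input photograph.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting heatmap is typically upsampled to the input-image size and overlaid to highlight the lesion regions the classifier relied on.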
Affiliation(s)
- Paniti Achararit: Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Chawan Manaspon: Biomedical Engineering Institute, Chiang Mai University, Chiang Mai, Thailand
- Chavin Jongwannasiri: Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Ekarat Phattarataratip: Department of Oral Pathology, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- Thanaphum Osathanon: Dental Stem Cell Biology Research Unit, Department of Anatomy, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
21
Rinneburger M, Carolus H, Iuga AI, Weisthoff M, Lennartz S, Große Hokamp N, Caldeira L, Shahzad R, Maintz D, Laqua FC, Baeßler B, Klinder T, Persigehl T. Automated localization and segmentation of cervical lymph nodes on contrast-enhanced CT using a 3D foveal fully convolutional neural network. Eur Radiol Exp 2023; 7:45. [PMID: 37505296] [PMCID: PMC10382409] [DOI: 10.1186/s41747-023-00360-x]
Abstract
BACKGROUND In the management of cancer patients, determination of TNM status is essential for treatment decision-making and is therefore closely linked to clinical outcome and survival. Here, we developed a tool for automatic three-dimensional (3D) localization and segmentation of cervical lymph nodes (LNs) on contrast-enhanced computed tomography (CECT) examinations. METHODS In this IRB-approved retrospective single-center study, 187 CECT examinations of the head and neck region from patients with various primary diseases were collected from our local database, and 3656 LNs (19.5 ± 14.9 LNs/CECT, mean ± standard deviation) with a short-axis diameter (SAD) ≥ 5 mm were segmented manually by expert physicians. With these data, we trained an independent fully convolutional neural network based on 3D foveal patches. Testing was performed on 30 independent CECTs with 925 segmented LNs with an SAD ≥ 5 mm. RESULTS In total, 4,581 LNs were segmented in 217 CECTs. The model achieved an average localization rate (LR), i.e., percentage of localized LNs/CECT, of 78.0% in the validation dataset. In the test dataset, the average LR was 81.1% with a mean Dice coefficient of 0.71. For enlarged LNs with an SAD ≥ 10 mm, the LR was 96.2%. In the test dataset, the false-positive rate was 2.4 LNs/CECT. CONCLUSIONS Our trained AI model demonstrated a good overall performance in the consistent automatic localization and 3D segmentation of physiological and metastatic cervical LNs with an SAD ≥ 5 mm on CECTs. This could aid clinical localization and automatic 3D segmentation, which can benefit clinical care and radiomics research. RELEVANCE STATEMENT Our AI model is a time-saving tool for 3D segmentation of cervical lymph nodes on contrast-enhanced CT scans and serves as a solid base for N staging in clinical practice and further radiomics research. KEY POINTS:
• Determination of N status in TNM staging is essential for therapy planning in oncology.
• Segmenting cervical lymph nodes manually is highly time-consuming in clinical practice.
• Our model provides a robust, automated 3D segmentation of cervical lymph nodes.
• It achieves a high accuracy for localization, especially of enlarged lymph nodes.
• These segmentations should assist clinical care and radiomics research.
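The mean Dice coefficient reported above measures voxel overlap between the predicted and reference segmentations, Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary voxel masks (illustrative only, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |pred AND truth| / (|pred| + |truth|) for boolean voxel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D example: two two-voxel masks sharing one voxel.
pred = np.zeros((4, 4, 4)); pred[1, 1, 1] = pred[1, 1, 2] = 1
truth = np.zeros((4, 4, 4)); truth[1, 1, 2] = truth[1, 1, 3] = 1
print(dice_coefficient(pred, truth))  # 0.5
```

A Dice of 0.71, as reported, thus indicates that roughly 71% of the combined predicted and reference node volume overlaps.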
Affiliation(s)
- Miriam Rinneburger: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Andra-Iza Iuga: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Mathilda Weisthoff: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Simon Lennartz: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Nils Große Hokamp: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Liliana Caldeira: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Rahil Shahzad: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Innovative Technologies, Philips Healthcare, Aachen, Germany
- David Maintz: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Fabian Christopher Laqua: Institute of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Bettina Baeßler: Institute of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Thorsten Persigehl: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
22
Bhattacharya K, Mahajan A, Vaish R, Rane S, Shukla S, D'Cruz AK. Imaging of Neck Nodes in Head and Neck Cancers: A Comprehensive Update. Clin Oncol (R Coll Radiol) 2023; 35:429-445. [PMID: 37061456] [DOI: 10.1016/j.clon.2023.03.012]
Abstract
Cervical lymph node metastases from head and neck squamous cell cancers significantly reduce disease-free survival and worsen overall prognosis and, hence, deserve more aggressive management and follow-up. As per the eighth edition of the American Joint Committee on Cancer staging manual, extranodal extension, especially in human papillomavirus-negative cancers, has been incorporated in staging as it is important in deciding management and significantly impacts the outcome of head and neck squamous cell cancer. Lymph node imaging with various radiological modalities, including ultrasound, computed tomography and magnetic resonance imaging, has been widely used, not only to demonstrate nodal involvement but also for guided histopathological evaluation and therapeutic intervention. Computed tomography and magnetic resonance imaging, together with positron emission tomography, are used widely for the follow-up of treated patients. Finally, there is an emerging role for artificial intelligence in neck node imaging that has shown promising results, increasing the accuracy of detection of nodal involvement, especially normal-appearing nodes. The aim of this review is to provide a comprehensive overview of the diagnosis and management of involved neck nodes with a focus on sentinel node anatomy, pathogenesis, imaging correlates (including radiogenomics and artificial intelligence) and the role of image-guided interventions.
Affiliation(s)
- K Bhattacharya: Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- A Mahajan: The Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, UK
- R Vaish: Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- S Rane: Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- S Shukla: Homi Bhabha Cancer Hospital, Varanasi, Uttar Pradesh, India
- A K D'Cruz: Apollo Hospitals, India; Union for International Cancer Control (UICC), Geneva, Switzerland; Foundation of Head Neck Oncology, India
23
Ananthakrishnan B, Shaik A, Kumar S, Narendran SO, Mattu K, Kavitha MS. Automated Detection and Classification of Oral Squamous Cell Carcinoma Using Deep Neural Networks. Diagnostics (Basel) 2023; 13:918. [PMID: 36900062] [PMCID: PMC10001077] [DOI: 10.3390/diagnostics13050918]
Abstract
This work aims to classify normal and carcinogenic cells in the oral cavity using two different approaches, with an eye towards achieving high accuracy. The first approach extracts local binary patterns and histogram-derived metrics from the dataset and feeds them to several machine-learning models. The second approach uses a combination of neural networks as a backbone feature extractor and a random forest for classification. The results show that information can be learnt effectively from limited training images using these approaches. Some prior approaches use deep learning algorithms to generate a bounding box that can locate the suspected lesion; others use handcrafted textural feature extraction techniques and feed the resultant feature vectors to a classification model. The proposed method extracts image features using pre-trained convolutional neural networks (CNNs) and trains a classification model on the resulting feature vectors. By using the features extracted from a pre-trained CNN model to train a random forest, the problem of requiring a large amount of data to train deep learning models is bypassed. The study selected a dataset consisting of 1224 images, which were divided into two sets with varying resolutions. The performance of the model is calculated based on accuracy, specificity, sensitivity, and the area under the curve (AUC). The proposed work produces a highest test accuracy of 96.94% and an AUC of 0.976 using 696 images at 400× magnification, and a highest test accuracy of 99.65% and an AUC of 0.9983 using only 528 images at 100× magnification.
Affiliation(s)
- Balasundaram Ananthakrishnan: Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India; School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Ayesha Shaik: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Soham Kumar: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- S. O. Narendran: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Khushi Mattu: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Muthu Subash Kavitha: School of Information and Data Sciences, Nagasaki University, Nagasaki 852-8521, Japan
24
Hung KF, Ai QYH, Wong LM, Yeung AWK, Li DTS, Leung YY. Current Applications of Deep Learning and Radiomics on CT and CBCT for Maxillofacial Diseases. Diagnostics (Basel) 2022; 13:110. [PMID: 36611402] [PMCID: PMC9818323] [DOI: 10.3390/diagnostics13010110]
Abstract
The increasing use of computed tomography (CT) and cone beam computed tomography (CBCT) in oral and maxillofacial imaging has driven the development of deep learning and radiomics applications to assist clinicians in early diagnosis, accurate prognosis prediction, and efficient treatment planning of maxillofacial diseases. This narrative review aimed to provide an up-to-date overview of the current applications of deep learning and radiomics on CT and CBCT for the diagnosis and management of maxillofacial diseases. Based on current evidence, a wide range of deep learning models on CT/CBCT images have been developed for automatic diagnosis, segmentation, and classification of jaw cysts and tumors, cervical lymph node metastasis, salivary gland diseases, temporomandibular joint (TMJ) disorders, maxillary sinus pathologies, mandibular fractures, and dentomaxillofacial deformities, while CT-/CBCT-derived radiomics applications have mainly focused on occult lymph node metastasis in patients with oral cancer, malignant salivary gland tumors, and TMJ osteoarthritis. Most of these models showed high performance, and some of them even outperformed human experts. The models with performance on par with human experts have the potential to serve as clinically practicable tools to achieve the earliest possible diagnosis and treatment, leading to a more precise and personalized approach for the management of maxillofacial diseases. Challenges and issues, including the lack of generalizability and explainability of deep learning models and the uncertainty in the reproducibility and stability of radiomic features, should be overcome to gain the trust of patients, providers, and healthcare organizers for daily clinical use of these models.
Affiliation(s)
- Kuo Feng Hung: Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Qi Yong H. Ai: Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Lun M. Wong: Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung: Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Dion Tik Shun Li: Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Yiu Yan Leung: Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
25
Zhang Y, Yu D, Yang Q, Li W. The Diagnostic Efficacy of Preoperative Ultrasound and/or Computed Tomography in Detecting Lymph Node Metastases: A Single-center Retrospective Analysis of Patients with Squamous Cell Carcinoma of the Head and Neck. Oral Surg Oral Med Oral Pathol Oral Radiol 2022; 134:386-396. [DOI: 10.1016/j.oooo.2022.05.002]
26
Ariji Y, Kise Y, Fukuda M, Kuwada C, Ariji E. Segmentation of metastatic cervical lymph nodes from CT images of oral cancers using deep learning technology. Dentomaxillofac Radiol 2022; 51:20210515. [PMID: 35113725] [PMCID: PMC9499194] [DOI: 10.1259/dmfr.20210515]
Abstract
OBJECTIVE The purpose of this study was to establish a deep learning model for segmenting the cervical lymph nodes of oral cancer patients and diagnosing metastatic versus non-metastatic lymph nodes from contrast-enhanced computed tomography (CT) images. METHODS CT images of 158 metastatic and 514 non-metastatic lymph nodes were prepared and assigned to training, validation, and test datasets. For the training and validation datasets, images with the lymph nodes colored were prepared together with the original images. Learning was performed for 200 epochs using the U-Net neural network. Performance in segmenting lymph nodes and in diagnosing metastasis was evaluated. RESULTS Segmentation of metastatic lymph nodes showed a recall of 0.742, a precision of 0.942, and an F1 score of 0.831. The recall for metastatic lymph nodes at level II was 0.875, the highest value. The diagnostic performance in identifying metastasis showed an area under the curve (AUC) of 0.950, significantly higher than that of radiologists (0.896). CONCLUSIONS A deep learning model was created to automatically segment the cervical lymph nodes of oral squamous cell carcinomas. Segmentation performance can still be improved, but the segmented lymph nodes were diagnosed for metastasis more accurately than by human evaluation.
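As a quick arithmetic check, the F1 score reported above is the harmonic mean of the reported precision and recall:

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Reported segmentation results for metastatic nodes:
# 2 * 0.942 * 0.742 / (0.942 + 0.742) ≈ 0.830, matching the reported
# F1 of 0.831 up to rounding of the published precision and recall.
print(f1_score(0.942, 0.742))
```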
Affiliation(s)
- Yoshiko Ariji: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan; Department of Oral Radiology, Osaka Dental University, Osaka, Japan
- Yoshitaka Kise: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Chiaki Kuwada: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Eiichiro Ariji: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
27
Tang H, Li G, Liu C, Huang D, Zhang X, Qiu Y, Liu Y. Diagnosis of lymph node metastasis in head and neck squamous cell carcinoma using deep learning. Laryngoscope Investig Otolaryngol 2022; 7:161-169. [PMID: 35155794] [PMCID: PMC8823170] [DOI: 10.1002/lio2.742]
Abstract
BACKGROUND To build an automatic pathological diagnosis model to assess the lymph node metastasis status of head and neck squamous cell carcinoma (HNSCC) based on deep learning algorithms. STUDY DESIGN A retrospective study. METHODS A diagnostic model integrating two-step deep learning networks was trained to analyze the metastasis status in 85 images of HNSCC lymph nodes. The diagnostic model was tested in a test set of 21 images with metastasis and 29 images without metastasis. All images were scanned from HNSCC lymph node sections stained with hematoxylin-eosin (HE). RESULTS In the test set, the overall accuracy, sensitivity, and specificity of the diagnostic model reached 86%, 100%, and 75.9%, respectively. CONCLUSIONS Our two-step diagnostic model can be used to automatically assess the status of HNSCC lymph node metastasis with high sensitivity. LEVEL OF EVIDENCE NA.
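The test-set figures above are internally consistent: with 21 metastatic and 29 non-metastatic images, 100% sensitivity implies 21 true positives and 0 false negatives, and 75.9% specificity corresponds to 22 true negatives and 7 false positives (counts inferred here for illustration, not stated in the abstract):

```python
# Inferred confusion-matrix counts for the 50-image test set.
tp, fn = 21, 0   # all 21 metastatic images detected (sensitivity 100%)
tn, fp = 22, 7   # 22 of 29 non-metastatic images correctly rejected

sensitivity = tp / (tp + fn)                # 1.0
specificity = tn / (tn + fp)                # 22/29 ≈ 0.759
accuracy = (tp + tn) / (tp + fn + tn + fp)  # 43/50 = 0.86
print(sensitivity, round(specificity, 3), accuracy)
```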
Affiliation(s)
- Haosheng Tang: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China; Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Guo Li: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China; Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China; National Clinical Research Center for Geriatric Disorders (Xiangya Hospital), Changsha, Hunan, China
- Chao Liu: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China; Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Donghai Huang: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China; Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Xin Zhang: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China; Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China
- Yuanzheng Qiu: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China; Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China; National Clinical Research Center for Geriatric Disorders (Xiangya Hospital), Changsha, Hunan, China
- Yong Liu: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China; Otolaryngology Major Disease Research Key Laboratory of Hunan Province, Changsha, Hunan, China; Clinical Research Center for Laryngopharyngeal and Voice Disorders in Hunan Province, Changsha, Hunan, China; National Clinical Research Center for Geriatric Disorders (Xiangya Hospital), Changsha, Hunan, China
28
Agarwal P, Yadav A, Mathur P, Pal V, Chakrabarty A. BID-Net: An Automated System for Bone Invasion Detection Occurring at Stage T4 in Oral Squamous Carcinoma Using Deep Learning. Comput Intell Neurosci 2022; 2022:4357088. [PMID: 35140773] [PMCID: PMC8818426] [DOI: 10.1155/2022/4357088]
Abstract
Detection of the presence or absence of bone invasion by the tumor in oral squamous cell carcinoma (OSCC) patients is very significant for treatment planning and surgical resection. For bone invasion detection, CT imaging is the preferred choice of radiologists because of its high sensitivity and specificity. In the present work, a deep learning-based model, BID-Net, is proposed to automate bone invasion detection. BID-Net performs binary classification of CT scan images into images with and without bone invasion. The proposed BID-Net model achieved an outstanding accuracy of 93.62%. The model was also compared with six transfer learning models (VGG16, VGG19, ResNet-50, MobileNetV2, DenseNet-121, and ResNet-101), and BID-Net outperformed all of them. As no previous studies exist on bone invasion detection using deep learning models, the results of the proposed model were validated by expert practitioner radiologists of S.M.S. Hospital, Jaipur, India.
Affiliation(s)
- Vipin Pal: Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Amitabha Chakrabarty: Department of Computer Science and Engineering, Brac University, Dhaka, Bangladesh
29
Alabi RO, Bello IO, Youssef O, Elmusrati M, Mäkitie AA, Almangush A. Utilizing Deep Machine Learning for Prognostication of Oral Squamous Cell Carcinoma: A Systematic Review. Front Oral Health 2022; 2:686863. [PMID: 35048032] [PMCID: PMC8757862] [DOI: 10.3389/froh.2021.686863]
Abstract
The application of deep machine learning, a subfield of artificial intelligence, has become a growing area of interest in predictive medicine in recent years. The deep machine learning approach has been used to analyze imaging and radiomics and to develop models that can assist clinicians in making informed, guided decisions to improve patient outcomes. Improved prognostication of oral squamous cell carcinoma (OSCC) will greatly benefit the clinical management of oral cancer patients. This review examines recent developments in the field of deep learning for OSCC prognostication. The search was carried out using five databases: PubMed, Scopus, OvidMedline, Web of Science, and the Institute of Electrical and Electronics Engineers (IEEE), covering the period from inception until 15 May 2021. Thirty-four studies have used deep machine learning for the prognostication of OSCC, the majority using a convolutional neural network (CNN). This review showed that a range of novel imaging modalities, such as computed tomography (or enhanced computed tomography) images and spectra data, have shown significant applicability to improving OSCC outcomes. The average specificity, sensitivity, area under the receiver operating characteristic curve (AUC), and accuracy for studies that used spectra data were 0.97, 0.99, 0.96, and 96.6%, respectively. Conversely, the corresponding average values for computed tomography images were 0.84, 0.81, 0.967, and 81.8%, respectively. Ethical concerns such as privacy and confidentiality, data and model bias, peer disagreement, responsibility gaps, the patient-clinician relationship, and patient autonomy have limited the widespread adoption of these models in daily clinical practice. The accumulated evidence indicates that deep machine learning models have great potential in the prognostication of OSCC. This approach offers a more generic model that requires less data engineering, with improved accuracy.
Affiliation(s)
- Rasheed Omobolaji Alabi
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, Vaasa, Finland; Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Ibrahim O Bello
- Department of Oral Medicine and Diagnostic Science, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
- Omar Youssef
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Pathology, University of Helsinki, Helsinki, Finland
- Mohammed Elmusrati
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, Vaasa, Finland
- Antti A Mäkitie
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Otorhinolaryngology - Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland; Division of Ear, Nose and Throat Diseases, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
- Alhadi Almangush
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Pathology, University of Helsinki, Helsinki, Finland; Institute of Biomedicine, Pathology, University of Turku, Turku, Finland; Faculty of Dentistry, University of Misurata, Misurata, Libya
30
Alabi RO, Almangush A, Elmusrati M, Mäkitie AA. Deep Machine Learning for Oral Cancer: From Precise Diagnosis to Precision Medicine. Front Oral Health 2022; 2:794248. [PMID: 35088057 PMCID: PMC8786902 DOI: 10.3389/froh.2021.794248] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 12/13/2021] [Indexed: 12/21/2022] Open
Abstract
Oral squamous cell carcinoma (OSCC) is one of the most prevalent cancers worldwide and its incidence is on the rise in many populations. The high incidence rate, late diagnosis, and improper treatment planning remain significant concerns. Diagnosis at an early stage is important for better prognosis, treatment, and survival. Despite recent improvements in the understanding of the molecular mechanisms, late diagnosis and the approach toward precision medicine for OSCC patients remain a challenge. To enhance precision medicine, deep machine learning techniques have been touted as a means to enhance early detection and, consequently, to reduce cancer-specific mortality and morbidity. These techniques have been reported to have made significant progress in the extraction and analysis of vital information from medical imaging in recent years. Therefore, they have the potential to assist in the early-stage detection of oral squamous cell carcinoma. Furthermore, automated image analysis can assist pathologists and clinicians in making informed decisions regarding cancer patients. This article discusses the technical knowledge and algorithms of deep learning for OSCC. It examines the application of deep learning technology in cancer detection, image classification, segmentation and synthesis, and treatment planning. Finally, we discuss how this technique can assist in precision medicine and the future perspective of deep learning technology in oral squamous cell carcinoma.
Affiliation(s)
- Rasheed Omobolaji Alabi
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, Vaasa, Finland
- Alhadi Almangush
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Pathology, University of Helsinki, Helsinki, Finland
- Institute of Biomedicine, Pathology, University of Turku, Turku, Finland
- Mohammed Elmusrati
- Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, Vaasa, Finland
- Antti A. Mäkitie
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Otorhinolaryngology - Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Division of Ear, Nose and Throat Diseases, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
31
Kise Y, Ariji Y, Kuwada C, Fukuda M, Ariji E. Effect of deep transfer learning with a different kind of lesion on classification performance of pre-trained model: Verification with radiolucent lesions on panoramic radiographs. Imaging Sci Dent 2022; 53:27-34. [PMID: 37006785 PMCID: PMC10060760 DOI: 10.5624/isd.20220133] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 10/28/2022] [Accepted: 10/31/2022] [Indexed: 12/02/2022] Open
Abstract
Purpose The aim of this study was to clarify the influence of training with a different kind of lesion on the performance of a target model. Materials and Methods A total of 310 patients (211 men, 99 women; average age, 47.9±16.1 years) were selected and their panoramic images were used in this study. A source model was created using panoramic radiographs that included mandibular radiolucent cyst-like lesions (radicular cyst, dentigerous cyst, odontogenic keratocyst, and ameloblastoma); this model was then transferred and trained on images of Stafne's bone cavity. The learning models were created using a customized DetectNet built into DIGITS version 5.0 (NVIDIA, Santa Clara, CA). Two machines (Machines A and B) with identical specifications were used to simulate transfer learning: the source model was created on Machine A from the data consisting of ameloblastoma, odontogenic keratocyst, dentigerous cyst, and radicular cyst, then transferred to Machine B and trained on additional Stafne's bone cavity data to create the target models. To investigate the effect of the number of cases, several target models were created with different numbers of Stafne's bone cavity cases. Results When Stafne's bone cavity data were added to the training, both the detection and classification performance for this pathology improved. Even for lesions other than Stafne's bone cavity, detection sensitivity tended to increase as the number of Stafne's bone cavity cases increased. Conclusion This study showed that using a different kind of lesion for transfer learning can improve the performance of a model.
Affiliation(s)
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, Osaka, Japan
- Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
32
Chu CS, Lee NP, Ho JWK, Choi SW, Thomson PJ. Deep Learning for Clinical Image Analyses in Oral Squamous Cell Carcinoma: A Review. JAMA Otolaryngol Head Neck Surg 2021; 147:893-900. [PMID: 34410314 DOI: 10.1001/jamaoto.2021.2028] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Importance Oral squamous cell carcinoma (SCC) is a lethal malignant neoplasm with a high rate of tumor metastasis and recurrence. Accurate diagnosis, prognosis prediction, and metastasis detection can improve patient outcomes. Deep learning for clinical image analysis can be used for diagnosis and prognosis in cancers, including oral SCC; its use in these areas can improve patient care and outcomes. Observations This review summarizes the use of deep learning models for diagnosis, prognosis, and metastasis detection in oral SCC by analyzing information from pathological and radiographic images. Specifically, deep learning has been used to classify different cell types, to differentiate cancer cells from nonmalignant cells, and to identify oral SCC from other cancer types. It can also be used to predict survival, to differentiate between tumor grades, and to detect lymph node metastasis. In general, these deep learning models achieve accuracies ranging from 77.89% to 97.51% with pathological images and from 76% to 94.2% with radiographic images. The review also discusses the importance of using good-quality clinical images in sufficient quantity for model performance. Conclusions and Relevance The application of pathological and radiographic images in deep learning models for the diagnosis and prognosis of oral SCC has been explored, and most studies report good classification accuracy. The successful use of deep learning in these areas has high clinical translatability for the improvement of patient care.
Affiliation(s)
- Chui Shan Chu
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Nikki P Lee
- Department of Surgery, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Joshua W K Ho
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China; Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, China
- Siu-Wai Choi
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Peter J Thomson
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China