1
Lee SW, Huz K, Gorelick K, Li J, Bina T, Matsumura S, Yin N, Zhang N, Anang YNA, Sachadava S, Servin-DeMarrais HI, McMahon DJ, Lu HH, Yin MT, Wadhwa S. Evaluation by dental professionals of an artificial intelligence-based application to measure alveolar bone loss. BMC Oral Health 2025; 25:329. [PMID: 40025477; PMCID: PMC11872301; DOI: 10.1186/s12903-025-05677-0]
Abstract
BACKGROUND Several commercial programs incorporate artificial intelligence in diagnosis, but very few dental professionals have been surveyed regarding its acceptability and usability. Furthermore, few have explored how these advances might be incorporated into routine practice. METHODS Our team developed and implemented a deep learning (DL) model employing semantic segmentation neural networks and object detection networks to precisely identify alveolar bone crestal levels (ABCLs) and cemento-enamel junctions (CEJs) to measure change in alveolar crestal height (ACH). The model was trained and validated using a dataset of 550 bitewing radiographs curated by an oral radiologist, setting a gold standard for ACH measurements. A twenty-question survey was created to compare the accuracy and efficiency of manual X-ray examination versus the application and to assess the acceptability and usability of the application. RESULTS In total, 56 different dental professionals classified severe (ACH > 5 mm) vs. non-severe (ACH ≤ 5 mm) periodontal bone loss on 35 calculable ACH measures. Dental professionals accurately identified between 35% and 87% of teeth with severe periodontal disease, whereas the artificial intelligence (AI) application achieved an 82-87% accuracy rate. Among the 65 participants who completed the acceptability and usability survey, more than half (52%) were from an academic setting. Only 21% of participants reported that they already used automated or AI-based software in their practice to assist in the reading of X-rays. The majority, 57%, stated that they only approximate when measuring bone levels, and only 9% stated that they measure with a ruler. The survey indicated that 84% of participants agreed or strongly agreed with the AI application's measurement of ACH. Furthermore, 56% of participants agreed that AI would be helpful in their professional setting.
CONCLUSION Overall, the study demonstrates that an AI application for detecting alveolar bone has high acceptability among dental professionals and may provide benefits in time saving and increased clinical accuracy.
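The severe/non-severe split above hinges on a single 5 mm ACH cutoff, which is straightforward to operationalize. A minimal sketch (the measurements below are hypothetical, not the study's data or code) of scoring a rater's severity calls against a gold standard:

```python
# Minimal sketch (not the authors' code): classify alveolar crestal
# height (ACH) as severe (> 5 mm) vs. non-severe (<= 5 mm) and score
# a rater's calls against a gold standard.
SEVERE_THRESHOLD_MM = 5.0  # cutoff used in the study

def classify_severe(ach_mm):
    """True if ACH exceeds the 5 mm severity threshold."""
    return ach_mm > SEVERE_THRESHOLD_MM

def accuracy(rater_ach, gold_ach):
    """Fraction of teeth where the rater's severity call matches the gold standard."""
    matches = [classify_severe(a) == classify_severe(b)
               for a, b in zip(rater_ach, gold_ach)]
    return sum(matches) / len(matches)

# Hypothetical measurements (mm) for five teeth
gold  = [3.2, 5.6, 7.1, 4.9, 5.1]
rater = [3.0, 5.4, 6.8, 5.3, 4.8]
print(accuracy(rater, gold))  # 3 of 5 severity calls agree -> 0.6
```

Note that values near the cutoff (4.9 vs. 5.3, 5.1 vs. 4.8) flip the classification even when the raw measurements differ by only a few tenths of a millimeter, which is one reason approximate visual reading underperforms.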
Affiliation(s)
- Sang Won Lee
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Kateryna Huz
- Division of Orthodontics, Columbia University College of Dental Medicine, New York, NY, 10032, USA
- Kayla Gorelick
- Division of Orthodontics, Columbia University College of Dental Medicine, New York, NY, 10032, USA
- Jackie Li
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Thomas Bina
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Satoko Matsumura
- Division of Oral & Maxillofacial Radiology, Columbia University College of Dental Medicine, New York, NY, 10032, USA
- Noah Yin
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Nicholas Zhang
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Sanam Sachadava
- Division of Orthodontics, Columbia University College of Dental Medicine, New York, NY, 10032, USA
- Donald J McMahon
- Division of Oral & Maxillofacial Radiology, Columbia University College of Dental Medicine, New York, NY, 10032, USA
- Helen H Lu
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Michael T Yin
- Vagelos College of Physicians and Surgeons, Division of Infectious Diseases, Columbia University, New York, NY, 10032, USA
- Sunil Wadhwa
- Division of Orthodontics, Columbia University College of Dental Medicine, New York, NY, 10032, USA
2
Choi YH, Lee SW, Ahn JH, Kim GJ, Kang MH, Kim YC. Hallux valgus and pes planus: Correlation analysis using deep learning-assisted radiographic angle measurements. Foot Ankle Surg 2025; 31:170-176. [PMID: 39327104; DOI: 10.1016/j.fas.2024.09.003]
Abstract
BACKGROUND The relationship between hallux valgus (HV) and pes planus remains unresolved. This study aims to determine the correlation between HV and pes planus using a deep learning (DL) model to measure radiographic angle parameters. METHODS In total, radiographs of 212 feet detectable by the DL model were analyzed. HV was evaluated using the hallux valgus and intermetatarsal angles, while pes planus was assessed using the lateral talo-first metatarsal (Meary's) and calcaneal pitch angles. Correlation analyses were performed for each DL model-measured angle parameter. We investigated whether pes planus worsened with increasing severity of HV and vice versa. RESULTS All parameters were significantly correlated with each other. Pes planus worsened with increasing severity of HV, and as the severity of pes planus increased, HV also worsened. CONCLUSION Utilizing the DL model-assisted radiographic angle measurements, this study established a significant correlation between HV and pes planus. LEVEL OF EVIDENCE III.
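The correlation analysis described above pairs each DL-measured angle with another and tests for association. A hedged sketch of that step with hypothetical angle values (not the study's data, which used 212 feet and four angle parameters):

```python
# Illustrative sketch only: Pearson correlation between two radiographic
# angle parameters, as in the study's correlation analysis.
# The angle values below are hypothetical, not the study's measurements.
import numpy as np

hallux_valgus_angle = np.array([12.0, 18.5, 25.0, 31.2, 40.1])  # degrees
meary_angle         = np.array([2.1, 4.8, 7.9, 10.5, 14.2])     # degrees

# np.corrcoef returns the 2x2 correlation matrix; [0, 1] is r
r = np.corrcoef(hallux_valgus_angle, meary_angle)[0, 1]
print(f"Pearson r = {r:.3f}")  # strongly positive for these increasing values
```

A positive r here would mirror the study's finding that hallux valgus and pes planus severity worsen together.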
Affiliation(s)
- Youn-Ho Choi
- Department of Orthopaedic Surgery, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Si-Wook Lee
- Department of Orthopaedic Surgery, Keimyung University School of Medicine, Daegu, Republic of Korea
- Jae Hoon Ahn
- Department of Orthopaedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Gyu Jin Kim
- Department of Orthopaedic Surgery, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Mu Hyun Kang
- Department of Orthopaedic Surgery, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yoon-Chung Kim
- Department of Orthopaedic Surgery, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
3
Eggmann F, Blatz M. Recent Advances in Intraoral Scanners. J Dent Res 2024; 103:1349-1357. [PMID: 39382136; PMCID: PMC11633065; DOI: 10.1177/00220345241271937]
Abstract
Intraoral scanners (IOSs) have emerged as a cornerstone technology in digital dentistry. This article examines the recent advancements and multifaceted applications of IOSs, highlighting their benefits in patient care and addressing their current limitations. The IOS market has seen a competitive surge. Modern IOSs, featuring continuous image capture and advanced software for seamless image stitching, have made the scanning process more efficient. Patient comfort with IOS procedures is favorable, mitigating the discomfort associated with conventional impression taking. There has been a shift toward open data interfaces, notably enhancing interoperability. However, the integration of IOSs into large dental institutions is slow, facing challenges such as compatibility with existing health record systems and extensive data storage management. IOSs now extend beyond their use in computer-aided design and manufacturing, with software solutions transforming them into platforms for diagnostics, patient communication, and treatment planning. Several IOSs are equipped with tools for caries detection, employing fluorescence technologies or near-infrared imaging to identify carious lesions. IOSs facilitate quantitative monitoring of tooth wear and soft-tissue dimensions. For precise tooth segmentation in intraoral scans, essential for orthodontic applications, developers are leveraging innovative deep neural network-based approaches. The clinical performance of restorations fabricated based on intraoral scans has proven to be comparable to those obtained using conventional impressions, substantiating the reliability of IOSs in restorative dentistry. In oral and maxillofacial surgery, IOSs enhance airway safety during impression taking and aid in treating conditions such as cleft lip and palate, among other congenital craniofacial disorders, across diverse age groups. 
While IOSs have improved various aspects of dental care, ongoing enhancements in usability, diagnostic accuracy, and image segmentation are crucial to exploit the potential of this technology in optimizing patient care.
Affiliation(s)
- F. Eggmann
- Department of Periodontology, Endodontology, and Cariology, University Center for Dental Medicine Basel UZB, University of Basel, Basel, Switzerland
- Department of Preventive and Restorative Sciences, Robert Schattner Center, Penn Dental Medicine, University of Pennsylvania, Philadelphia, PA, USA
- M.B. Blatz
- Department of Preventive and Restorative Sciences, Robert Schattner Center, Penn Dental Medicine, University of Pennsylvania, Philadelphia, PA, USA
4
Adnan N, Faizan Ahmed SM, Das JK, Aijaz S, Sukhia RH, Hoodbhoy Z, Umer F. Developing an AI-based application for caries index detection on intraoral photographs. Sci Rep 2024; 14:26752. [PMID: 39500993; PMCID: PMC11538444; DOI: 10.1038/s41598-024-78184-x]
Abstract
This study evaluates the effectiveness of an Artificial Intelligence (AI)-based smartphone application designed for decay detection on intraoral photographs, comparing its performance to that of junior dentists. Conducted at The Aga Khan University Hospital, Karachi, Pakistan, this study utilized a dataset comprising 7,465 intraoral images, including both primary and secondary dentitions. These images were meticulously annotated by two experienced dentists and further verified by senior dentists. A YOLOv5s model was trained on this dataset and integrated into a smartphone application, while a Detection Transformer was also fine-tuned for comparative purposes. Explainable AI techniques were employed to assess the AI's decision-making processes. A sample of 70 photographs was used to directly compare the application's performance with that of junior dentists. Results showed that the YOLOv5s-based smartphone application achieved a precision of 90.7%, sensitivity of 85.6%, and an F1 score of 88.0% in detecting dental decay. In contrast, junior dentists achieved 83.3% precision, 64.1% sensitivity, and an F1 score of 72.4%. The study concludes that the YOLOv5s algorithm effectively detects dental decay on intraoral photographs and performs comparably to junior dentists. This application holds potential for aiding in the evaluation of the caries index within populations, thus contributing to efforts aimed at reducing the disease burden at the community level.
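The F1 scores above are the harmonic mean of precision and sensitivity (recall), which can be checked directly from the reported figures (small rounding differences are expected, since the inputs are themselves rounded):

```python
# Sketch (not part of the study's code): F1 score from precision and
# sensitivity (recall), recomputed from the reported figures.
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# YOLOv5s application: 90.7% precision, 85.6% sensitivity
print(round(100 * f1_score(0.907, 0.856), 1))  # 88.1, ~the reported 88.0%
# Junior dentists: 83.3% precision, 64.1% sensitivity
print(round(100 * f1_score(0.833, 0.641), 1))  # 72.4, matching the report
```

The harmonic mean penalizes the junior dentists' low sensitivity more heavily than an arithmetic average would, which is why their F1 trails the application's despite comparable precision.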
Affiliation(s)
- Niha Adnan
- Department of Surgery, Aga Khan University Hospital, Karachi, Pakistan
- MeDenTec, Karachi, Pakistan
- Sehrish Aijaz
- Dow University of Health Sciences, Karachi, Pakistan
- Fahad Umer
- Department of Surgery, Aga Khan University Hospital, Karachi, Pakistan
- MeDenTec, Karachi, Pakistan
5
Alfadley A, Shujaat S, Jamleh A, Riaz M, Aboalela AA, Ma H, Orhan K. Progress of Artificial Intelligence-Driven Solutions for Automated Segmentation of Dental Pulp Space on Cone-Beam Computed Tomography Images: A Systematic Review. J Endod 2024; 50:1221-1232. [PMID: 38821262; DOI: 10.1016/j.joen.2024.05.012]
Abstract
INTRODUCTION Automated segmentation of 3-dimensional pulp space on cone-beam computed tomography images presents a significant opportunity for enhancing diagnosis, treatment planning, and clinical education in endodontics. The aim of this systematic review was to investigate the performance of artificial intelligence-driven automated pulp space segmentation on cone-beam computed tomography images. METHODS A comprehensive electronic search was performed using PubMed, Web of Science, and Cochrane databases, up until February 2024. Two independent reviewers participated in the selection of studies, data extraction, and evaluation of the included studies. Any disagreements were resolved by a third reviewer. The Quality Assessment of Diagnostic Accuracy Studies-2 tool was used to assess the risk of bias. RESULTS Thirteen studies that met the eligibility criteria were included. Most studies demonstrated high accuracy in their respective segmentation methods, although there was some variation across different structures (pulp chamber, root canal) and tooth types (single-rooted, multirooted). Automated segmentation showed slightly superior performance for the pulp chamber compared to the root canal and for single-rooted teeth compared to multirooted ones. Furthermore, segmentation of the second mesiobuccal (MB2) canal also demonstrated high performance. In terms of time efficiency, the minimum time required for segmentation was 13 seconds. CONCLUSION Artificial intelligence-driven models demonstrated outstanding performance in pulp space segmentation. Nevertheless, these findings warrant careful interpretation, and their generalizability is limited due to the potential risk of bias and low evidence level arising from inadequately detailed methodologies and inconsistent assessment techniques. In addition, there is room for further improvement, specifically for root canal segmentation and testing of artificial intelligence performance in artifact-induced images.
Affiliation(s)
- Abdulmohsen Alfadley
- Department of Restorative and Prosthetic Dental Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Sohaib Shujaat
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia; OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Ahmed Jamleh
- Department of Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Marryam Riaz
- Department of Physiology, Azra Naheed Dental College, Superior University, Lahore, Pakistan
- Ali Anwar Aboalela
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Hongyang Ma
- Second Dental Center, School of Stomatology, Peking University
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
6
Takeya A, Watanabe K, Haga A. Fine structural human phantom in dentistry and instance tooth segmentation. Sci Rep 2024; 14:12630. [PMID: 38824210; PMCID: PMC11144222; DOI: 10.1038/s41598-024-63319-x]
Abstract
In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. This research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as varying the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. The deep-learning (DL) based tooth segmentation model was trained using the generated datasets. The results demonstrate an agreement with manual contouring when applied to clinical CBCT data. Specifically, the Dice similarity coefficient exceeded 0.87, indicating the robust performance of the developed segmentation model even when virtual imaging was used. The present results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision for enhancing precision and efficiency in dental imaging processes.
Affiliation(s)
- Atsushi Takeya
- Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Keiichiro Watanabe
- Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Akihiro Haga
- Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
7
Zhang HW, Huang DL, Wang YR, Zhong HS, Pang HW. CT radiomics based on different machine learning models for classifying gross tumor volume and normal liver tissue in hepatocellular carcinoma. Cancer Imaging 2024; 24:20. [PMID: 38279133; PMCID: PMC10811872; DOI: 10.1186/s40644-024-00652-4]
Abstract
BACKGROUND & AIMS The present study utilized extracted computed tomography radiomics features to classify the gross tumor volume (GTV) and normal liver tissue in hepatocellular carcinoma by mainstream machine learning methods, aiming to establish an automatic classification model. METHODS We recruited 104 pathologically confirmed hepatocellular carcinoma patients for this study. GTV and normal liver tissue samples were manually segmented into regions of interest and randomly divided into five-fold cross-validation groups. Dimensionality reduction was performed using LASSO regression. Radiomics models were constructed via logistic regression, support vector machine (SVM), random forest, Xgboost, and Adaboost algorithms. The diagnostic efficacy, discrimination, and calibration of the algorithms were verified using area under the receiver operating characteristic curve (AUC) analyses and calibration plot comparison. RESULTS Seven screened radiomics features excelled at distinguishing the gross tumor area. The Xgboost machine learning algorithm had the best discrimination and overall diagnostic performance, with an AUC of 0.9975 [95% confidence interval (CI): 0.9973-0.9978] and a mean Matthews correlation coefficient (MCC) of 0.9369. SVM had the second-best discrimination and diagnostic performance, with an AUC of 0.9846 (95% CI: 0.9835-0.9857), a mean MCC of 0.9105, and better calibration. All other algorithms also showed an excellent ability to distinguish between gross tumor area and normal liver tissue (mean AUC 0.9825, 0.9861, 0.9727, and 0.9644 for Adaboost, random forest, logistic regression, and naive Bayes, respectively). CONCLUSION CT radiomics based on machine learning algorithms can accurately classify GTV and normal liver tissue, with the Xgboost and SVM algorithms serving as the best complementary algorithms.
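The MCC reported alongside AUC above summarizes a binary confusion matrix in a single value in [-1, 1]. A hedged sketch of its computation with hypothetical counts (not the study's data):

```python
# Sketch only: Matthews correlation coefficient (MCC) from
# confusion-matrix counts. The counts below are hypothetical,
# not taken from the study.
import math

def mcc(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical fold: 48 true positives, 47 true negatives,
# 3 false positives, 2 false negatives
print(round(mcc(tp=48, tn=47, fp=3, fn=2), 3))
```

Unlike raw accuracy, MCC stays informative when the two classes (tumor vs. normal tissue regions) are imbalanced, which is presumably why it is reported alongside AUC.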
Affiliation(s)
- Huai-Wen Zhang
- Department of Radiotherapy, The Second Affiliated Hospital of Nanchang Medical College, Jiangxi Clinical Research Center for Cancer, Jiangxi Cancer Hospital, 330029, Nanchang, China
- Department of Oncology, The Third People's Hospital of Jingdezhen, affiliated to Nanchang Medical College, 333000, Jingdezhen, China
- De-Long Huang
- School of Clinical Medicine, Southwest Medical University, 646000, Luzhou, China
- Yi-Ren Wang
- School of Nursing, Southwest Medical University, 646000, Luzhou, China
- Hao-Shu Zhong
- Department of Hematology, Huashan Hospital, Fudan University, 200040, Shanghai, China
- Hao-Wen Pang
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, 646000, Luzhou, China
8
Moufti MA, Trabulsi N, Ghousheh M, Fattal T, Ashira A, Danishvar S. Developing an Artificial Intelligence Solution to Autosegment the Edentulous Mandibular Bone for Implant Planning. Eur J Dent 2023; 17:1330-1337. [PMID: 37172946; PMCID: PMC10756774; DOI: 10.1055/s-0043-1764425]
Abstract
OBJECTIVE Dental implants are considered the optimum solution to replace missing teeth and restore the mouth's function and aesthetics. Surgical planning of the implant position is critical to avoid damage to vital anatomical structures; however, manual measurement of the edentulous (toothless) bone on cone beam computed tomography (CBCT) images is time-consuming and subject to human error. An automated process has the potential to reduce human errors and save time and costs. This study developed an artificial intelligence (AI) solution to identify and delineate edentulous alveolar bone on CBCT images before implant placement. MATERIALS AND METHODS After obtaining ethical approval, CBCT images were extracted from the database of the University Dental Hospital Sharjah based on predefined selection criteria. Manual segmentation of the edentulous span was done by three operators using ITK-SNAP software. A supervised machine learning approach was undertaken to develop a segmentation model on a "U-Net" convolutional neural network (CNN) in the Medical Open Network for Artificial Intelligence (MONAI) framework. Out of the 43 labeled cases, 33 were utilized to train the model, and 10 were used for testing the model's performance. STATISTICAL ANALYSIS The degree of 3D spatial overlap between the segmentation made by human investigators and the model's segmentation was measured by the dice similarity coefficient (DSC). RESULTS The sample consisted mainly of lower molars and premolars. DSC yielded an average value of 0.89 for training and 0.78 for testing. Unilateral edentulous areas, comprising 75% of the sample, resulted in a better DSC (0.91) than bilateral cases (0.73). CONCLUSION Segmentation of the edentulous spans on CBCT images was successfully conducted by machine learning with good accuracy compared to manual segmentation. Unlike traditional AI object detection models that identify objects present in the image, this model identifies missing objects.
Finally, challenges in data collection and labeling are discussed, together with an outlook at the prospective stages of a larger project for a complete AI solution for automated implant planning.
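The Dice similarity coefficient used for evaluation above measures volumetric overlap between two segmentation masks. A minimal sketch (tiny hypothetical masks, not the authors' pipeline or data):

```python
# Minimal sketch (not the authors' pipeline): Dice similarity
# coefficient (DSC) between a manual and a model segmentation mask.
# The masks below are tiny hypothetical examples.
import numpy as np

def dice(a, b):
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect overlap

manual = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])  # 4 foreground voxels
model  = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])  # 3 foreground voxels
print(dice(manual, model))  # 2*3 / (4 + 3) = 6/7 ≈ 0.857
```

DSC ranges from 0 (no overlap) to 1 (identical masks), so the study's training/testing values of 0.89/0.78 indicate substantial, though imperfect, agreement with manual contours.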
Affiliation(s)
- Mohammad Adel Moufti
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Nuha Trabulsi
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Marah Ghousheh
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Tala Fattal
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Ali Ashira
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
9
Rich JM, Bhardwaj LN, Shah A, Gangal K, Rapaka MS, Oberai AA, Fields BKK, Matcuk GR, Duddalwar VA. Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis. Front Radiol 2023; 3:1241651. [PMID: 37614529; PMCID: PMC10442705; DOI: 10.3389/fradi.2023.1241651]
Abstract
Introduction Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT). Method The literature search of deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review. Results The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was a relatively even distribution of papers studying primary vs. secondary malignancies, as well as utilizing 3-dimensional vs. 2-dimensional data. Many papers utilized custom-built models that are modifications or variations of U-Net. The most common metric for evaluation was the dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9. Discussion Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Commonly applied strategies to help improve performance include data augmentation, utilization of large public datasets, preprocessing such as denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
Affiliation(s)
- Joseph M. Rich
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Lokesh N. Bhardwaj
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Aman Shah
- Department of Applied Biostatistics and Epidemiology, University of Southern California, Los Angeles, CA, United States
- Krish Gangal
- Bridge UnderGrad Science Summer Research Program, Irvington High School, Fremont, CA, United States
- Mohitha S. Rapaka
- Department of Biology, University of Texas at Austin, Austin, TX, United States
- Assad A. Oberai
- Department of Aerospace and Mechanical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Brandon K. K. Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- George R. Matcuk
- Department of Radiology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Vinay A. Duddalwar
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, CA, United States
- Department of Radiology, USC Radiomics Laboratory, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
10
Zidane M, Makky A, Bruhns M, Rochwarger A, Babaei S, Claassen M, Schürch CM. A review on deep learning applications in highly multiplexed tissue imaging data analysis. Front Bioinform 2023; 3:1159381. [PMID: 37564726; PMCID: PMC10410935; DOI: 10.3389/fbinf.2023.1159381]
Abstract
Since its introduction into the field of oncology, deep learning (DL) has impacted clinical discoveries and biomarker predictions. DL-driven discoveries and predictions in oncology are based on a variety of biological data such as genomics, proteomics, and imaging data. DL-based computational frameworks can predict genetic variant effects on gene expression, as well as protein structures based on amino acid sequences. Furthermore, DL algorithms can capture valuable mechanistic biological information from several spatial "omics" technologies, such as spatial transcriptomics and spatial proteomics. Here, we review the impact that the combination of artificial intelligence (AI) with spatial omics technologies has had on oncology, focusing on DL and its applications in biomedical image analysis, encompassing cell segmentation, cell phenotype identification, cancer prognostication, and therapy prediction. We highlight the advantages of using highly multiplexed images (spatial proteomics data) compared to single-stained, conventional histopathological ("simple") images, as the former can provide deep mechanistic insights that cannot be obtained by the latter, even with the aid of explainable AI. Furthermore, we provide the reader with the advantages/disadvantages of DL-based pipelines used in preprocessing highly multiplexed images (cell segmentation, cell type annotation). Therefore, this review also guides the reader to choose the DL-based pipeline that best fits their data. In conclusion, DL continues to be established as an essential tool in discovering novel biological mechanisms when combined with technologies such as highly multiplexed tissue imaging data. In balance with conventional medical data, its role in clinical routine will become more important, supporting diagnosis and prognosis in oncology, enhancing clinical decision-making, and improving the quality of care for patients.
Affiliation(s)
- Mohammed Zidane
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Ahmad Makky
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Matthias Bruhns
- Department of Internal Medicine I, University Hospital Tübingen, Tübingen, Germany
- Department of Computer Science, University of Tübingen, Tübingen, Germany
- Alexander Rochwarger
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
- Sepideh Babaei
- Department of Internal Medicine I, University Hospital Tübingen, Tübingen, Germany
- Manfred Claassen
- Department of Internal Medicine I, University Hospital Tübingen, Tübingen, Germany
- Department of Computer Science, University of Tübingen, Tübingen, Germany
- Christian M. Schürch
- Department of Pathology and Neuropathology, University Hospital and Comprehensive Cancer Center Tübingen, Tübingen, Germany
11
Synergy between artificial intelligence and precision medicine for computer-assisted oral and maxillofacial surgical planning. Clin Oral Investig 2023; 27:897-906. [PMID: 36323803 DOI: 10.1007/s00784-022-04706-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 08/29/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVES The aim of this review was to investigate the application of artificial intelligence (AI) in maxillofacial computer-assisted surgical planning (CASP) workflows and to discuss limitations and possible future directions. MATERIALS AND METHODS An in-depth literature search was undertaken to review articles concerned with the application of AI to the segmentation, multimodal image registration, virtual surgical planning (VSP), and three-dimensional (3D) printing steps of maxillofacial CASP workflows. RESULTS The existing AI models were trained to address individual steps of CASP, and no single intelligent workflow was found that encompassed all steps of the planning process. Segmentation of dentomaxillofacial tissue from computed tomography (CT)/cone-beam CT imaging was the most commonly explored area and the one most applicable in a clinical setting. Nevertheless, a lack of generalizability was the main issue, as the majority of models were trained on data derived from a single device and imaging protocol and might not perform as well on data from other devices. With regard to registration, VSP, and 3D printing, the scarcity of adequate heterogeneous data limits the automation of these tasks. CONCLUSION The synergy between AI and CASP workflows has the potential to improve planning precision and efficacy. However, future studies with big data are needed before this emergent technology finds application in a real clinical setting. CLINICAL RELEVANCE The implementation of AI models in maxillofacial CASP workflows could minimize the surgeon's workload and increase the efficiency and consistency of the planning process, while enhancing patient-specific predictability.
12
Santos GNM, da Silva HEC, Ossege FEL, Figueiredo PTDS, Melo NDS, Stefani CM, Leite AF. Radiomics in bone pathology of the jaws. Dentomaxillofac Radiol 2023; 52:20220225. [PMID: 36416666 PMCID: PMC9793454 DOI: 10.1259/dmfr.20220225] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 09/02/2022] [Accepted: 10/02/2022] [Indexed: 11/24/2022] Open
Abstract
OBJECTIVE To define which radiomics features of jawbone pathologies are extracted, and how, for diagnosis, prognosis prediction, and assessment of therapeutic response. METHODS A comprehensive literature search was conducted using eight databases and gray literature. Two independent observers rated the retrieved articles according to inclusion and exclusion criteria, and 23 papers were included to assess the radiomics features related to jawbone pathologies. Included studies were evaluated using the JBI Critical Appraisal Checklist for Analytical Cross-Sectional Studies. RESULTS Agnostic features were mined from periapical radiographs, dental panoramic radiographs, cone-beam CT, CT, and MRI images of six different jawbone alterations. The most frequently mined features were texture-, shape-, and intensity-based features. Only 13 studies described the machine learning step; the best results were obtained with Support Vector Machine and random forest classifiers. For osteoporosis diagnosis and classification, filtering, shape-based, and Tamura texture features showed the best performance. For temporomandibular joint pathology, gray-level co-occurrence matrix (GLCM), gray-level run length matrix (GLRLM), gray-level size zone matrix (GLSZM), first-order statistical, and shape-based analyses showed the best results. For odontogenic and non-odontogenic cysts and tumors, contourlet and SPHARM features, first-order statistical features, GLRLM, and GLCM had better indexes. For odontogenic cysts and granulomas, first-order statistical analysis showed better classification results. CONCLUSIONS GLCM was the most frequent feature, followed by first-order statistics and GLRLM features. No study reported prediction of prognosis or therapeutic response; all addressed disease diagnosis or classification.
Despite the lack of standardization in the radiomics workflows of the included studies, texture analysis showed potential to contribute to radiologists' reports, decreasing subjectivity and leading to personalized healthcare.
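The GLCM that this review identifies as the most frequent radiomics feature is straightforward to compute. The sketch below is a minimal numpy illustration (not taken from any of the reviewed studies): it builds a co-occurrence matrix for one pixel offset and derives the Haralick contrast feature from it.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to a joint probability distribution over level pairs."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y, x], image[y2, x2]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

A flat image yields zero contrast (all co-occurrence mass on the diagonal), while a checkerboard, whose horizontal neighbors always differ by one gray level, yields a contrast of 1. Production radiomics pipelines typically aggregate the GLCM over several offsets and angles; libraries such as scikit-image provide optimized implementations.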
Affiliation(s)
- Nilce de Santos Melo
- Dentistry Department, Faculty of Health Science, University of Brasília, Brasilia, Brazil
- Cristine Miron Stefani
- Dentistry Department, Faculty of Health Science, University of Brasília, Brasilia, Brazil
- André Ferreira Leite
- Dentistry Department, Faculty of Health Science, University of Brasília, Brasilia, Brazil
13
Lin S, Hao X, Liu Y, Yan D, Liu J, Zhong M. Lightweight deep learning methods for panoramic dental X-ray image segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-08102-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Dental X-ray image segmentation is helpful for assisting clinicians to examine tooth conditions and identify dental diseases. Fast, lightweight segmentation algorithms that do not rely on cloud computing may be required in X-ray imaging systems. This paper investigates lightweight deep learning methods for dental X-ray image segmentation for deployment on edge devices, such as dental X-ray imaging systems. A novel lightweight neural network scheme using knowledge distillation is proposed. The proposed method and a number of existing lightweight deep learning methods were trained on a panoramic dental X-ray image dataset, then evaluated and compared using several accuracy metrics. The proposed lightweight method requires only 0.33 million parameters (~7.5 megabytes) for the trained model, while achieving the best performance among the lightweight methods in terms of IoU (0.804) and Dice (0.89). This work shows that the proposed method for dental X-ray image segmentation requires little memory while achieving competitive performance. The method could be deployed on edge devices and could potentially help clinicians streamline their daily workflow and improve the quality of their analysis.
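Knowledge distillation, the core of the lightweight scheme above, trains a small student network to match both the hard labels and the temperature-softened output distribution of a larger teacher. The numpy sketch below illustrates the standard combined loss; it is a generic illustration under common conventions (temperature T, mixing weight alpha), not the authors' implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * CE(student, hard labels)
    + (1 - alpha) * T^2 * KL(teacher_soft || student_soft).
    The T^2 factor keeps the soft-target gradient scale comparable
    across temperatures."""
    s_hard = softmax(student_logits)
    ce = -np.mean(np.log(s_hard[np.arange(len(labels)), labels] + 1e-12))
    s_soft = softmax(student_logits, T)
    t_soft = softmax(teacher_logits, T)
    kl = np.mean(np.sum(
        t_soft * (np.log(t_soft + 1e-12) - np.log(s_soft + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * T**2 * kl
```

When the student reproduces the teacher's logits exactly, the KL term vanishes and only the hard-label cross-entropy remains; a teacher that disagrees with the student drives the loss up, pulling the student toward the teacher's softened predictions during training.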
14
Hsu K, Yuh DY, Lin SC, Lyu PS, Pan GX, Zhuang YC, Chang CC, Peng HH, Lee TY, Juan CH, Juan CE, Liu YJ, Juan CJ. Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography. Sci Rep 2022; 12:19809. [PMID: 36396696 PMCID: PMC9672125 DOI: 10.1038/s41598-022-23901-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Accepted: 11/07/2022] [Indexed: 11/18/2022] Open
Abstract
Deep learning allows automatic segmentation of teeth on cone-beam computed tomography (CBCT). However, segmentation performance varies with the training strategy. Our aim was to propose a 3.5D U-Net to improve the performance of the U-Net in segmenting teeth on CBCT. This study retrospectively enrolled 24 patients who received CBCT. Five U-Nets (2Da, 2Dc, 2Ds, 2.5Da, and 3D U-Net) were trained to segment the teeth. Four additional U-Nets (2.5Dv, 3.5Dv5, 3.5Dv4, and 3.5Dv3) were obtained by majority voting. Mathematical morphology operations, namely erosion and dilation (E&D), were applied to remove diminutive noise speckles. Segmentation performance was evaluated by fourfold cross-validation using the Dice similarity coefficient (DSC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The Kruskal-Wallis test with post hoc analysis using Bonferroni correction was used for group comparison, with P < 0.05 considered statistically significant. The performance of the U-Nets varied significantly among training strategies for teeth segmentation on CBCT (P < 0.05). The 3.5Dv5 and 2.5Dv U-Nets showed DSC and PPV significantly higher than any of the five originally trained U-Nets (all P < 0.05). E&D significantly improved DSC, accuracy, specificity, and PPV (all P < 0.005). The 3.5Dv5 U-Net achieved the highest DSC and accuracy among all U-Nets. Overall, the segmentation performance of the U-Net can be improved by majority voting and E&D, and the 3.5Dv5 U-Net achieved the best segmentation performance.
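The two post-processing ideas in this abstract, fusing several models' binary masks by majority vote and cleaning speckles with erosion and dilation, can be sketched in a few lines of numpy. This is a minimal 2D illustration with a 3×3 cross structuring element, not the authors' pipeline (which operates on 3D CBCT volumes).

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks: a pixel is foreground when more
    than half of the models mark it foreground."""
    stack = np.stack(masks).astype(int)
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

def erode(mask):
    """3x3 cross erosion: keep a pixel only if it and its four
    N/S/E/W neighbours are all foreground."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:]).astype(np.uint8)

def dilate(mask):
    """3x3 cross dilation: a pixel becomes foreground if it or any of
    its four N/S/E/W neighbours is foreground."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:]).astype(np.uint8)

def clean(mask):
    """Opening (erosion then dilation): removes speckles smaller than
    the structuring element while roughly preserving larger regions."""
    return dilate(erode(mask))
```

Voting over three masks keeps a pixel only when at least two models agree, and the opening operation deletes isolated single-pixel speckles, the "diminutive noise" the paper targets, at the cost of slightly shrinking thin structures. Production code would typically use scipy.ndimage's binary morphology in 3D instead.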
Affiliation(s)
- Kang Hsu
- Department of Periodontology, School of Dentistry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, ROC; School of Dentistry and Graduate Institute of Dental Science, National Defense Medical Center, Taipei, Taiwan, ROC
- Da-Yo Yuh
- Department of Periodontology, School of Dentistry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, ROC
- Shao-Chieh Lin
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC; Ph.D. Program in Electrical and Communication Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Pin-Sian Lyu
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC; Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Guan-Xin Pan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC; Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Yi-Chun Zhuang
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC; Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Chia-Ching Chang
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC; Department of Management Science, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Hsu-Hsia Peng
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Tung-Yang Lee
- Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC; Cheng Ching Hospital, Taichung, Taiwan, ROC
- Cheng-Hsuan Juan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC; Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC; Cheng Ching Hospital, Taichung, Taiwan, ROC
- Cheng-En Juan
- Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Yi-Jui Liu
- Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Chun-Jung Juan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC; Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Department of Radiology, School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan, ROC; Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan, ROC; Department of Biomedical Engineering, National Defense Medical Center, Taipei, Taiwan, ROC; Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC