1. Wu Y, Zhang Y, Wu Y, Zheng Q, Li X, Chen X. ChatIOS: Improving automatic 3-dimensional tooth segmentation via GPT-4V and multimodal pre-training. J Dent 2025;157:105755. [PMID: 40228651] [DOI: 10.1016/j.jdent.2025.105755]
Abstract
OBJECTIVES This study aims to propose a framework that integrates GPT-4V, a recent advanced version of ChatGPT, and multimodal pre-training techniques to enhance deep learning algorithms for 3-dimensional (3D) tooth segmentation in scans produced by intraoral scanners (IOSs). METHODS The framework was developed on 1800 intraoral scans comprising approximately 24,000 annotated teeth (training set: 1200 scans, 16,004 teeth; testing set: 600 scans, 7995 teeth) from the Teeth3DS dataset, which was gathered from 900 patients, each contributing a maxillary and a mandibular scan. The first step of the proposed framework, ChatIOS, is to pre-process the 3D IOS data to extract 3D point clouds. Then, GPT-4V generates detailed descriptions of 2-dimensional (2D) IOS images taken from different view angles. In the multimodal pre-training, triplets, which comprise point clouds, 2D images, and text descriptions, serve as inputs. A series of ablation studies was systematically conducted to illustrate the superior design of the automatic 3D tooth segmentation system. Our quantitative evaluation criteria included segmentation quality, processing speed, and clinical applicability. RESULTS When tested on 600 scans, ChatIOS substantially outperformed existing benchmarks such as PointNet++ across all metrics, including mean intersection-over-union (mIoU, from 90.3 % to 93.0 % for maxillary and from 89.2 % to 92.3 % for mandibular scans), segmentation accuracy (from 97.0 % to 98.0 % for maxillary and from 96.8 % to 97.9 % for mandibular scans) and dice similarity coefficient (DSC, from 98.1 % to 98.7 % for maxillary and from 97.9 % to 98.6 % for mandibular scans). Our model took only approximately 2 s to generate segmentation outputs per scan and exhibited acceptable consistency with clinical expert evaluations. CONCLUSIONS Our ChatIOS framework can increase the effectiveness and efficiency of the 3D tooth segmentation required by clinical procedures, including orthodontic and prosthetic treatments. This study presents an early exploration of the applications of GPT-4V in digital dentistry and also pioneers the multimodal pre-training paradigm for 3D tooth segmentation. CLINICAL SIGNIFICANCE Accurate segmentation of teeth on 3D intraoral scans is critical for orthodontic and prosthetic treatments. ChatIOS integrates GPT-4V with pre-trained vision-language models (VLMs) to gain an in-depth understanding of IOS data, which can contribute to more efficient and precise tooth segmentation systems.
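For orientation, the three reported metrics can be computed from per-point predictions as in the short sketch below. This is a generic illustration, not the authors' evaluation code; the label convention (integer tooth labels with 0 as gingiva/background) is an assumption.

```python
import numpy as np

def segmentation_metrics(pred, gt, background=0):
    """Per-scan mIoU, accuracy and Dice from per-point integer labels."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    accuracy = np.mean(pred == gt)
    ious, dices = [], []
    for label in np.unique(gt):
        if label == background:
            continue  # score teeth only, skip gingiva/background
        p, g = pred == label, gt == label
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union else 1.0)
        dices.append(2 * inter / (p.sum() + g.sum()) if (p.sum() + g.sum()) else 1.0)
    return float(np.mean(ious)), float(accuracy), float(np.mean(dices))

# toy example: 6 points, two tooth labels
miou, acc, dsc = segmentation_metrics([1, 1, 2, 2, 0, 0], [1, 2, 2, 2, 0, 0])
print(miou, acc, dsc)
```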
Affiliation(s)
- Yongjia Wu
- Department of Orthodontics, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Hangzhou, PR China
- Yun Zhang
- Department of Orthodontics, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Hangzhou, PR China
- Yange Wu
- Department of Orthodontics, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Hangzhou, PR China
- Qianhan Zheng
- Department of Orthodontics, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Hangzhou, PR China
- Xiaojun Li
- Department of Periodontics, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Hangzhou, PR China
- Xuepeng Chen
- Department of Orthodontics, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Hangzhou, PR China
2. Li K, Zhu J, Cui Z, Chen X, Liu Y, Wang F, Zhao Y. A Novel Hierarchical Cross-Stream Aggregation Neural Network for Semantic Segmentation of 3-D Dental Surface Models. IEEE Trans Neural Netw Learn Syst 2025;36:7382-7394. [PMID: 38848227] [DOI: 10.1109/tnnls.2024.3404276]
Abstract
Accurate teeth delineation on 3-D dental models is essential for individualized orthodontic treatment planning. Pioneering works like PointNet suggest a promising direction to conduct efficient and accurate 3-D dental model analyses in end-to-end learnable fashions. Recent studies further imply that multistream architectures that concurrently learn geometric representations from different inputs/views (e.g., coordinates and normals) are beneficial for segmenting teeth with varying conditions. However, such multistream networks typically adopt simple late-fusion strategies to combine features captured from raw inputs that encode complementary but fundamentally different geometric information, potentially hampering their accuracy in end-to-end semantic segmentation. This article presents a hierarchical cross-stream aggregation (HiCA) network to learn more discriminative point/cell-wise representations from multiview inputs for fine-grained 3-D semantic segmentation. Specifically, based upon our multistream backbone with input-tailored feature extractors, we first design a contextual cross-stream aggregation (CA) module conditioned on interstream consistency to boost each view's contextual representation learning jointly. Then, before the late fusion of different streams' outputs for segmentation, we further deploy a discriminative cross-stream aggregation (DA) module to concurrently update all views' discriminative representation learning by leveraging a specific graph attention strategy induced by multiview prototype learning. On both public and in-house datasets of real-patient dental models, our method significantly outperformed state-of-the-art (SOTA) deep learning methods for teeth semantic segmentation. In addition, extended experimental results suggest the applicability of HiCA to other general 3-D shape segmentation tasks. The code is available at https://github.com/ladderlab-xjtu/HiCA.
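The core idea, replacing plain late fusion of multistream features with cross-stream aggregation, can be illustrated with the toy PyTorch module below, in which each stream is gated by its agreement with the other before fusion. This is a schematic simplification, not the published CA/DA modules; dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class ToyCrossStreamFusion(nn.Module):
    """Fuse per-point features from two streams (e.g., coordinates and normals)."""
    def __init__(self, dim):
        super().__init__()
        self.proj_a = nn.Linear(dim, dim)
        self.proj_b = nn.Linear(dim, dim)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, feat_a, feat_b):            # (N, dim) each
        a, b = self.proj_a(feat_a), self.proj_b(feat_b)
        # agreement between streams gates each stream before fusion
        gate = torch.sigmoid((a * b).sum(-1, keepdim=True))
        fused = torch.cat([gate * a, gate * b], dim=-1)
        return self.out(fused)                    # plain late fusion would simply concatenate feat_a and feat_b

points = 1024
fusion = ToyCrossStreamFusion(dim=64)
out = fusion(torch.randn(points, 64), torch.randn(points, 64))
print(out.shape)  # torch.Size([1024, 64])
```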
3. Chen R, Yang J, Xiong H, Xu R, Feng Y, Wu J, Liu Z. Cross-center Model Adaptive Tooth segmentation. Med Image Anal 2025;101:103443. [PMID: 39778266] [DOI: 10.1016/j.media.2024.103443]
Abstract
Automatic 3-dimensional tooth segmentation on intraoral scans (IOS) plays a pivotal role in computer-aided orthodontic treatments. In practice, deploying existing well-trained models to different medical centers suffers from two main problems: (1) the data distribution shifts between existing and new centers, which causes significant performance degradation. (2) The data in the existing center(s) is usually not permitted to be shared, and annotating additional data in the new center(s) is time-consuming and expensive, thus making re-training or fine-tuning unfeasible. In this paper, we propose a framework for Cross-center Model Adaptive Tooth segmentation (CMAT) to alleviate these issues. CMAT takes the trained model(s) from the source center(s) as input and adapts them to different target centers, without data transmission or additional annotations. CMAT is applicable to three cross-center scenarios: source-data-free, multi-source-data-free, and test-time. The model adaptation in CMAT is realized by a tooth-level prototype alignment module, a progressive pseudo-labeling transfer module, and a tooth-prior regularized information maximization module. Experiments under three cross-center scenarios on two datasets show that CMAT can consistently surpass existing baselines. The effectiveness is further verified with extensive ablation studies and statistical analysis, demonstrating its applicability for privacy-preserving model adaptive tooth segmentation in real-world digital dentistry.
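As a rough sketch of the prototype-based adaptation idea (not CMAT itself), the snippet below builds tooth-level prototypes from confident pseudo-labels and re-assigns points to the nearest prototype; the feature dimension and confidence threshold are illustrative assumptions.

```python
import numpy as np

def prototype_pseudo_labels(features, probs, conf_thresh=0.9):
    """features: (N, D) point embeddings; probs: (N, C) softmax outputs of the source model."""
    pseudo = probs.argmax(1)
    confident = probs.max(1) > conf_thresh
    prototypes = np.stack([
        features[confident & (pseudo == c)].mean(0) if np.any(confident & (pseudo == c))
        else np.zeros(features.shape[1])
        for c in range(probs.shape[1])
    ])
    # re-assign every point to its nearest class prototype
    dists = np.linalg.norm(features[:, None, :] - prototypes[None], axis=-1)
    return dists.argmin(1), prototypes

feats = np.random.rand(500, 32)
probs = np.random.dirichlet(np.ones(17), size=500)   # toy: 16 teeth + gingiva
labels, protos = prototype_pseudo_labels(feats, probs, conf_thresh=0.2)
print(labels.shape, protos.shape)
```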
Affiliation(s)
- Ruizhe Chen
- Stomatology Hospital Affiliated to Zhejiang University of Medicine, Zhejiang University, Hangzhou, 310016, China; ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, Haining, 314400, China; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence, Zhejiang University, Hangzhou, 310058, China
- Jianfei Yang
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Huimin Xiong
- ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, Haining, 314400, China; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence, Zhejiang University, Hangzhou, 310058, China
- Ruiling Xu
- ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, Haining, 314400, China
- Yang Feng
- Angelalign Technology Inc., Shanghai, 200433, China
- Jian Wu
- Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence, Zhejiang University, Hangzhou, 310058, China; State Key Laboratory of Transvascular Implantation Devices of The Second Affiliated Hospital, School of Medicine and School of Public Health, Zhejiang University, Hangzhou, 310058, China
- Zuozhu Liu
- Stomatology Hospital Affiliated to Zhejiang University of Medicine, Zhejiang University, Hangzhou, 310016, China; ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, Haining, 314400, China; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence, Zhejiang University, Hangzhou, 310058, China
4. Ganapathi II, Dharejo FA, Javed S, Ali SS, Werghi N. Unsupervised Dual Transformer Learning for 3-D Textured Surface Segmentation. IEEE Trans Neural Netw Learn Syst 2025;36:5020-5031. [PMID: 38466603] [DOI: 10.1109/tnnls.2024.3365515]
Abstract
Analysis of the 3-D texture is indispensable for various tasks, such as retrieval, segmentation, classification, and inspection of sculptures, knit fabrics, and biological tissues. A 3-D texture represents a locally repeated surface variation (SV) that is independent of the overall shape of the surface and can be determined using the local neighborhood and its characteristics. Existing methods mostly employ computer vision techniques that analyze a 3-D mesh globally, derive features, and then utilize them for classification or retrieval tasks. While several traditional and learning-based methods have been proposed in the literature, only a few have addressed 3-D texture analysis, and none have considered unsupervised schemes so far. This article proposes an original framework for the unsupervised segmentation of 3-D texture on the mesh manifold. The problem is approached as a binary surface segmentation task, where the mesh surface is partitioned into textured and nontextured regions without prior annotation. The proposed method comprises a mutual transformer-based system consisting of a label generator (LG) and a label cleaner (LC). Both models take geometric image representations of the surface mesh facets and label them as texture or nontexture using an iterative mutual learning scheme. Extensive experiments on three publicly available datasets with diverse texture patterns demonstrate that the proposed framework outperforms standard and state-of-the-art unsupervised techniques and performs reasonably well compared to supervised methods.
5. Rekik A, Ben-Hamadou A, Smaoui O, Bouzguenda F, Pujades S, Boyer E. TSegLab: Multi-stage 3D dental scan segmentation and labeling. Comput Biol Med 2025;185:109535. [PMID: 39708498] [DOI: 10.1016/j.compbiomed.2024.109535]
Abstract
This study introduces a novel deep learning approach for 3D teeth scan segmentation and labeling, designed to enhance accuracy in computer-aided design (CAD) systems. Our method is organized into three key stages: coarse localization, fine teeth segmentation, and labeling. In the teeth localization stage, we employ a Mask-RCNN model to detect teeth in a rendered three-channel 2D representation of the input scan. For fine teeth segmentation, each detected tooth mesh is isomorphically mapped to a 2D harmonic parameter space and segmented with a Mask-RCNN model for precise crown delineation. Finally, for labeling, we propose a graph neural network that captures both the 3D shape and spatial distribution of the teeth, along with a new data augmentation technique to simulate missing teeth and teeth position variation during training. The method is evaluated using three key metrics: Teeth Localization Accuracy (TLA), Teeth Segmentation Accuracy (TSA), and Teeth Identification Rate (TIR). We tested our approach on the Teeth3DS dataset, consisting of 1800 intraoral 3D scans, and achieved a TLA of 98.45%, TSA of 98.17%, and TIR of 97.61%, outperforming existing state-of-the-art techniques. These results suggest that our approach significantly enhances the precision and reliability of automatic teeth segmentation and labeling in dental CAD applications. Link to the project page: https://crns-smartvision.github.io/tseglab.
Affiliation(s)
- Ahmed Rekik
- Digital Research Center of Sfax, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia; ISSAT, Gafsa University, Sidi Ahmed Zarrouk University Campus, 2112 Gafsa, Tunisia; Laboratory of Signals, systeMs, aRtificial Intelligence and neTworkS, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia
- Achraf Ben-Hamadou
- Digital Research Center of Sfax, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia; Laboratory of Signals, systeMs, aRtificial Intelligence and neTworkS, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia
- Oussama Smaoui
- Udini, 37 BD Aristide Briand, 13100 Aix-En-Provence, France
- Sergi Pujades
- Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France
- Edmond Boyer
- Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France
6. Liu Y, Liu X, Yang C, Yang Y, Chen H, Yuan Y. Geo-Net: Geometry-Guided Pretraining for Tooth Point Cloud Segmentation. J Dent Res 2024;103:1358-1364. [PMID: 39548729] [DOI: 10.1177/00220345241292566]
Abstract
Accurately delineating individual teeth in 3-dimensional tooth point clouds is an important orthodontic application. Learning-based segmentation methods rely on labeled datasets, which are typically limited in scale due to the labor-intensive process of annotating each tooth. In this article, we propose a self-supervised pretraining framework, named Geo-Net, to boost segmentation performance by leveraging large-scale unlabeled data. The framework is based on scalable masked autoencoders, and 2 geometry-guided designs, a curvature-aware patching algorithm (CPA) and scale-aware reconstruction (SCR), are proposed to enhance the masked pretraining for tooth point cloud segmentation. In particular, CPA is designed to assemble informative patches as the reconstruction unit, guided by the estimated pointwise curvatures. Aimed at equipping the pretrained encoder with scale-aware modeling capacity, we also propose SCR to perform multiple reconstructions across shallow and deep layers. In vitro experiments reveal that after pretraining with large-scale unlabeled data, the proposed Geo-Net can outperform the supervised counterparts in mean Intersection over Union (mIoU) with the same amount of annotated labeled data. The code and data are available at https://github.com/yifliu3/Geo-Net.
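A minimal sketch of curvature-guided patch seeding is shown below: pointwise "surface variation" is estimated from local PCA and patch seeds are sampled preferentially at high-curvature points. It is a schematic stand-in for the paper's CPA algorithm, with neighborhood size and patch count chosen arbitrarily.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=16):
    """Approximate curvature: smallest PCA eigenvalue ratio in a k-NN neighborhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    curv = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        local = points[nbrs] - points[nbrs].mean(0)
        eigvals = np.linalg.eigvalsh(local.T @ local / k)
        curv[i] = eigvals[0] / eigvals.sum()   # ~0 on a plane, larger near edges and cusps
    return curv

def curvature_weighted_seeds(points, n_patches=64, k=16, rng=None):
    rng = rng or np.random.default_rng(0)
    curv = surface_variation(points, k)
    probs = curv / curv.sum()
    return rng.choice(len(points), size=n_patches, replace=False, p=probs)

pts = np.random.rand(2000, 3)
print(curvature_weighted_seeds(pts, n_patches=64).shape)  # (64,)
```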
Affiliation(s)
- Y Liu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, PR China
- X Liu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, PR China
- C Yang
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, PR China
- Y Yang
- Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, PR China
- H Chen
- Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, PR China
- Y Yuan
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, PR China
7. Alsheghri A, Zhang Y, Hosseinimanesh G, Keren J, Cheriet F, Guibault F. Robust Segmentation of Partial and Imperfect Dental Arches. Appl Sci 2024;14:10784. [DOI: 10.3390/app142310784]
Abstract
Automatic and accurate dental arch segmentation is a fundamental task in computer-aided dentistry. Recent trends in digital dentistry are tackling the design of 3D crowns using artificial intelligence, which initially requires a proper semantic segmentation of teeth from intraoral scans (IOS). In practice, most IOS are partial with as few as three teeth on the scanned arch, and some of them might have preparations, missing, or incomplete teeth. Existing deep learning-based methods (e.g., MeshSegNet, DArch) were proposed for dental arch segmentation, but they are not as efficient for partial arches that include imperfections such as missing teeth and preparations. In this work, we present the ArchSeg framework that can leverage various deep learning models for semantic segmentation of perfect and imperfect dental arches. The Point Transformer V2 deep learning model is used as the backbone for the ArchSeg framework. We present experiments to demonstrate the efficiency of the proposed framework to segment arches with various types of imperfections. Using a raw dental arch scan with two labels indicating the range of present teeth in the arch (i.e., the first and the last teeth), our ArchSeg can segment a standalone dental arch or a pair of aligned master/antagonist arches with more available information (i.e., die mesh). Two generic models are trained for lower and upper arches; they achieve dice similarity coefficient scores of 0.936±0.008 and 0.948±0.007, respectively, on test sets composed of challenging imperfect arches. Our work also highlights the impact of appropriate data pre-processing and post-processing on the final segmentation performance. Our ablation study shows that the segmentation performance of the Point Transformer V2 model integrated in our framework is improved compared with the original standalone model.
Affiliation(s)
- Ammar Alsheghri
- Mechanical Engineering Department, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia
- Biosystems and Machines Interdisciplinary Research Center, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia
- Ying Zhang
- Department of Computer Engineering, École Polytechnique Montréal, 2900 Edouard-Montpetit Boul, Montréal, QC H3T1J4, Canada
- Golriz Hosseinimanesh
- Department of Computer Engineering, École Polytechnique Montréal, 2900 Edouard-Montpetit Boul, Montréal, QC H3T1J4, Canada
- Julia Keren
- Intelligent Dentaire Inc., Bureau 540, 1310 av Greene, Westmont, QC H3Z2B2, Canada
- Farida Cheriet
- Department of Computer Engineering, École Polytechnique Montréal, 2900 Edouard-Montpetit Boul, Montréal, QC H3T1J4, Canada
- François Guibault
- Department of Computer Engineering, École Polytechnique Montréal, 2900 Edouard-Montpetit Boul, Montréal, QC H3T1J4, Canada
8. Wu Y, Yan H, Ding K. Transformer based 3D tooth segmentation via point cloud region partition. Sci Rep 2024;14:28513. [PMID: 39557955] [PMCID: PMC11574114] [DOI: 10.1038/s41598-024-79485-x]
Abstract
Automatic and accurate tooth segmentation on 3D dental point clouds plays a pivotal role in computer-aided dentistry. Existing Transformer-based methods focus on aggregating local features, but fail to directly model global contexts due to memory limitations and high computational cost. In this paper, we propose a novel Transformer-based 3D tooth segmentation network, called PointRegion, which can process the entire point cloud at a low cost. Following a novel modeling of semantic segmentation that interprets the point cloud as a tessellation of learnable regions, we first design a RegionPartition module to partition the 3D point cloud into a certain number of regions and project these regions as embeddings in an effective way. Then, an offset-attention based RegionEncoder module is applied on all region embeddings to model global context among regions and predict the class logits for each region. Considering the irregularity and disorder of 3D point cloud data, a novel mechanism is proposed to build the point-to-region association to replace traditional convolutional operations. The mechanism, as a medium between points and regions, automatically learns the probabilities that each point belongs to its neighboring regions from the similarity between point and region features, achieving the goal of point-level segmentation. Since the number of regions is far less than the number of points, our proposed PointRegion model can leverage the capability of the global-based Transformer on large-scale point clouds with low computational cost and memory consumption. Finally, extensive experiments demonstrate the effectiveness and superiority of our method on our dental dataset.
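The point-to-region association can be pictured as a softmax over point-region feature similarities, as in the sketch below; restricting the softmax to neighboring regions and the choice of temperature are simplifications relative to the paper.

```python
import numpy as np

def point_to_region_assignment(point_feats, region_feats, temperature=0.1):
    """Soft assignment of each point to regions via scaled dot-product similarity."""
    sim = point_feats @ region_feats.T / temperature          # (N_points, N_regions)
    sim -= sim.max(axis=1, keepdims=True)                     # numerical stability
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights

def point_logits_from_regions(weights, region_logits):
    """Propagate region-level class logits back to points through the assignment."""
    return weights @ region_logits                             # (N_points, N_classes)

points, regions, classes = 4096, 128, 17
w = point_to_region_assignment(np.random.rand(points, 64), np.random.rand(regions, 64))
logits = point_logits_from_regions(w, np.random.rand(regions, classes))
print(w.shape, logits.shape)
```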
Affiliation(s)
- You Wu
- School of Information Engineering, China University of Geosciences (Beijing), Beijing, 100083, China
- Hongping Yan
- School of Information Engineering, China University of Geosciences (Beijing), Beijing, 100083, China
- Kun Ding
- State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
9. Kubík T, Španěl M. LMVSegRNN and Poseidon3D: Addressing Challenging Teeth Segmentation Cases in 3D Dental Surface Orthodontic Scans. Bioengineering (Basel) 2024;11:1014. [PMID: 39451390] [PMCID: PMC11505287] [DOI: 10.3390/bioengineering11101014]
Abstract
The segmentation of teeth in 3D dental scans is difficult due to variations in tooth shape, misalignments, occlusions, or the presence of dental appliances. Existing methods consistently adhere to geometric representations, omitting the perceptual aspects of the inputs. In addition, current works often lack evaluation on anatomically complex cases due to the unavailability of such datasets. We present a projection-based approach towards accurate teeth segmentation that operates in a detect-and-segment manner locally on each tooth in a multi-view fashion. Information is spatially correlated via recurrent units. We show that a projection-based framework can precisely segment teeth in cases with anatomical anomalies with negligible information loss. It outperforms point-based, edge-based, and Graph Cut-based geometric approaches, achieving an average weighted IoU score of 0.97122±0.038 and a Hausdorff distance at the 95th percentile of 0.49012±0.571 mm. We also release Poseidon's Teeth 3D (Poseidon3D), a novel dataset of real orthodontic cases with various dental anomalies such as teeth crowding and missing teeth.
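The Hausdorff distance at the 95th percentile reported above can be computed from sampled surface points roughly as follows; this is a generic metric sketch using SciPy, not the authors' evaluation pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets (same units as input)."""
    d_ab, _ = cKDTree(points_b).query(points_a)   # nearest-neighbor distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)   # and B -> A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

pred = np.random.rand(3000, 3)
ref = pred + np.random.normal(scale=0.05, size=pred.shape)   # jittered copy as a toy reference
print(round(hd95(pred, ref), 4))
```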
Affiliation(s)
- Tibor Kubík
- Department of Computer Graphics and Multimedia, Brno University of Technology, Božetěchova 2, 612 66 Brno, Czech Republic
- TESCAN 3DIM, s.r.o., Libušina tř./21a, 623 00 Brno, Czech Republic
- Michal Španěl
- Department of Computer Graphics and Multimedia, Brno University of Technology, Božetěchova 2, 612 66 Brno, Czech Republic
- TESCAN 3DIM, s.r.o., Libušina tř./21a, 623 00 Brno, Czech Republic
10. Krenmayr L, von Schwerin R, Schaudt D, Riedel P, Hafner A. DilatedToothSegNet: Tooth Segmentation Network on 3D Dental Meshes Through Increasing Receptive Vision. J Imaging Inform Med 2024;37:1846-1862. [PMID: 38441700] [PMCID: PMC11574236] [DOI: 10.1007/s10278-024-01061-6]
Abstract
The utilization of advanced intraoral scanners to acquire 3D dental models has gained significant popularity in the fields of dentistry and orthodontics. Accurate segmentation and labeling of teeth on digitized 3D dental surface models are crucial for computer-aided treatment planning. At the same time, manual labeling of these models is a time-consuming task. Recent advances in geometric deep learning have demonstrated remarkable efficiency in surface segmentation when applied to raw 3D models. However, segmentation of the dental surface remains challenging due to the atypical and diverse appearance of the patients' teeth. Numerous deep learning methods have been proposed to automate dental surface segmentation. Nevertheless, they still show limitations, particularly in cases where teeth are missing or severely misaligned. To overcome these challenges, we introduce a network operator called dilated edge convolution, which enhances the network's ability to learn additional, more distant features by expanding its receptive field. This leads to improved segmentation results, particularly in complex and challenging cases. To validate the effectiveness of our proposed method, we performed extensive evaluations on the recently published benchmark data set for dental model segmentation Teeth3DS. We compared our approach with several other state-of-the-art methods using a quantitative and qualitative analysis. Through these evaluations, we demonstrate the superiority of our proposed method, showcasing its ability to outperform existing approaches in dental surface segmentation.
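The receptive-field idea behind dilated edge convolution can be sketched as neighbor selection: take the k·d nearest points but keep only every d-th of them before building EdgeConv-style edge features. The snippet below illustrates this selection step only; it is not the published operator.

```python
import numpy as np
from scipy.spatial import cKDTree

def dilated_knn(points, k=16, dilation=4):
    """Indices of a dilated neighborhood: every `dilation`-th of the k*dilation nearest points."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k * dilation + 1)      # +1 because the query point itself is returned
    return idx[:, 1::dilation][:, :k]                    # drop self, keep every dilation-th neighbor

def edge_features(feats, neighbor_idx):
    """EdgeConv-style edge features [x_i, x_j - x_i] for each neighbor j of point i."""
    center = feats[:, None, :].repeat(neighbor_idx.shape[1], axis=1)
    neighbors = feats[neighbor_idx]
    return np.concatenate([center, neighbors - center], axis=-1)   # (N, k, 2*D)

pts = np.random.rand(1024, 3)
idx = dilated_knn(pts, k=16, dilation=4)
print(edge_features(pts, idx).shape)   # (1024, 16, 6)
```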
Affiliation(s)
- Lucas Krenmayr
- Cooperative Doctoral Program for Data Science and Analytics, Ulm University and University of Applied Sciences, Ulm, 89075, Germany
- Department of Computer Science, University of Applied Sciences, Prittwitzstr. 10, Ulm, 89075, Germany
- Reinhold von Schwerin
- Department of Computer Science, University of Applied Sciences, Prittwitzstr. 10, Ulm, 89075, Germany
- Daniel Schaudt
- Department of Computer Science, University of Applied Sciences, Prittwitzstr. 10, Ulm, 89075, Germany
- Pascal Riedel
- Department of Computer Science, University of Applied Sciences, Prittwitzstr. 10, Ulm, 89075, Germany
- Alexander Hafner
- Department of Computer Science, University of Applied Sciences, Prittwitzstr. 10, Ulm, 89075, Germany
11. Wang X, Alqahtani KA, Van den Bogaert T, Shujaat S, Jacobs R, Shaheen E. Convolutional neural network for automated tooth segmentation on intraoral scans. BMC Oral Health 2024;24:804. [PMID: 39014389] [PMCID: PMC11250967] [DOI: 10.1186/s12903-024-04582-2]
Abstract
BACKGROUND Tooth segmentation on intraoral scanned (IOS) data is a prerequisite for clinical applications in digital workflows. Current state-of-the-art methods lack the robustness to handle variability in dental conditions. This study aims to propose and evaluate the performance of a convolutional neural network (CNN) model for automatic tooth segmentation on IOS images. METHODS A dataset of 761 IOS images (380 upper jaws, 381 lower jaws) was acquired using an intraoral scanner. The inclusion criteria included a full set of permanent teeth, teeth with orthodontic brackets, and partially edentulous dentition. A multi-step 3D U-Net pipeline was designed for automated tooth segmentation on IOS images. The model's performance was assessed in terms of time and accuracy. Additionally, the model was deployed on an online cloud-based platform, where a separate subsample of 18 IOS images was used to test the clinical applicability of the model by comparing three modes of segmentation: automated artificial intelligence-driven (A-AI), refined (R-AI), and semi-automatic (SA) segmentation. RESULTS The average time for automated segmentation was 31.7 ± 8.1 s per jaw. The CNN model achieved an Intersection over Union (IoU) score of 91%, with the full set of teeth achieving the highest performance and the partially edentulous group scoring the lowest. In terms of clinical applicability, SA took an average of 860.4 s per case, whereas R-AI showed a 2.6-fold decrease in time (328.5 s). Furthermore, R-AI offered higher performance and reliability compared to SA, regardless of the dentition group. CONCLUSIONS The 3D U-Net pipeline was accurate, efficient, and consistent for automatic tooth segmentation on IOS images. The online cloud-based platform could serve as a viable alternative for IOS segmentation.
Affiliation(s)
- Xiaotong Wang
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Harbin Medical University, Youzheng Street 23, Nangang, Harbin, 150001, China
- Khalid Ayidh Alqahtani
- Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
- Tom Van den Bogaert
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, 14611, Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Eman Shaheen
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Dental Medicine, Karolinska Institutet, Solnavägen 1, 171 77 Stockholm, Sweden
12. Li C, Jin Y, Du Y, Luo K, Fiorenza L, Chen H, Tian S, Sun Y. Efficient complete denture metal base design via a dental feature-driven segmentation network. Comput Biol Med 2024;175:108550. [PMID: 38701590] [DOI: 10.1016/j.compbiomed.2024.108550]
Abstract
BACKGROUND AND OBJECTIVE Complete denture is a common restorative treatment in dental patients and the design of the core components (major connector and retentive mesh) of complete denture metal base (CDMB) is the basis of successful restoration. However, the automated design process of CDMB has become a challenging task primarily due to the complexity of manual interaction, low personalization, and low design accuracy. METHODS To solve the existing problems, we develop a computer-aided Segmentation Network-driven CDMB design framework, called CDMB-SegNet, to automatically generate personalized digital design boundaries for complete dentures of edentulous patients. Specifically, CDMB-SegNet consists of a novel upright-orientation adjustment module (UO-AM), a dental feature-driven segmentation network, and a specific boundary-optimization design module (BO-DM). UO-AM automatically identifies key points for locating spatial attitude of the three-dimensional dental model with arbitrary posture, while BO-DM can result in smoother and more personalized designs for complete denture. In addition, to achieve efficient and accurate feature extraction and segmentation of 3D edentulous models with irregular gingival tissues, the light-weight backbone network is also incorporated into CDMB-SegNet. RESULTS Experimental results on a large clinical dataset showed that CDMB-SegNet can achieve superior performance over the state-of-the-art methods. Quantitative evaluation (major connector/retentive mesh) showed improved Accuracy (98.54 ± 0.58 %/97.73 ± 0.92 %) and IoU (87.42 ± 5.48 %/70.42 ± 7.95 %), and reduced Maximum Symmetric Surface Distance (4.54 ± 2.06 mm/4.62 ± 1.68 mm), Average Symmetric Surface Distance (1.45 ± 0.63mm/1.28 ± 0.54 mm), Roughness Rate (6.17 ± 1.40 %/6.80 ± 1.23 %) and Vertices Number (23.22 ± 1.85/43.15 ± 2.72). Moreover, CDMB-SegNet shortened the overall design time to around 4 min, which is one tenth of the comparison methods. CONCLUSIONS CDMB-SegNet is the first intelligent neural network for automatic CDMB design driven by oral big data and dental features. The designed CDMB is able to couple with patient's personalized dental anatomical morphology, providing higher clinical applicability compared with the state-of-the-art methods.
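Of the reported boundary metrics, the Average Symmetric Surface Distance can be approximated from sampled surface points as below; this is a generic sketch, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def assd(surface_a, surface_b):
    """Average symmetric surface distance between two sampled surfaces (same units as input)."""
    d_ab, _ = cKDTree(surface_b).query(surface_a)
    d_ba, _ = cKDTree(surface_a).query(surface_b)
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

design = np.random.rand(5000, 3) * 40.0                         # toy point samples in mm
reference = design + np.random.normal(scale=0.5, size=design.shape)
print(round(assd(design, reference), 3))
```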
Affiliation(s)
- Cheng Li
- Center of Digital Dentistry, Faculty of Prosthodontics, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No.22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, PR China
- Yaming Jin
- Nanjing Profeta Intelligent Technology Co., Ltd, No. 12, Mozhou East Road, Jiangning District, Nanjing City, Jiangsu Province, 211111, PR China
- Yunhan Du
- Nanjing Profeta Intelligent Technology Co., Ltd, No. 12, Mozhou East Road, Jiangning District, Nanjing City, Jiangsu Province, 211111, PR China
- Kaiyuan Luo
- Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, IL, 61820, USA
- Luca Fiorenza
- Biomedicine Discovery Institute, Monash University, Melbourne, Victoria, 3800, Australia
- Hu Chen
- Center of Digital Dentistry, Faculty of Prosthodontics, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No.22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, PR China
- Sukun Tian
- Center of Digital Dentistry, Faculty of Prosthodontics, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No.22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, PR China
- Yuchun Sun
- Center of Digital Dentistry, Faculty of Prosthodontics, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No.22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, PR China
13. Kofod Petersen A, Forgie A, Bindslev DA, Villesen P, Staun Larsen L. Automatic removal of soft tissue from 3D dental photo scans; an important step in automating future forensic odontology identification. Sci Rep 2024;14:12421. [PMID: 38816447] [PMCID: PMC11139984] [DOI: 10.1038/s41598-024-63198-2]
Abstract
The potential of intraoral 3D photo scans in forensic odontology identification remains largely unexplored, even though the high degree of detail could allow automated comparison of ante mortem and post mortem dentitions. Differences in soft tissue conditions between ante- and post mortem intraoral 3D photo scans may cause ambiguous variation, burdening the potential automation of the matching process and underlining the need for limiting inclusion of soft tissue in dental comparison. The soft tissue removal must be able to handle dental arches with missing teeth, and intraoral 3D photo scans not originating from plaster models. To address these challenges, we have developed the grid-cutting method. The method is customisable, allowing fine-grained analysis using a small grid size and adaptation of how much of the soft tissues are excluded from the cropped dental scan. When tested on 66 dental scans, the grid-cutting method was able to limit the amount of soft tissue without removing any teeth in 63/66 dental scans. The remaining 3 dental scans had partly erupted third molars (wisdom teeth) which were removed by the grid-cutting method. Overall, the grid-cutting method represents an important step towards automating the matching process in forensic odontology identification using intraoral 3D photo scans.
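The grid-cutting idea can be pictured roughly as follows: bin vertices into an occlusal-plane grid and keep, per cell, only the portion closest to the crowns. The cell size, the height quantile, and the assumption that +z points toward the occlusal surface are illustrative choices, not the published parameters.

```python
import numpy as np

def grid_cut(vertices, cell_size=2.0, keep_quantile=0.4):
    """Keep, per grid cell, only vertices above that cell's height quantile (toward the crowns)."""
    xy = np.floor(vertices[:, :2] / cell_size).astype(int)
    keep = np.zeros(len(vertices), dtype=bool)
    # group vertices by (x, y) grid cell and threshold each cell independently
    for cell in np.unique(xy, axis=0):
        in_cell = np.all(xy == cell, axis=1)
        z_cut = np.quantile(vertices[in_cell, 2], keep_quantile)
        keep[in_cell] = vertices[in_cell, 2] >= z_cut
    return keep

verts = np.random.rand(10000, 3) * np.array([60.0, 40.0, 15.0])   # toy arch bounding box in mm
mask = grid_cut(verts, cell_size=2.0, keep_quantile=0.4)
print(mask.sum(), "of", len(verts), "vertices kept")
```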
Affiliation(s)
- Andrew Forgie
- School of Medicine, Dentistry and Nursing, University of Glasgow, Glasgow, Scotland
- Dorthe Arenholt Bindslev
- Department of Forensic Medicine, Aarhus University, Aarhus, Denmark
- Department of Dentistry and Oral Health, Aarhus University, Aarhus, Denmark
- Palle Villesen
- Bioinformatics Research Centre, Aarhus University, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Line Staun Larsen
- Department of Forensic Medicine, Aarhus University, Aarhus, Denmark
- Department of Dentistry and Oral Health, Aarhus University, Aarhus, Denmark
14. Jang TJ, Yun HS, Hyun CM, Kim JE, Lee SH, Seo JK. Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification. Med Image Anal 2024;93:103096. [PMID: 38301347] [DOI: 10.1016/j.media.2024.103096]
Abstract
We present a fully automated method of integrating intraoral scan (IOS) and dental cone-beam computerized tomography (CBCT) images into one image by complementing each image's weaknesses. Dental CBCT alone may not be able to delineate precise details of the tooth surface due to limited image resolution and various CBCT artifacts, including metal-induced artifacts. IOS is very accurate for the scanning of narrow areas, but it produces cumulative stitching errors during full-arch scanning. The proposed method is intended not only to compensate the low-quality of CBCT-derived tooth surfaces with IOS, but also to correct the cumulative stitching errors of IOS across the entire dental arch. Moreover, the integration provides both gingival structure of IOS and tooth roots of CBCT in one image. The proposed fully automated method consists of four parts; (i) individual tooth segmentation and identification module for IOS data (TSIM-IOS); (ii) individual tooth segmentation and identification module for CBCT data (TSIM-CBCT); (iii) global-to-local tooth registration between IOS and CBCT; and (iv) stitching error correction for full-arch IOS. The experimental results show that the proposed method achieved landmark and surface distance errors of 112.4μm and 301.7μm, respectively.
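The registration between IOS and CBCT ultimately reduces to rigid alignment of corresponding tooth structures; a minimal Kabsch/SVD alignment of matched tooth centroids is sketched below as a generic illustration of that step (the paper's global-to-local scheme is more involved).

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation R and translation t so that R @ source_i + t ~ target_i."""
    src_c, tgt_c = source.mean(0), target.mean(0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# toy example: IOS tooth centroids vs. the same centroids in CBCT coordinates
ios = np.random.rand(14, 3) * 30.0
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
cbct = ios @ R_true.T + np.array([5.0, -3.0, 1.5])
R, t = rigid_align(ios, cbct)
print(np.allclose(ios @ R.T + t, cbct, atol=1e-6))   # True
```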
Affiliation(s)
- Tae Jun Jang
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul, South Korea
- Hye Sun Yun
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul, South Korea
- Chang Min Hyun
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul, South Korea
- Jong-Eun Kim
- Department of Prosthodontics, College of Dentistry, Yonsei University, Seoul, South Korea
- Sang-Hwy Lee
- Department of Oral and Maxillofacial Surgery, Oral Science Research Center, College of Dentistry, Yonsei University, Seoul, South Korea
- Jin Keun Seo
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul, South Korea
15. Chen X, Ma N, Xu T, Xu C. Deep learning-based tooth segmentation methods in medical imaging: A review. Proc Inst Mech Eng H 2024;238:115-131. [PMID: 38314788] [DOI: 10.1177/09544119231217603]
Abstract
Deep learning approaches for tooth segmentation employ convolutional neural networks (CNNs) or Transformers to derive tooth feature maps from extensive training datasets. Tooth segmentation serves as a critical prerequisite for clinical dental analysis and surgical procedures, enabling dentists to comprehensively assess oral conditions and subsequently diagnose pathologies. Over the past decade, deep learning has experienced significant advancements, with researchers introducing efficient models such as U-Net, Mask R-CNN, and Segmentation Transformer (SETR). Building upon these frameworks, scholars have proposed numerous enhancement and optimization modules to attain superior tooth segmentation performance. This paper discusses deep learning methods for tooth segmentation on dental panoramic radiographs (DPRs), cone-beam computed tomography (CBCT) images, intraoral scan (IOS) models, and others. Finally, we outline performance-enhancing techniques and suggest potential avenues for ongoing research. Numerous challenges remain, including data annotation and model generalization limitations. This paper offers insights for future tooth segmentation studies, potentially facilitating broader clinical adoption.
Affiliation(s)
- Xiaokang Chen
- Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China
- Nan Ma
- Faculty of Information and Technology, Beijing University of Technology, Beijing, China
- Engineering Research Center of Intelligence Perception and Autonomous Control, Ministry of Education, Beijing University of Technology, Beijing, China
- Tongkai Xu
- Department of General Dentistry II, Peking University School and Hospital of Stomatology, Beijing, China
- Cheng Xu
- Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China
16. Li J, Cheng B, Niu N, Gao G, Ying S, Shi J, Zeng T. A fine-grained orthodontics segmentation model for 3D intraoral scan data. Comput Biol Med 2024;168:107821. [PMID: 38064844] [DOI: 10.1016/j.compbiomed.2023.107821]
Abstract
With the widespread application of digital orthodontics in the diagnosis and treatment of oral diseases, more and more researchers focus on the accurate segmentation of teeth from intraoral scan data. The accuracy of the segmentation results will directly affect the follow-up diagnosis of dentists. Although the current research on tooth segmentation has achieved promising results, the 3D intraoral scan datasets they use are almost all indirect scans of plaster models, and only contain limited samples of abnormal teeth, so it is difficult to apply them to clinical scenarios under orthodontic treatment. The current issue is the lack of a unified and standardized dataset for analyzing and validating the effectiveness of tooth segmentation. In this work, we focus on deformed teeth segmentation and provide a fine-grained tooth segmentation dataset (3D-IOSSeg). The dataset consists of 3D intraoral scan data from more than 200 patients, with each sample labeled with a fine-grained mesh unit. Meanwhile, 3D-IOSSeg meticulously classified every tooth in the upper and lower jaws. In addition, we propose a fast graph convolutional network for 3D tooth segmentation named Fast-TGCN. In the model, the relationship between adjacent mesh cells is directly established by the naive adjacency matrix to better extract the local geometric features of the tooth. Extensive experiments show that Fast-TGCN can quickly and accurately segment teeth from the mouth with complex structures and outperforms other methods in various evaluation metrics. Moreover, we present the results of multiple classical tooth segmentation methods on this dataset, providing a comprehensive analysis of the field. All code and data will be available at https://github.com/MIVRC/Fast-TGCN.
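The basic mechanism, graph convolution over the mesh-cell adjacency, can be illustrated with the toy PyTorch layer below (symmetric normalization, single linear transform); it conveys the general GCN mechanics rather than the published Fast-TGCN architecture.

```python
import torch
import torch.nn as nn

class ToyMeshGCNLayer(nn.Module):
    """One graph-convolution step over mesh cells: aggregate neighbor features, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, feats, adj):                 # feats: (N, in_dim), adj: (N, N) 0/1
        adj_hat = adj + torch.eye(adj.shape[0])    # add self-loops
        deg = adj_hat.sum(1)
        norm = torch.diag(deg.pow(-0.5))           # symmetric degree normalization
        return torch.relu(self.linear(norm @ adj_hat @ norm @ feats))

cells, in_dim = 200, 15                            # e.g., 15 geometric features per mesh cell (assumed)
adj = (torch.rand(cells, cells) < 0.02).float()
adj = ((adj + adj.T) > 0).float()                  # make the toy adjacency symmetric
layer = ToyMeshGCNLayer(in_dim, 64)
print(layer(torch.rand(cells, in_dim), adj).shape) # torch.Size([200, 64])
```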
Affiliation(s)
- Juncheng Li
- School of Communication Information Engineering, Shanghai University, Shanghai, China
- Bodong Cheng
- School of Computer Science and Technology, East China Normal University, Shanghai, China
- Najun Niu
- School of Stomatology, Nanjing Medical University, Nanjing, China
- Guangwei Gao
- Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing, China
- Shihui Ying
- Department of Mathematics, School of Science, Shanghai University, Shanghai, China
- Jun Shi
- School of Communication Information Engineering, Shanghai University, Shanghai, China
- Tieyong Zeng
- Department of Mathematics, The Chinese University of Hong Kong, New Territories, Hong Kong
17. Kapila S, Vora SR, Rengasamy Venugopalan S, Elnagar MH, Akyalcin S. Connecting the dots towards precision orthodontics. Orthod Craniofac Res 2023;26 Suppl 1:8-19. [PMID: 37968678] [DOI: 10.1111/ocr.12725]
Abstract
Precision orthodontics entails the use of personalized clinical, biological, social and environmental knowledge of each patient for deep individualized clinical phenotyping and diagnosis combined with the delivery of care using advanced customized devices, technologies and biologics. From its historical origins as a mechanotherapy and materials driven profession, the most recent advances in orthodontics in the past three decades have been propelled by technological innovations including volumetric and surface 3D imaging and printing, advances in software that facilitate the derivation of diagnostic details, enhanced personalization of treatment plans and fabrication of custom appliances. Still, the use of these diagnostic and therapeutic technologies is largely phenotype driven, focusing mainly on facial/skeletal morphology and tooth positions. Future advances in orthodontics will involve comprehensive understanding of an individual's biology through omics, a field of biology that involves large-scale rapid analyses of DNA, mRNA, proteins and other biological regulators from a cell, tissue or organism. Such understanding will define individual biological attributes that will impact diagnosis, treatment decisions, risk assessment and prognostics of therapy. Equally important are the advances in artificial intelligence (AI) and machine learning, and its applications in orthodontics. AI is already being used to perform validation of approaches for diagnostic purposes such as landmark identification, cephalometric tracings, diagnosis of pathologies and facial phenotyping from radiographs and/or photographs. Other areas for future discoveries and utilization of AI will include clinical decision support, precision orthodontics, payer decisions and risk prediction. The synergies between deep 3D phenotyping and advances in materials, omics and AI will propel the technological and omics era towards achieving the goal of delivering optimized and predictable precision orthodontics.
Affiliation(s)
- Sunil Kapila
- Strategic Initiatives and Operations, UCLA School of Dentistry, Los Angeles, California, USA
- Siddharth R Vora
- Oral Health Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Mohammed H Elnagar
- Department of Orthodontics, College of Dentistry, University of Illinois Chicago, Chicago, Illinois, USA
- Sercan Akyalcin
- Department of Developmental Biology, Harvard School of Dental Medicine, Boston, Massachusetts, USA
18. Chen G, Qin J, Amor BB, Zhou W, Dai H, Zhou T, Huang H, Shao L. Automatic Detection of Tooth-Gingiva Trim Lines on Dental Surfaces. IEEE Trans Med Imaging 2023;42:3194-3204. [PMID: 37015112] [DOI: 10.1109/tmi.2023.3263161]
Abstract
Detecting the tooth-gingiva trim line from a dental surface plays a critical role in dental treatment planning and aligner 3D printing. Existing methods treat this task as a segmentation problem, which is resolved with geometric deep learning based mesh segmentation techniques. However, these methods can only provide indirect results (i.e., segmented teeth) and suffer from unsatisfactory accuracy due to the incapability of making full use of high-resolution dental surfaces. To this end, we propose a two-stage geometric deep learning framework for automatically detecting tooth-gingiva trim lines from dental surfaces. Our framework consists of a trim line proposal network (TLP-Net) for predicting an initial trim line from the low-resolution dental surface as well as a trim line refinement network (TLR-Net) for refining the initial trim line with the information from the high-resolution dental surface. Specifically, our TLP-Net predicts the initial trim line by fusing the multi-scale features from a U-Net with a proposed residual multi-scale attention fusion module. Moreover, we propose feature bridge modules and a trim line loss to further improve the accuracy. The resulting trim line is then fed to our TLR-Net, which is a deep-based LDDMM model with the high-resolution dental surface as input. In addition, dense connections are incorporated into TLR-Net for improved performance. Our framework provides an automatic solution to trim line detection by making full use of raw high-resolution dental surfaces. Extensive experiments on a clinical dental surface dataset demonstrate that our TLP-Net and TLR-Net are superior trim line detection methods and outperform cutting-edge methods in both qualitative and quantitative evaluations.
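For contrast with segmentation-based approaches, a trim line can be read off a segmented mesh as the set of edges shared by differently labeled faces; the small sketch below shows that post-processing step on a toy mesh. It is a generic illustration, not the TLP-Net/TLR-Net pipeline, which predicts the line directly.

```python
from collections import defaultdict

def trim_line_edges(faces, face_labels):
    """Return vertex-index edges lying between faces with different labels (e.g., tooth vs. gingiva)."""
    edge_faces = defaultdict(list)
    for f_idx, (a, b, c) in enumerate(faces):
        for edge in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(edge))].append(f_idx)
    return [edge for edge, fs in edge_faces.items()
            if len(fs) == 2 and face_labels[fs[0]] != face_labels[fs[1]]]

# toy mesh: two triangles sharing edge (1, 2), labeled tooth (1) and gingiva (0)
faces = [(0, 1, 2), (1, 3, 2)]
labels = [1, 0]
print(trim_line_edges(faces, labels))   # [(1, 2)]
```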
19. Shen X, Zhang C, Jia X, Li D, Liu T, Tian S, Wei W, Sun Y, Liao W. TranSDFNet: Transformer-Based Truncated Signed Distance Fields for the Shape Design of Removable Partial Denture Clasps. IEEE J Biomed Health Inform 2023;27:4950-4960. [PMID: 37471183] [DOI: 10.1109/jbhi.2023.3295387]
Abstract
The ever-growing aging population has led to an increasing need for removable partial dentures (RPDs) since they are typically the least expensive treatment options for partial edentulism. However, the digital design of RPDs remains challenging for dental technicians due to the variety of partially edentulous scenarios and complex combinations of denture components. To accelerate the design of RPDs, we propose a U-shape network incorporated with Transformer blocks to automatically generate RPD clasps, one of the most frequently used RPD components. Unlike existing dental restoration design algorithms, we introduce the voxel-based truncated signed distance field (TSDF) as an intermediate representation, which reduces the sensitivity of the network to resolution and contributes to more smooth reconstruction. Besides, a selective insertion scheme is proposed for solving the memory issue caused by Transformer blocks and enables the algorithm to work well in scenarios with insufficient data. We further design two weighted loss functions to filter out the noisy signals generated from the zero-gradient areas in TSDF. Ablation and comparison studies demonstrate that our algorithm outperforms state-of-the-art reconstruction methods by a large margin and can serve as an intelligent auxiliary in denture design.
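The intermediate representation is a voxel-based truncated signed distance field: signed distances to the surface are clamped to a truncation band and normalized. The sketch below builds a toy TSDF from an analytic sphere SDF; grid resolution and truncation distance are illustrative values, not the paper's settings.

```python
import numpy as np

def truncated_sdf(sdf, tau=1.5):
    """Clamp a signed distance volume to [-tau, tau] and normalize to [-1, 1]."""
    return np.clip(sdf, -tau, tau) / tau

# toy example: analytic SDF of a sphere (radius 8 mm) on a 64^3 voxel grid spanning +/-16 mm
grid = np.stack(np.meshgrid(*[np.linspace(-16, 16, 64)] * 3, indexing="ij"), axis=-1)
sdf = np.linalg.norm(grid, axis=-1) - 8.0        # negative inside, positive outside
tsdf = truncated_sdf(sdf, tau=1.5)
print(tsdf.shape, tsdf.min(), tsdf.max())        # (64, 64, 64) -1.0 1.0
```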
20. Bonny T, Al Nassan W, Obaideen K, Al Mallahi MN, Mohammad Y, El-damanhoury HM. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023;12:1179. [PMID: 37942018] [PMCID: PMC10630586] [DOI: 10.12688/f1000research.140204.1]
Abstract
Artificial Intelligence (AI) technologies play a significant role in and have a substantial impact on various sectors, including healthcare, engineering, the sciences, and smart cities. AI has the potential to improve the quality of patient care and treatment outcomes while minimizing the risk of human error. AI is transforming the dental industry, just as it is revolutionizing other sectors. It is used in dentistry to diagnose dental diseases and provide treatment recommendations. Dental professionals are increasingly relying on AI technology to assist in diagnosis, clinical decision-making, treatment planning, and prognosis prediction across ten dental specialties. One of the most significant advantages of AI in dentistry is its ability to analyze vast amounts of data quickly and accurately, providing dental professionals with valuable insights to enhance their decision-making processes. The purpose of this paper is to identify the artificial intelligence algorithms that are frequently used in dentistry and assess how well they perform in terms of diagnosis, clinical decision-making, treatment, and prognosis prediction in ten dental specialties: dental public health, endodontics, oral and maxillofacial surgery, oral medicine and pathology, oral and maxillofacial radiology, orthodontics and dentofacial orthopedics, pediatric dentistry, periodontics, prosthodontics, and digital dentistry in general. We also show the pros and cons of using AI in each of these specialties. Finally, we present the limitations of using AI in dentistry, which make it incapable of replacing dental personnel; dentists should consider AI a complementary benefit and not a threat.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
| | - Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
| | - Khaled Obaideen
- Sustainable Energy and Power Systems Research Centre, RISE, University of Sharjah, Sharjah, 27272, United Arab Emirates
| | - Maryam Nooman Al Mallahi
- Department of Mechanical and Aerospace Engineering, United Arab Emirates University, Al Ain City, Abu Dhabi, 27272, United Arab Emirates
| | - Yara Mohammad
- College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
| | - Hatem M. El-damanhoury
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates
| |
|
21
|
Gu Z, Wu Z, Dai N. Image generation technology for functional occlusal pits and fissures based on a conditional generative adversarial network. PLoS One 2023; 18:e0291728. [PMID: 37725620 PMCID: PMC10508633 DOI: 10.1371/journal.pone.0291728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Accepted: 09/02/2023] [Indexed: 09/21/2023] Open
Abstract
The occlusal surfaces of natural teeth have complex features of functional pits and fissures. These morphological features directly affect the occlusal state of the upper and lower teeth. An image generation technology for functional occlusal pits and fissures is proposed to address the lack of local detailed crown surface features in existing dental restoration methods. First, tooth depth image datasets were constructed using an orthogonal projection method. Second, the optimization of the model parameters was guided by introducing a jaw position spatial constraint, the L1 loss, and the perceptual loss functions. Finally, two image quality evaluation metrics were applied to evaluate the quality of the generated images, and the dental crown was deformed using the generated occlusal pits and fissures as constraints for comparison with expert data. The results showed that the images generated by the network constructed in this study had high quality and that the detailed pit and fissure features on the crown were effectively restored, with a standard deviation of 0.1802 mm relative to the expert-designed tooth crown models.
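To make the loss design above concrete, here is a hedged PyTorch sketch of a conditional-GAN generator objective that combines an adversarial term with L1 and VGG-based perceptual terms. The jaw-position spatial constraint is omitted, and the weights lambda_l1 and lambda_perc are illustrative placeholders rather than values from the paper.

# Hedged sketch (PyTorch assumed): conditional-GAN generator loss with added
# L1 and perceptual terms, as described in the abstract.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    """L1 distance between frozen VGG-16 features of generated and real images."""
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, fake, real):
        # Depth images are single-channel; repeat to 3 channels for VGG.
        return nn.functional.l1_loss(self.features(fake.repeat(1, 3, 1, 1)),
                                     self.features(real.repeat(1, 3, 1, 1)))

def generator_loss(discriminator, perceptual, fake, real, condition,
                   lambda_l1=100.0, lambda_perc=10.0):
    """Adversarial + L1 + perceptual objective for the generator."""
    d_fake = discriminator(torch.cat([condition, fake], dim=1))
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))
    return (adversarial
            + lambda_l1 * nn.functional.l1_loss(fake, real)
            + lambda_perc * perceptual(fake, real))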
Affiliation(s)
- Zhaodan Gu
- Jiangsu Automation Research Institute, Lianyungang, P.R. China
| | - Zhilei Wu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
| | - Ning Dai
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
| |
|
22
|
Liu J, Hao J, Lin H, Pan W, Yang J, Feng Y, Wang G, Li J, Jin Z, Zhao Z, Liu Z. Deep learning-enabled 3D multimodal fusion of cone-beam CT and intraoral mesh scans for clinically applicable tooth-bone reconstruction. PATTERNS (NEW YORK, N.Y.) 2023; 4:100825. [PMID: 37720330 PMCID: PMC10499902 DOI: 10.1016/j.patter.2023.100825] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 03/24/2023] [Accepted: 07/21/2023] [Indexed: 09/19/2023]
Abstract
High-fidelity three-dimensional (3D) models of tooth-bone structures are valuable for virtual dental treatment planning; however, they require integrating data from cone-beam computed tomography (CBCT) and intraoral scans (IOS) using methods that are either error-prone or time-consuming. Hence, this study presents Deep Dental Multimodal Fusion (DDMF), an automatic multimodal framework that reconstructs 3D tooth-bone structures using CBCT and IOS. Specifically, the DDMF framework comprises CBCT and IOS segmentation modules as well as a multimodal reconstruction module with novel pixel representation learning architectures, prior knowledge-guided losses, and geometry-based 3D fusion techniques. Experiments on real-world large-scale datasets revealed that DDMF achieved superior segmentation performance on CBCT and IOS, achieving a 0.17 mm average symmetric surface distance (ASSD) for 3D fusion with a substantial processing time reduction. Additionally, clinical applicability studies have demonstrated DDMF's potential for accurately simulating tooth-bone structures throughout the orthodontic treatment process.
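For reference, the average symmetric surface distance reported above can be computed from points sampled on the two surfaces as in this short NumPy/SciPy sketch; it is an illustration of the metric, not the authors' evaluation code.

# Illustrative sketch: average symmetric surface distance (ASSD) between two
# reconstructed surfaces, computed on points sampled from each mesh.
import numpy as np
from scipy.spatial import cKDTree

def assd(points_a, points_b):
    """points_a: (N, 3) and points_b: (M, 3) surface samples, in mm."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # A -> B nearest distances
    d_ba, _ = cKDTree(points_a).query(points_b)  # B -> A nearest distances
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))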
Affiliation(s)
- Jiaxiang Liu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Hangzhou 310000, China
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China
| | - Jin Hao
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Harvard School of Dental Medicine, Harvard University, Boston, MA 02115, USA
| | - Hangzheng Lin
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
| | - Wei Pan
- OPT Machine Vision Tech Co., Ltd., Tokyo 135-0064, Japan
| | - Jianfei Yang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
| | - Yang Feng
- Angelalign Inc., Shanghai 200433, China
| | - Gaoang Wang
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
| | - Jin Li
- Department of Stomatology, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People’s Hospital, Shenzhen 518025, China
| | - Zuolin Jin
- Department of Orthodontics, School of Stomatology, Air Force Medical University, Xi’an 710032, China
| | - Zhihe Zhao
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
| | - Zuozhu Liu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Hangzhou 310000, China
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
| |
|
23
|
Vinayahalingam S, Kempers S, Schoep J, Hsu TMH, Moin DA, van Ginneken B, Flügge T, Hanisch M, Xi T. Intra-oral scan segmentation using deep learning. BMC Oral Health 2023; 23:643. [PMID: 37670290 PMCID: PMC10481506 DOI: 10.1186/s12903-023-03362-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 08/26/2023] [Indexed: 09/07/2023] Open
Abstract
OBJECTIVE Intra-oral scans and gypsum cast scans (OS) are widely used in orthodontics, prosthetics, implantology, and orthognathic surgery to plan patient-specific treatments, which require teeth segmentations with high accuracy and resolution. Manual teeth segmentation, the gold standard up until now, is time-consuming, tedious, and observer-dependent. This study aims to develop an automated teeth segmentation and labeling system using deep learning. MATERIAL AND METHODS As a reference, 1750 OS were manually segmented and labeled. A deep-learning approach based on PointCNN and 3D U-Net, in combination with a rule-based heuristic algorithm and a combinatorial search algorithm, was trained and validated on 1400 OS. Subsequently, the trained algorithm was applied to a test set consisting of 350 OS. The intersection over union (IoU), as a measure of accuracy, was calculated to quantify the degree of similarity between the annotated ground truth and the model predictions. RESULTS The model achieved accurate teeth segmentations with a mean IoU score of 0.915. The FDI labels of the teeth were predicted with a mean accuracy of 0.894. Visual inspection showed excellent positional agreement between the automatically and manually segmented teeth components, with minor flaws mostly seen at the edges. CONCLUSION The proposed method forms a promising foundation for time-effective and observer-independent teeth segmentation and labeling on intra-oral scans. CLINICAL SIGNIFICANCE Deep learning may assist clinicians in virtual treatment planning in orthodontics, prosthetics, implantology, and orthognathic surgery. The impact of using such models in clinical practice should be explored.
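The IoU metric used above can be computed per tooth label and averaged, as in the following NumPy sketch; treating label 0 as gingiva/background is an assumption made here for illustration only.

# Illustrative sketch: mean intersection-over-union (IoU) over tooth classes,
# given per-point (or per-face) ground-truth and predicted labels.
import numpy as np

def mean_iou(y_true, y_pred, ignore_label=0):
    """y_true, y_pred: 1-D integer label arrays; label 0 assumed to be gingiva."""
    ious = []
    for label in np.unique(y_true):
        if label == ignore_label:
            continue
        gt, pr = (y_true == label), (y_pred == label)
        union = np.logical_or(gt, pr).sum()
        if union > 0:
            ious.append(np.logical_and(gt, pr).sum() / union)
    return float(np.mean(ious))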
Affiliation(s)
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
- Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
| | - Steven Kempers
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
| | - Julian Schoep
- Promaton Co. Ltd, 1076 GR, Amsterdam, The Netherlands
| | - Tzu-Ming Harry Hsu
- MIT Computer Science & Artificial Intelligence Laboratory, 32 Vassar St, Cambridge, MA, 02139, USA
| | | | - Bram van Ginneken
- Department of Radiology, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
| | - Tabea Flügge
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität Zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203, Berlin, Germany.
| | - Marcel Hanisch
- Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
- Promaton Co. Ltd, 1076 GR, Amsterdam, The Netherlands
| | - Tong Xi
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
| |
|
24
|
Liu Y, Li W, Liu J, Chen H, Yuan Y. GRAB-Net: Graph-Based Boundary-Aware Network for Medical Point Cloud Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2776-2786. [PMID: 37023152 DOI: 10.1109/tmi.2023.3265000] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Point cloud segmentation is fundamental in many medical applications, such as aneurysm clipping and orthodontic planning. Recent methods mainly focus on designing powerful local feature extractors and generally overlook segmentation around the boundaries between objects, which is harmful to clinical practice and degrades overall segmentation performance. To remedy this problem, we propose a GRAph-based Boundary-aware Network (GRAB-Net) with three modules, a Graph-based Boundary-perception Module (GBM), an Outer-boundary Context-assignment Module (OCM), and an Inner-boundary Feature-rectification Module (IFM), for medical point cloud segmentation. Aiming to improve segmentation performance around boundaries, GBM is designed to detect boundaries and interchange complementary information between semantic and boundary features in the graph domain, where semantics-boundary correlations are modelled globally and informative clues are exchanged by graph reasoning. Furthermore, to reduce the context confusion that degrades segmentation performance outside the boundaries, OCM is proposed to construct a contextual graph, where dissimilar contexts are assigned to points of different categories guided by geometrical landmarks. In addition, we advance IFM to distinguish ambiguous features inside boundaries in a contrastive manner, where boundary-aware contrast strategies are proposed to facilitate discriminative representation learning. Extensive experiments on two public datasets, IntrA and 3DTeethSeg, demonstrate the superiority of our method over state-of-the-art methods.
|
25
|
Jana A, Maiti A, Metaxas DN. A Critical Analysis of the Limitation of Deep Learning based 3D Dental Mesh Segmentation Methods in Segmenting Partial Scans. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-7. [PMID: 38082617 DOI: 10.1109/embc40787.2023.10339972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Tooth segmentation from intraoral scans is a crucial part of digital dentistry, and many deep learning-based tooth segmentation algorithms have been developed for this task. In most cases, high accuracy has been achieved; however, most of the available tooth segmentation techniques make an implicit, restrictive assumption of a full jaw model and report accuracy based on full jaw models. Medically, however, in certain cases a full jaw tooth scan is not required or may not be available. Given this practical issue, it is important to understand the robustness of currently available, widely used deep learning-based tooth segmentation techniques. For this purpose, we applied available segmentation techniques to partial intraoral scans and discovered that the available deep learning techniques under-perform drastically. The analysis and comparison presented in this work help in understanding the severity of the problem and in developing robust tooth segmentation techniques that do not rely on the strong assumption of a full jaw model. Clinical relevance- Deep learning-based tooth mesh segmentation algorithms have achieved high accuracy. In the clinical setting, the robustness of deep learning-based methods is of utmost importance. We discovered that the high-performing tooth segmentation methods under-perform when segmenting partial intraoral scans. In our current work, we conduct extensive experiments to show the extent of this problem. We also discuss why adding partial scans to the training data of tooth segmentation models is non-trivial. An in-depth understanding of this problem can help in developing robust tooth segmentation techniques.
|
26
|
Leclercq M, Ruellas A, Gurgel M, Yatabe M, Bianchi J, Cevidanes L, Styner M, Paniagua B, Prieto JC. DENTALMODELSEG: FULLY AUTOMATED SEGMENTATION OF UPPER AND LOWER 3D INTRA-ORAL SURFACES. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2023; 2023:10.1109/isbi53787.2023.10230397. [PMID: 38505097 PMCID: PMC10949221 DOI: 10.1109/isbi53787.2023.10230397] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/21/2024]
Abstract
In this paper, we present a deep learning-based method for surface segmentation. This technique consists of acquiring 2D views and extracting features from the surface such as the normal vectors. The rendered images are analyzed with a 2D convolutional neural network, such as a UNET. We test our method in a dental application for the segmentation of dental crowns. The neural network is trained for multi-class segmentation, using image labels as ground truth. A 5-fold cross-validation was performed, and the segmentation task achieved an average Dice of 0.97, sensitivity of 0.98 and precision of 0.98. Our method and algorithms are available as a 3DSlicer extension.
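The reported Dice, sensitivity, and precision can all be derived from the same confusion counts, as in this small NumPy sketch; it is illustrative only and not the authors' evaluation pipeline.

# Illustrative sketch: per-class Dice, sensitivity and precision for a
# multi-class segmentation, computed from flattened label arrays.
import numpy as np

def segmentation_scores(y_true, y_pred, label):
    """Assumes the class `label` is present in at least one of the two maps."""
    gt, pr = (y_true == label), (y_pred == label)
    tp = np.logical_and(gt, pr).sum()
    fp = np.logical_and(~gt, pr).sum()
    fn = np.logical_and(gt, ~pr).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return dice, sensitivity, precision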
Affiliation(s)
| | | | | | | | | | | | - Martin Styner
- University of North Carolina, Chapel Hill, United States
| | | | | |
|
27
|
Murphy SJ, Lee S, Scharm JC, Kim S, Amin AA, Wu TH, Lu WE, Ni A, Ko CC, Fields HW, Deguchi T. Comparison of maxillary anterior tooth movement between Invisalign and fixed appliances. Am J Orthod Dentofacial Orthop 2023:S0889-5406(23)00032-X. [PMID: 36801092 DOI: 10.1016/j.ajodo.2022.10.024] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 10/01/2022] [Accepted: 10/01/2022] [Indexed: 02/17/2023]
Abstract
INTRODUCTION This research project aimed to compare the amount of maxillary incisor and canine movement between Invisalign and fixed orthodontic appliances using artificial intelligence and to identify any limitations of Invisalign. METHODS Sixty patients (Invisalign, n = 30; braces, n = 30) were randomly selected from the Ohio State University Graduate Orthodontic Clinic archive. Peer Assessment Rating (PAR) analysis was used to indicate the severity of the patients in both groups. To analyze incisor and canine movement, specific landmarks were identified on the incisors and canines using an artificial intelligence framework, two-stage mesh deep learning. Total average tooth movement in the maxilla and individual (incisor and canine) tooth movement in 6 directions (buccolingual, mesiodistal, vertical, tipping, torque, rotation) were then analyzed at a significance level of α = 0.05. RESULTS Based on the posttreatment Peer Assessment Rating scores, the quality of finished patients in both groups was similar. For the maxillary incisors and canines, there was a significant difference in movement between Invisalign and conventional appliances in all 6 movement directions (P <0.05). The greatest differences were in rotation and tipping of the maxillary canine, along with incisor and canine torque. The smallest statistical differences observed for incisors and canines were in crown translational tooth movement in the mesiodistal and buccolingual directions. CONCLUSIONS When comparing fixed orthodontic appliances to Invisalign, patients treated with fixed appliances were found to have significantly more maxillary tooth movement in all directions, especially rotation and tipping of the maxillary canine.
Affiliation(s)
- Shaun J Murphy
- Division of Orthodontics, College of Dentistry, Ohio State University, Columbus, Ohio
| | - Sanghee Lee
- Division of Orthodontics, College of Dentistry, Ohio State University, Columbus, Ohio
| | - Joshua C Scharm
- Division of Orthodontics, College of Dentistry, Ohio State University, Columbus, Ohio
| | - Stella Kim
- Division of Orthodontics, College of Dentistry, Ohio State University, Columbus, Ohio
| | - Aya A Amin
- Division of Orthodontics, College of Dentistry, Ohio State University, Columbus, Ohio
| | - Tai-Hsien Wu
- Division of Orthodontics, College of Dentistry, Ohio State University, Columbus, Ohio
| | - Wei-En Lu
- Division of Biostatistics, The Ohio State University College of Public Health, Columbus, Ohio
| | - Ai Ni
- Division of Biostatistics, The Ohio State University College of Public Health, Columbus, Ohio
| | - Ching-Chang Ko
- Department of Rehabilitative and Reconstructive Dentistry Division of Orthodontics and Prosthodontics, University of Louisville, School of Dentistry, Louisville, Ky
| | - Henry W Fields
- Department of Rehabilitative and Reconstructive Dentistry Division of Orthodontics and Prosthodontics, University of Louisville, School of Dentistry, Louisville, Ky
| | - Toru Deguchi
- Department of Rehabilitative and Reconstructive Dentistry Division of Orthodontics and Prosthodontics, University of Louisville, School of Dentistry, Louisville, Ky.
| |
|
28
|
Liu Z, He X, Wang H, Xiong H, Zhang Y, Wang G, Hao J, Feng Y, Zhu F, Hu H. Hierarchical Self-Supervised Learning for 3D Tooth Segmentation in Intra-Oral Mesh Scans. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:467-480. [PMID: 36378797 DOI: 10.1109/tmi.2022.3222388] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Accurately delineating individual teeth and the gingiva in three-dimensional (3D) intraoral scanned (IOS) mesh data plays a pivotal role in many digital dental applications, e.g., orthodontics. Recent research shows that deep learning-based methods can achieve promising results for 3D tooth segmentation; however, most of them rely on high-quality labeled datasets, which are usually of small scale, as annotating IOS meshes requires intensive human effort. In this paper, we propose a novel self-supervised learning framework, named STSNet, to boost the performance of 3D tooth segmentation by leveraging large-scale unlabeled IOS data. The framework follows two-stage training, i.e., pre-training and fine-tuning. In pre-training, contrastive losses at three hierarchical levels, i.e., point-level, region-level, and cross-level, are proposed for unsupervised representation learning on a set of predefined matched points from different augmented views. The pre-trained segmentation backbone is further fine-tuned in a supervised manner with a small number of labeled IOS meshes. With the same amount of annotated samples, our method can achieve an mIoU of 89.88%, significantly outperforming the supervised counterparts. The performance gain becomes more remarkable when only a small amount of labeled samples is available. Furthermore, STSNet can achieve better performance with only 40% of the annotated samples compared with the fully supervised baselines. To the best of our knowledge, we present the first attempt at unsupervised pre-training for 3D tooth segmentation, demonstrating its strong potential for reducing the human effort required for annotation and verification.
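As an illustration of the point-level contrastive idea, the sketch below implements an InfoNCE-style loss over matched points from two augmented views in PyTorch; STSNet's region-level and cross-level terms, and its exact formulation, are not reproduced here.

# Hedged sketch (PyTorch): point-level contrastive loss for matched points
# from two augmented views of the same scan.
import torch
import torch.nn.functional as F

def point_contrastive_loss(feat_view1, feat_view2, temperature=0.07):
    """feat_view1, feat_view2: (N, C) features of N matched points."""
    z1 = F.normalize(feat_view1, dim=1)
    z2 = F.normalize(feat_view2, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matched points are positives (diagonal); all other pairs are negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))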
|
29
|
Arsiwala-Scheppach LT, Chaurasia A, Müller A, Krois J, Schwendicke F. Machine Learning in Dentistry: A Scoping Review. J Clin Med 2023; 12:937. [PMID: 36769585 PMCID: PMC9918184 DOI: 10.3390/jcm12030937] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 01/06/2023] [Accepted: 01/23/2023] [Indexed: 01/27/2023] Open
Abstract
Machine learning (ML) is being increasingly employed in dental research and application. We aimed to systematically compile studies using ML in dentistry and assess their methodological quality, including the risk of bias and reporting standards. We evaluated studies employing ML in dentistry published from 1 January 2015 to 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. We assessed publication trends and the distribution of ML tasks (classification, object detection, semantic segmentation, instance segmentation, and generation) in different clinical fields. We appraised the risk of bias and adherence to reporting standards, using the QUADAS-2 and TRIPOD checklists, respectively. Out of 183 identified studies, 168 were included, focusing on various ML tasks and employing a broad range of ML models, input data, data sources, strategies to generate reference tests, and performance metrics. Classification tasks were most common. Forty-two different metrics were used to evaluate model performances, with accuracy, sensitivity, precision, and intersection-over-union being the most common. We observed considerable risk of bias and moderate adherence to reporting standards which hampers replication of results. A minimum (core) set of outcome and outcome metrics is necessary to facilitate comparisons across studies.
Affiliation(s)
- Lubaina T. Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
| | - Akhilanand Chaurasia
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Department of Oral Medicine and Radiology, King George’s Medical University, Lucknow 226003, India
| | - Anne Müller
- Pharmacovigilance Institute (Pharmakovigilanz- und Beratungszentrum, PVZ) for Embryotoxicology, Institute of Clinical Pharmacology and Toxicology, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany
| | - Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
| | - Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
| |
|
30
|
Ma L, Lian C, Kim D, Xiao D, Wei D, Liu Q, Kuang T, Ghanbari M, Li G, Gateno J, Shen SGF, Wang L, Shen D, Xia JJ, Yap PT. Bidirectional prediction of facial and bony shapes for orthognathic surgical planning. Med Image Anal 2023; 83:102644. [PMID: 36272236 PMCID: PMC10445637 DOI: 10.1016/j.media.2022.102644] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 07/18/2022] [Accepted: 09/27/2022] [Indexed: 11/07/2022]
Abstract
This paper proposes a deep learning framework to encode subject-specific transformations between facial and bony shapes for orthognathic surgical planning. Our framework involves a bidirectional point-to-point convolutional network (P2P-Conv) to predict the transformations between facial and bony shapes. P2P-Conv is an extension of the state-of-the-art P2P-Net and leverages dynamic point-wise convolution (i.e., PointConv) to capture local-to-global spatial information. Data augmentation is carried out in the training of P2P-Conv with multiple point subsets from the facial and bony shapes. During inference, network outputs generated for multiple point subsets are combined into a dense transformation. Finally, non-rigid registration using the coherent point drift (CPD) algorithm is applied to generate surface meshes based on the predicted point sets. Experimental results on real-subject data demonstrate that our method substantially improves the prediction of facial and bony shapes over state-of-the-art methods.
Affiliation(s)
- Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
| | - Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dongming Wei
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
| | - Maryam Ghanbari
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Guoshi Li
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
| | - Steve G F Shen
- Shanghai Ninth Hospital, Shanghai Jiaotong University College of Medicine, Shanghai 200025, China
| | - Li Wang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA.
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
| |
|
31
|
Kakehbaraei S, Arvanaghi R, Seyedarabi H, Esmaeili F, Zenouz AT. 3D tooth segmentation in cone-beam computed tomography images using distance transform. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
32
|
Woo H, Jha N, Kim YJ, Sung SJ. Evaluating the accuracy of automated orthodontic digital setup models. Semin Orthod 2022. [DOI: 10.1053/j.sodo.2022.12.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
33
|
Ma T, Yang Y, Zhai J, Yang J, Zhang J. A Tooth Segmentation Method Based on Multiple Geometric Feature Learning. Healthcare (Basel) 2022; 10:2089. [PMID: 36292536 PMCID: PMC9601705 DOI: 10.3390/healthcare10102089] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Accepted: 10/18/2022] [Indexed: 08/10/2023] Open
Abstract
Tooth segmentation is an important aspect of virtual orthodontic systems. In some existing deep learning-based tooth segmentation methods, the feature learning of point coordinate information and normal vector information is not effectively distinguished, so the two types of feature information do not complement each other effectively. To address this problem, a tooth segmentation method based on multiple geometric feature learning is proposed in this paper. First, a spatial transformation (T-Net) module is used to align the dental model mesh features. Second, a multiple geometric feature learning module is designed to encode and enhance the centroid coordinates and normal vectors of each triangular mesh cell, highlighting the differences between the geometric features of different cells. Finally, local-to-global feature fusion, feature downscaling, and channel optimization are accomplished layer by layer using a multilayer perceptron (MLP) and efficient channel attention (ECA). The experimental results show that our algorithm achieves better accuracy and efficiency of tooth segmentation and can assist dentists in their treatment work.
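For readers unfamiliar with the ECA block mentioned above, here is a minimal PyTorch sketch applied to per-point features; the kernel size is a placeholder, whereas the original ECA formulation derives it adaptively from the channel count.

# Hedged sketch (PyTorch): a minimal efficient channel attention (ECA) block
# applied to per-point features of shape (batch, channels, points).
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k=5):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                      # x: (B, C, N)
        w = x.mean(dim=2, keepdim=True)        # (B, C, 1) global context
        w = self.conv(w.transpose(1, 2))       # 1-D conv across channels
        w = self.sigmoid(w.transpose(1, 2))    # (B, C, 1) channel weights
        return x * w                           # re-weight each channel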
|
34
|
Efficient tooth gingival margin line reconstruction via adversarial learning. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
35
|
Chandrashekar G, AlQarni S, Bumann EE, Lee Y. Collaborative deep learning model for tooth segmentation and identification using panoramic radiographs. Comput Biol Med 2022; 148:105829. [PMID: 35868047 DOI: 10.1016/j.compbiomed.2022.105829] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 04/04/2022] [Accepted: 07/03/2022] [Indexed: 11/27/2022]
Abstract
Panoramic radiographs are an integral part of effective dental treatment planning, supporting dentists in identifying impacted teeth, infections, malignancies, and other dental issues. However, screening for anomalies solely based on a dentist's assessment may result in diagnostic inconsistency, posing difficulties in developing a successful treatment plan. Recent advancements in deep learning-based segmentation and object detection algorithms have enabled predictable and practical identification to assist in the evaluation of a patient's mineralized oral health, enabling dentists to construct a more successful treatment plan. However, there has been a lack of effort to develop collaborative models that enhance learning performance by leveraging individual models. This article describes a novel technique for enabling collaborative learning by incorporating tooth segmentation and identification models created independently from panoramic radiographs. This collaborative technique permits the aggregation of tooth segmentation and identification to produce enhanced results by recognizing and numbering existing teeth (up to 32 teeth). The experimental findings indicate that the proposed collaborative model is significantly more effective than individual learning models (e.g., 98.77% vs. 96% and 98.44% vs. 91% for tooth segmentation and recognition, respectively). Additionally, our models outperform the state-of-the-art segmentation and identification research. We demonstrated the effectiveness of collaborative learning in detecting and segmenting teeth in a variety of complex situations, including healthy dentition, missing teeth, orthodontic treatment in progress, and dentition with dental implants.
Affiliation(s)
- Geetha Chandrashekar
- Department of Computer Science Electrical Engineering, University of Missouri, Kansas City, MO, USA.
| | - Saeed AlQarni
- Department of Computer Science Electrical Engineering, University of Missouri, Kansas City, MO, USA; Department of Computing and Informatics, Saudi Electronic University, Saudi Arabia.
| | - Erin Ealba Bumann
- Department of Oral and Craniofacial Sciences, University of Missouri, Kansas City, MO, USA.
| | - Yugyung Lee
- Department of Computer Science Electrical Engineering, University of Missouri, Kansas City, MO, USA.
| |
|
36
|
Liu D, Tian Y, Zhang Y, Gelernter J, Wang X. Heterogeneous data fusion and loss function design for tooth point cloud segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07379-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
37
|
A Dual Discriminator Adversarial Learning Approach for Dental Occlusal Surface Reconstruction. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1933617. [PMID: 35449834 PMCID: PMC9018184 DOI: 10.1155/2022/1933617] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 03/12/2022] [Indexed: 11/18/2022]
Abstract
Objective. Restoring the correct masticatory function of partially edentulous patients is a challenging task, primarily due to the complex tooth morphology that varies between individuals. Although some deep learning-based approaches have been proposed for dental restorations, most of them do not consider the influence of dental biological characteristics on occlusal surface reconstruction. Description. In this article, we propose a novel dual discriminator adversarial learning network to address these challenges. In particular, this network architecture integrates two models: a dilated convolution-based generative model and a dual global-local discriminative model. While the generative model adopts dilated convolution layers to generate a feature representation that preserves clear tissue structure, the dual discriminative model makes use of two discriminators to jointly distinguish whether the input is real or fake. The global discriminator focuses on the missing teeth and adjacent teeth to assess whether the result is coherent as a whole, whereas the local discriminator attends only to the defective teeth to ensure the local consistency of the generated dental crown. Results. Experiments on 1000 real-world patient dental samples demonstrate the effectiveness of our method. For quantitative comparison, image quality metrics are used to measure the similarity of the generated occlusal surface, and the root mean square error between the generated result and the target crown obtained by our method is 0.114 mm. In qualitative analysis, the proposed approach generates more reasonable dental biological morphology. Conclusion. The results demonstrate that our method significantly outperforms the state-of-the-art methods in occlusal surface reconstruction. Importantly, the designed occlusal surface has sufficient anatomical morphology of natural teeth and superior clinical application value.
|
38
|
Zhao Y, Zhang L, Liu Y, Meng D, Cui Z, Gao C, Gao X, Lian C, Shen D. Two-Stream Graph Convolutional Network for Intra-Oral Scanner Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:826-835. [PMID: 34714743 DOI: 10.1109/tmi.2021.3124217] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Precise segmentation of teeth from intra-oral scanner images is an essential task in computer-aided orthodontic surgical planning. The state-of-the-art deep learning-based methods often simply concatenate the raw geometric attributes (i.e., coordinates and normal vectors) of mesh cells to train a single-stream network for automatic intra-oral scanner image segmentation. However, since different raw attributes reveal completely different geometric information, the naive concatenation of different raw attributes at the (low-level) input stage may bring unnecessary confusion in describing and differentiating between mesh cells, thus hampering the learning of high-level geometric representations for the segmentation task. To address this issue, we design a two-stream graph convolutional network (i.e., TSGCN), which can effectively handle inter-view confusion between different raw attributes to more effectively fuse their complementary information and learn discriminative multi-view geometric representations. Specifically, our TSGCN adopts two input-specific graph-learning streams to extract complementary high-level geometric representations from coordinates and normal vectors, respectively. Then, these single-view representations are further fused by a self-attention module to adaptively balance the contributions of different views in learning more discriminative multi-view representations for accurate and fully automatic tooth segmentation. We have evaluated our TSGCN on a real-patient dataset of dental (mesh) models acquired by 3D intraoral scanners. Experimental results show that our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
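The following PyTorch sketch shows one simple way to fuse two input-specific feature streams with learned, softmax-normalised weights; it is a simplified stand-in for TSGCN's self-attention fusion, not the authors' module.

# Hedged sketch (PyTorch): attention-weighted fusion of two feature streams,
# e.g. one learned from cell coordinates and one from normal vectors.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, feat_coord, feat_normal):        # each (B, C, N)
        # One scalar score per stream and per cell, normalised over the streams.
        scores = torch.stack([self.score(feat_coord),
                              self.score(feat_normal)], dim=0)  # (2, B, 1, N)
        weights = torch.softmax(scores, dim=0)
        fused = weights[0] * feat_coord + weights[1] * feat_normal
        return fused                                    # (B, C, N)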
|
39
|
Tooth Defect Segmentation in 3D Mesh Scans Using Deep Learning. ARTIF INTELL 2022. [DOI: 10.1007/978-3-031-20503-3_15] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
42
|
Zhao Y, Zhang L, Yang C, Tan Y, Liu Y, Li P, Huang T, Gao C. 3D Dental model segmentation with graph attentional convolution network. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.09.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
43
|
Carrillo-Perez F, Pecho OE, Morales JC, Paravina RD, Della Bona A, Ghinea R, Pulgar R, Pérez MDM, Herrera LJ. Applications of artificial intelligence in dentistry: A comprehensive review. J ESTHET RESTOR DENT 2021; 34:259-280. [PMID: 34842324 DOI: 10.1111/jerd.12844] [Citation(s) in RCA: 76] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 09/30/2021] [Accepted: 11/09/2021] [Indexed: 12/25/2022]
Abstract
OBJECTIVE To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with a broad insight on the different advances that these technologies and tools have produced, paying special attention to the area of esthetic dentistry and color research. MATERIALS AND METHODS The comprehensive review was conducted in MEDLINE/PubMed, Web of Science, and Scopus databases, for papers published in English language in the last 20 years. RESULTS Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS The insight provided by the present work has reported outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry goes through the design of integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from those advances by developing models allowing a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with a wide application in treatment planning and esthetic dentistry procedures.
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| | - Oscar E Pecho
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
| | - Juan Carlos Morales
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| | - Rade D Paravina
- Department of Restorative Dentistry and Prosthodontics, School of Dentistry, University of Texas Health Science Center at Houston, Houston, Texas, USA
| | - Alvaro Della Bona
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
| | - Razvan Ghinea
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
| | - Rosa Pulgar
- Department of Stomatology, Campus Cartuja, University of Granada, Granada, Spain
| | - María Del Mar Pérez
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
| | - Luis Javier Herrera
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| |
|
44
|
Hao J, Liao W, Zhang YL, Peng J, Zhao Z, Chen Z, Zhou BW, Feng Y, Fang B, Liu ZZ, Zhao ZH. Toward Clinically Applicable 3-Dimensional Tooth Segmentation via Deep Learning. J Dent Res 2021; 101:304-311. [PMID: 34719980 DOI: 10.1177/00220345211040459] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Digital dentistry plays a pivotal role in dental health care. A critical step in many digital dental systems is to accurately delineate individual teeth and the gingiva in 3-dimensional intraoral scanned mesh data. However, previous state-of-the-art methods are either time-consuming or error prone, hindering their clinical applicability. This article presents an accurate, efficient, and fully automated deep learning model trained on a data set of 4,000 intraoral scans annotated by experienced human experts. On a holdout data set of 200 scans, our model achieves a per-face accuracy, average-area accuracy, and area under the receiver operating characteristic curve of 96.94%, 98.26%, and 0.9991, respectively, significantly outperforming the state-of-the-art baselines. In addition, our model takes only about 24 s to generate segmentation outputs, as opposed to >5 min by the baseline and 15 min by human experts. A clinical performance test of 500 patients with malocclusion and/or abnormal teeth shows that 96.9% of the segmentations are satisfactory for clinical applications, 2.9% automatically trigger alarms for human improvement, and only 0.2% need rework. Our research demonstrates the potential for deep learning to improve the efficacy and efficiency of dental treatment and digital dentistry.
Affiliation(s)
- J Hao
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China.,Harvard School of Dental Medicine, Harvard University, Boston, MA, USA
| | - W Liao
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Y L Zhang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - J Peng
- DeepAlign Tech Inc., Ningbo, China
| | - Z Zhao
- DeepAlign Tech Inc., Ningbo, China
| | - Z Chen
- DeepAlign Tech Inc., Ningbo, China
| | - B W Zhou
- Angelalign Research Institute, Angel Align Inc., Shanghai, China
| | - Y Feng
- Angelalign Research Institute, Angel Align Inc., Shanghai, China
| | - B Fang
- Ninth People's Hospital Affiliated to Shanghai Jiao Tong University, Shanghai Research Institute of Stomatology, National Clinical Research Center of Stomatology, Shanghai, China
| | - Z Z Liu
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
| | - Z H Zhao
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China
| |
|
45
|
Tian S, Wang M, Dai N, Ma H, Li L, Fiorenza L, Sun Y, Li Y. DCPR-GAN: Dental Crown Prosthesis Restoration Using Two-stage Generative Adversarial Networks. IEEE J Biomed Health Inform 2021; 26:151-160. [PMID: 34637385 DOI: 10.1109/jbhi.2021.3119394] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Restoring the correct masticatory function of broken teeth is the basis of dental crown prosthesis rehabilitation. However, it is a challenging task, primarily due to the complex and personalized morphology of the occlusal surface. In this article, we address this problem by designing a new two-stage generative adversarial network (GAN) to reconstruct a dental crown surface from a data-driven perspective. Specifically, in the first stage, a conditional GAN (CGAN) is designed to learn the inherent relationship between the defective tooth and the target crown, which solves the problem of occlusal relationship restoration. In the second stage, an improved CGAN is further devised by considering an occlusal groove parsing network (GroNet) and an occlusal fingerprint constraint to enforce the generator to enrich the functional characteristics of the occlusal surface. Experimental results demonstrate that the proposed framework significantly outperforms state-of-the-art deep learning methods in functional occlusal surface reconstruction using a real-world patient database. Moreover, the standard deviation (SD) and root mean square (RMS) error between the generated occlusal surface and the target crown calculated by our method are both less than 0.161 mm. Importantly, the designed dental crown has sufficient anatomical morphology and high clinical applicability.
|
46
|
Schneider L, Niemann A, Beuing O, Preim B, Saalfeld S. MedmeshCNN - Enabling meshcnn for medical surface models. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 210:106372. [PMID: 34474194 DOI: 10.1016/j.cmpb.2021.106372] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Accepted: 08/20/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE MeshCNN is a recently proposed Deep Learning framework that drew attention due to its direct operation on irregular, non-uniform 3D meshes. It outperformed state-of-the-art methods in classification and segmentation tasks of popular benchmarking datasets. The medical domain provides a large amount of complex 3D surface models that may benefit from processing with MeshCNN. However, several limitations prevent outstanding performances on highly diverse medical surface models. Within this work, we propose MedMeshCNN as an expansion dedicated to complex, diverse, and fine-grained medical data. METHODS MedMeshCNN follows the functionality of MeshCNN with a significantly increased memory efficiency that allows retaining patient-specific properties during processing. Furthermore, it enables the segmentation of pathological structures that often come with highly imbalanced class distributions. RESULTS MedMeshCNN achieved an Intersection over Union of 63.24% on a highly complex part segmentation task of intracranial aneurysms and their surrounding vessel structures. Pathological aneurysms were segmented with an Intersection over Union of 71.4%. CONCLUSIONS MedMeshCNN enables the application of MeshCNN on complex, fine-grained medical surface meshes. It considers imbalanced class distributions derived from pathological findings and retains patient-specific properties during processing.
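One common way to address the imbalanced class distributions mentioned above is inverse-frequency class weighting of the cross-entropy loss, sketched below in PyTorch; this is a generic illustration, not MedMeshCNN's specific weighting scheme.

# Hedged sketch (PyTorch): inverse-frequency class weights for cross-entropy,
# useful when one class (e.g. a small pathological region) is rare.
import torch
import torch.nn as nn

def make_weighted_ce(labels, num_classes, eps=1e-6):
    """labels: 1-D LongTensor of training labels over all mesh elements."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    weights = 1.0 / (counts + eps)
    weights = weights / weights.sum() * num_classes   # keep the average weight near 1
    return nn.CrossEntropyLoss(weight=weights)

# Usage: criterion = make_weighted_ce(all_train_labels, num_classes=3)
#        loss = criterion(logits, targets)            # logits: (B, num_classes, E)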
Affiliation(s)
- Lisa Schneider
- Department of Simulation and Graphics, Otto von Guericke University Magdeburg, Germany
| | - Annika Niemann
- Department of Simulation and Graphics, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany.
| | - Oliver Beuing
- Department for Radiology, AMEOS Hospital Bernburg, Germany
| | - Bernhard Preim
- Department of Simulation and Graphics, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
| | - Sylvia Saalfeld
- Department of Simulation and Graphics, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
| |
|
47
|
Tian S, Wang M, Yuan F, Dai N, Sun Y, Xie W, Qin J. Efficient Computer-Aided Design of Dental Inlay Restoration: A Deep Adversarial Framework. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2415-2427. [PMID: 33945473 DOI: 10.1109/tmi.2021.3077334] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Restoring the normal masticatory function of broken teeth is a challenging task, primarily due to the location and size of the defect in a patient's teeth. In recent years, although some representative image-to-image translation methods (e.g., Pix2Pix) can potentially be applied to restore the missing crown surface, most of them fail to generate dental inlay surfaces with realistic crown details (e.g., the occlusal groove) that are critical to the restoration of defective teeth with varying shapes. In this article, we design a computer-aided Deep Adversarial-driven dental Inlay reStoration (DAIS) framework to automatically reconstruct a realistic surface for a defective tooth. Specifically, DAIS consists of a Wasserstein generative adversarial network (WGAN) with a specially designed loss measurement and a new local-global discriminator mechanism. The local discriminator focuses on missing regions to ensure the local consistency of the generated occlusal surface, while the global discriminator considers the defective tooth and the adjacent teeth to assess whether the result is coherent as a whole. Experimental results demonstrate that DAIS is highly effective in dealing with a large area of missing teeth in arbitrary shapes and generates realistic occlusal surface completion. Moreover, the designed watertight inlay prostheses have sufficient anatomical morphology, thus providing higher clinical applicability compared with state-of-the-art methods.
|
48
|
Lang Y, Deng HH, Xiao D, Lian C, Kuang T, Gateno J, Yap PT, Xia JJ. DLLNet: An Attention-Based Deep Learning Method for Dental Landmark Localization on High-Resolution 3D Digital Dental Models. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2021; 12904:478-487. [PMID: 34927177 PMCID: PMC8675275 DOI: 10.1007/978-3-030-87202-1_46] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Dental landmark localization is a fundamental step in analyzing dental models for the planning of orthodontic or orthognathic surgery. However, current clinical practice requires clinicians to manually digitize more than 60 landmarks on 3D dental models. Automatic methods to detect landmarks can release clinicians from the tedious labor of manual annotation and improve localization accuracy. Most existing landmark detection methods fail to capture local geometric contexts, causing large errors and misdetections. We propose an end-to-end learning framework to automatically localize 68 landmarks on high-resolution dental surfaces. Our network hierarchically extracts multi-scale local contextual features along two paths: a landmark localization path and a landmark area-of-interest segmentation path. Higher-level features are learned by combining local-to-global features from the two paths through feature fusion to predict the landmark heatmap and the landmark area segmentation map. An attention mechanism is then applied to the two maps to refine the landmark positions. We evaluated our framework on a real-patient dataset consisting of 77 high-resolution dental surfaces. Our approach achieves an average localization error of 0.42 mm, significantly outperforming related state-of-the-art methods.
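Landmark heatmaps of the kind predicted by such a network are typically supervised with Gaussian targets centred on the annotated landmarks; the NumPy sketch below generates per-vertex targets of that kind, with sigma as a placeholder width, and is not taken from DLLNet.

# Illustrative sketch: per-vertex Gaussian heatmap targets for landmark
# localization on a dental surface (sigma in mm, placeholder value).
import numpy as np

def landmark_heatmaps(vertices, landmarks, sigma=1.0):
    """vertices: (V, 3) mesh vertices; landmarks: (L, 3) annotated points.
    Returns an (L, V) array where each row peaks at the vertices closest to
    the corresponding landmark and decays with a Gaussian of width sigma."""
    d2 = ((vertices[None, :, :] - landmarks[:, None, :]) ** 2).sum(-1)  # (L, V)
    return np.exp(-d2 / (2.0 * sigma ** 2))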
Affiliation(s)
- Yankun Lang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
| | - Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
| | - Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY, USA
| |
Collapse
|
49
|
Chen Q, Huang J, Salehi HS, Zhu H, Lian L, Lai X, Wei K. Hierarchical CNN-based occlusal surface morphology analysis for classifying posterior tooth type using augmented images from 3D dental surface models. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106295. [PMID: 34329895 DOI: 10.1016/j.cmpb.2021.106295] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 07/15/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE 3D digitization of dental models is growing in popularity in dental applications. Classifying tooth type from a single 3D point-cloud model, without the aid of the relative positions of neighboring teeth, remains a challenging task. METHODS In this paper, 8-class posterior tooth type classification (first premolar, second premolar, first molar, and second molar in the maxilla and mandible, respectively) was investigated through convolutional neural network (CNN)-based occlusal surface morphology analysis. Each 3D occlusal surface was transformed into a depth image for CNN-based classification. Considering the logical hierarchy of tooth categories, a hierarchical classification structure was proposed to decompose the 8-class classification task into two-stage cascaded classification subtasks. Image augmentation, including traditional geometric transformations and deep convolutional generative adversarial networks (DCGANs), was applied to each subnetwork and to the cascaded network. RESULTS Results indicate that combining traditionally and DCGAN-augmented images to train the CNN models improves classification performance. We achieve an overall accuracy of 91.35%, macro-precision of 91.49%, macro-recall of 91.29%, and macro-F1 of 0.9139 for 8-class posterior tooth type classification, outperforming other deep learning models. Grad-CAM results show that CNN models trained on our augmented images focus on smaller, more informative regions, improving generalization, and that anatomical landmarks such as cusps, fossae, and grooves serve as the important regions for the cascaded classification model. CONCLUSION This work shows that using basic CNNs to construct a two-stage hierarchical structure achieves the best posterior tooth type classification performance on 3D models without relative position information. The proposed method is easy to train and learns discriminative features from small image regions.
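The two-stage cascade can be pictured with a short PyTorch sketch; the particular coarse split (premolar vs molar before a finer 4-way decision) and the tiny CNN used here are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn

def make_cnn(num_classes):
    # Small CNN over single-channel occlusal depth images.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, num_classes),
    )

stage1 = make_cnn(2)        # coarse stage: premolar vs molar (assumed split)
stage2 = {0: make_cnn(4),   # premolars: first/second x maxilla/mandible
          1: make_cnn(4)}   # molars:    first/second x maxilla/mandible

def predict(depth_img):
    # depth_img: (B, 1, H, W) batch of depth images; returns coarse and fine labels.
    coarse = stage1(depth_img).argmax(dim=1)
    fine = torch.stack([stage2[int(c)](depth_img[i:i + 1]).argmax(dim=1)[0]
                        for i, c in enumerate(coarse)])
    return coarse, fine     # 2 coarse x 4 fine = 8 posterior tooth types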
Collapse
Affiliation(s)
- Qingguang Chen
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China.
| | - Junchao Huang
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
| | - Hassan S Salehi
- Department of Electrical and Computer Engineering, California State University, Chico, 95929, United States
| | - Haihua Zhu
- Hospital of Stomatology of Zhejiang University, Hangzhou, 310018, China
| | - Luya Lian
- Hospital of Stomatology of Zhejiang University, Hangzhou, 310018, China
| | - Xiaomin Lai
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
| | - Kaihua Wei
- School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
| |
Collapse
|
50
|
Etemad L, Wu TH, Heiner P, Liu J, Lee S, Chao WL, Zaytoun ML, Guez C, Lin FC, Jackson CB, Ko CC. Machine learning from clinical data sets of a contemporary decision for orthodontic tooth extraction. Orthod Craniofac Res 2021; 24 Suppl 2:193-200. [PMID: 34031981 DOI: 10.1111/ocr.12502] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Revised: 04/29/2021] [Accepted: 05/14/2021] [Indexed: 11/28/2022]
Abstract
OBJECTIVE To examine the robustness of published machine learning models in predicting extraction vs non-extraction decisions for a diverse US sample population seen by multiple providers. SETTING AND SAMPLE POPULATION A diverse group of 838 patients (208 extraction, 630 non-extraction) was consecutively enrolled. MATERIALS AND METHODS Two sets of input features (117 and 22), including clinical and cephalometric variables, were identified based on previous studies. Random forest (RF) and multilayer perceptron (MLP) models were trained using these feature sets on the sample population and evaluated using measures including accuracy (ACC) and balanced accuracy (BA). A technique for identifying incongruent data was used to explore the underlying characteristics of the data set and to split all samples into 2 groups (G1 and G2) for further model training. RESULTS Performance of the models (75%-79% ACC and 72%-76% BA) on the total sample population was lower than in previous research. Models were retrained and evaluated using G1 and G2 separately, and the individual group MLP models yielded improved accuracy for G1 (96% ACC and 94% BA) and G2 (88% ACC and 85% BA). RF feature ranking showed differences between the top features for G1 (maxillary crowding, mandibular crowding and L1-NB) and G2 (age, mandibular crowding and lower lip to E-plane). CONCLUSIONS An incongruent data pattern exists in a consecutively enrolled patient population. Future work on incongruent-data segregation and advanced artificial intelligence algorithms is needed to improve generalization so that such models are ready to support clinical decision-making.
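A minimal scikit-learn sketch of this evaluation setup, with placeholder data standing in for the 22-feature clinical/cephalometric set; the split, hyperparameters, and random data are assumptions, not the study's protocol.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, balanced_accuracy_score

X = np.random.rand(838, 22)              # placeholder for 22 clinical/cephalometric features
y = np.random.randint(0, 2, size=838)    # placeholder labels: 1 = extraction, 0 = non-extraction
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("MLP", MLPClassifier(max_iter=1000, random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "ACC:", accuracy_score(y_te, pred),
          "BA:", balanced_accuracy_score(y_te, pred))

# RF feature importances can then rank predictors (e.g., crowding measures, L1-NB).
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(rf.feature_importances_)[::-1]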
Collapse
Affiliation(s)
- Lily Etemad
- Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA
| | - Tai-Hsien Wu
- Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA
| | - Parker Heiner
- College of Dentistry, The Ohio State University, Columbus, OH, USA
| | - Jie Liu
- Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA
| | - Sanghee Lee
- Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA
| | - Wei-Lun Chao
- Computer Science and Engineering, College of Engineering, The Ohio State University, Columbus, OH, USA
| | | | | | - Feng-Chang Lin
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | | | - Ching-Chang Ko
- Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA
| |
Collapse
|