51. Gao S, Li X, Li X, Li Z, Deng Y. Transformer based tooth classification from cone-beam computed tomography for dental charting. Comput Biol Med 2022;148:105880. PMID: 35914362. DOI: 10.1016/j.compbiomed.2022.105880.
Abstract
Dental charting is a useful tool in physical examination, dental surgery, and forensic identification. However, manual dental charting faces difficulties such as inaccuracy and the psychological burden in forensic identification. As a critical step of dental charting, tooth classification can be performed automatically on dental cone-beam computed tomography (CBCT) to address these difficulties. In this paper, we build a deep neural network that accepts as input a 3D CBCT image patch containing the region of interest (ROI) of a tooth and outputs the type of the tooth. Although Transformer-based neural networks outperform CNN-based neural networks in many natural image processing tasks, they are difficult to apply to 3D medical images. Therefore, we combine the advantages of the CNN and Transformer structures to improve on existing methods and propose the Grouped Bottleneck Transformer to overcome the drawbacks of the Transformer, namely the requirement for a large training dataset and high computational complexity. We conducted an experiment on a clinical dataset containing 450 training samples and 104 testing samples, in which our network achieved a classification accuracy of 91.3% and an AUC of 99.7%. To further evaluate the effectiveness of our method, we tested our network on the publicly available medical image classification dataset MedMNIST3D. The results show that our network outperforms other networks on 5 of the 6 three-dimensional medical image subsets.
Affiliation(s)
- Shen Gao
- Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, 1098 Xueyuan Avenue, Nanshan District, Shenzhen, 518055, Guangdong, China; Institute of Stomatological Research, Shenzhen University, 1098 Xueyuan Avenue, Nanshan District, Shenzhen, 518055, Guangdong, China; School of Science and Engineering, the Chinese University of Hong Kong, Shenzhen, 2001 Longxiang Avenue, Longgang District, Shenzhen, 518172, Guangdong, China.
- Xuguang Li
- Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, 1098 Xueyuan Avenue, Nanshan District, Shenzhen, 518055, Guangdong, China; Institute of Stomatological Research, Shenzhen University, 1098 Xueyuan Avenue, Nanshan District, Shenzhen, 518055, Guangdong, China.
- Xin Li
- Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, 1098 Xueyuan Avenue, Nanshan District, Shenzhen, 518055, Guangdong, China; Division of Restorative Dental Sciences, Faculty of Dentistry, PPDH 34 Hospital Road, Pok Fu Lam, Hong Kong Special Administrative Region of China.
- Zhen Li
- School of Science and Engineering, the Chinese University of Hong Kong, Shenzhen, 2001 Longxiang Avenue, Longgang District, Shenzhen, 518172, Guangdong, China.
- Yongqiang Deng
- Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, 1098 Xueyuan Avenue, Nanshan District, Shenzhen, 518055, Guangdong, China; Institute of Stomatological Research, Shenzhen University, 1098 Xueyuan Avenue, Nanshan District, Shenzhen, 518055, Guangdong, China.
52. Du M, Wu X, Ye Y, Fang S, Zhang H, Chen M. A Combined Approach for Accurate and Accelerated Teeth Detection on Cone Beam CT Images. Diagnostics (Basel) 2022;12:1679. PMID: 35885584. PMCID: PMC9323385. DOI: 10.3390/diagnostics12071679.
Abstract
Teeth detection and tooth segmentation are essential for processing Cone Beam Computed Tomography (CBCT) images, and their accuracy determines the credibility of subsequent applications such as diagnosis and treatment planning in clinical practice, as well as other research that depends on automatic dental identification. The main problems are complex noise and metal artefacts, which affect the accuracy of teeth detection and segmentation with traditional algorithms. In this study, we proposed a teeth-detection method that avoids these problems and accelerates the operation. In our method, (1) a Convolutional Neural Network (CNN) was employed to classify layer classes; (2) images were chosen for Region of Interest (ROI) cropping; (3) within the ROI regions, a YOLO v3-based, multi-level combined teeth-detection method was used to locate each tooth bounding box; and (4) tooth bounding boxes were obtained on all layers. We compared our method with a Faster R-CNN method commonly used in previous studies. Training and prediction times were shortened by 80% and 62%, respectively. The Object Inclusion Ratio (OIR) of our method was 96.27%, versus 91.40% for the Faster R-CNN method. When testing images with severe noise or with different missing teeth, our method delivered stable results. In conclusion, our method of teeth detection on dental CBCT is practical and reliable, given its high prediction speed and robust detection.
Affiliation(s)
- Mingjun Du
- Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xueying Wu
- Department of Prosthodontics, Shanghai Stomatological Hospital & School of Stomatology, Fudan University and Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai 200040, China
- Ye Ye
- Department of Prosthodontics, Shanghai Stomatological Hospital & School of Stomatology, Fudan University and Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai 200040, China
- Shuobo Fang
- Department of Prosthodontics, Shanghai Stomatological Hospital & School of Stomatology, Fudan University and Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai 200040, China
- Hengwei Zhang
- Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Ming Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Correspondence:
53. Yaren Tekin B, Ozcan C, Pekince A, Yasa Y. An enhanced tooth segmentation and numbering according to FDI notation in bitewing radiographs. Comput Biol Med 2022;146:105547. DOI: 10.1016/j.compbiomed.2022.105547.
54. Artificial Intelligence-Based Prediction of Oroantral Communication after Tooth Extraction Utilizing Preoperative Panoramic Radiography. Diagnostics (Basel) 2022;12:1406. PMID: 35741216. PMCID: PMC9221677. DOI: 10.3390/diagnostics12061406.
Abstract
Oroantral communication (OAC) is a common complication after extraction of upper molars. Thorough preoperative analysis of panoramic radiographs might potentially help predict OAC following tooth extraction. In this exploratory study, we evaluated n = 300 consecutive cases (100 OAC and 200 controls) and trained five deep learning models (VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50) to predict OAC versus non-OAC (binary classification task) from the input images. Further, four oral and maxillofacial experts evaluated the respective panoramic radiographs, and performance metrics (accuracy, area under the curve (AUC), precision, recall, F1-score, and receiver operating characteristic curve) were determined for all diagnostic approaches. Cohen's kappa was used to evaluate the agreement between expert evaluations. The deep learning algorithms reached high specificity (highest specificity 100% for InceptionV3) but low sensitivity (highest sensitivity 42.86% for MobileNetV2). The AUCs of VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50 were 0.53, 0.60, 0.67, 0.51, and 0.56, respectively. Experts 1-4 reached AUCs of 0.550, 0.629, 0.500, and 0.579, respectively. The specificity of the expert evaluations ranged from 51.74% to 95.02%, whereas sensitivity ranged from 14.14% to 59.60%. Cohen's kappa revealed poor agreement among the expert evaluations (Cohen's kappa: 0.1285). The false-negative rate, i.e., the rate of positive cases (OAC) missed by the deep learning algorithms, ranged from 57.14% to 95.24%. Overall, the present data indicate that OAC cannot be sufficiently predicted from preoperative panoramic radiography. Surgeons should not rely solely on panoramic radiography when evaluating the probability of OAC occurrence, and clinical testing for OAC is warranted after each upper-molar extraction.
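The inter-rater agreement statistic this study reports can be sketched in a few lines. The following is an illustrative implementation of Cohen's kappa with made-up rater labels, not the study's code or data:

```python
# Illustrative sketch (not the study's code) of Cohen's kappa for two
# raters' binary calls; the example labels below are made up.
def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(a)
    labels = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

rater1 = [1, 1, 0, 0, 1, 0, 0, 0]  # hypothetical OAC / no-OAC calls
rater2 = [1, 0, 0, 0, 1, 1, 0, 0]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.467
```

A kappa near 0 (as the 0.1285 reported above) means the raters agree little beyond what chance alone would produce, even if their raw percent agreement looks moderate.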
55. A Few-Shot Dental Object Detection Method Based on a Priori Knowledge Transfer. Symmetry (Basel) 2022;14:1129. DOI: 10.3390/sym14061129.
Abstract
With the continuous improvement in oral health awareness, the demand for oral health diagnosis has also increased. Dental object detection is a key step in automated dental diagnosis; however, because of the particularity of medical data, researchers usually cannot obtain sufficient data. Therefore, this study proposes a dental object detection method for small datasets based on tooth semantics, structural-information feature extraction, and a priori knowledge transfer, called the segmentation, points, segmentation, and classification network (SPSC-NET). In the region-of-interest extraction stage, the SPSC-NET converts the dental X-ray image into a priori knowledge information composed of the tooth edges and the semantic segmentation image; the network used to extract this a priori knowledge has a symmetric structure, and it generates the key points of each object instance. Next, it uses the key points of the object instance, together with the dental semantic segmentation image and the dental edge image, to obtain the object instance image, i.e., the position of each tooth. Using 10 training images, the test precision and recall of the SPSC-NET for tooth object center points were between 99% and 100%. In the classification stage, the SPSC-NET uses the single-instance segmentation image generated from the migrated dental object area, the edge image, and the semantic segmentation image as a priori knowledge. Under the premise of using the same deep neural network classification model, the model with a priori knowledge was 20% more accurate than ordinary classification methods. For the overall object detection performance, the SPSC-NET's average precision (AP) was above 92%, better than that of the transfer-based faster region-based convolutional neural network (Faster R-CNN) detection model; its AP and mean intersection-over-union (mIoU) were 14.72% and 19.68% better than the transfer-based Faster R-CNN model, respectively.
56. Tsoromokos N, Parinussa S, Claessen F, Moin DA, Loos BG. Estimation of Alveolar Bone Loss in Periodontitis Using Machine Learning. Int Dent J 2022;72:621-627. PMID: 35570013. PMCID: PMC9485533. DOI: 10.1016/j.identj.2022.02.009.
Abstract
Aim: The objective of this research was to perform a pilot study developing an automatic analysis of periapical radiographs from patients with and without periodontitis for the percentage alveolar bone loss (%ABL) on the approximal surfaces of teeth, using a supervised machine learning model, namely a convolutional neural network (CNN). Material and methods: A total of 1546 approximal sites from 54 participants on mandibular periapical radiographs were manually annotated (MA) for a training set (n = 1308 sites), a validation set (n = 98 sites), and a test set (n = 140 sites). The training and validation sets were used for the development of a CNN algorithm. The algorithm recognised the cemento-enamel junction, the most apical extent of the alveolar crest, the apex, and the surrounding alveolar bone. Results: For the 140 images in the test set, the CNN scored a mean of 23.1 ± 11.8 %ABL, whilst the corresponding value for MA was 27.8 ± 13.8 %ABL. The intraclass correlation (ICC) was 0.601 (P < .001), indicating moderate reliability. Subanalyses for various tooth types and bone loss patterns showed that the ICCs remained significant, and the algorithm performed with excellent reliability for %ABL on nonmolar teeth (incisors, canines, premolars; ICC = 0.763). Conclusions: A CNN algorithm trained on radiographic images showed diagnostic performance with moderate to good reliability for detecting and quantifying %ABL in periapical radiographs.
Affiliation(s)
- Nektarios Tsoromokos
- Department of Periodontology, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
- Bruno G Loos
- Department of Periodontology, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
57. Raj R, Mathew J, Kannath SK, Rajan J. Crossover based technique for data augmentation. Comput Methods Programs Biomed 2022;218:106716. PMID: 35290901. DOI: 10.1016/j.cmpb.2022.106716.
Abstract
Background and objective: Medical image classification problems are frequently constrained by the availability of datasets. Data augmentation has emerged as a data enhancement and enrichment solution to the challenge of limited data. Traditionally, data augmentation techniques are based on linear, label-preserving transformations; however, recent works have demonstrated that even non-linear, non-label-preserving techniques can be unexpectedly effective. This paper proposes a non-linear data augmentation technique for the medical domain and explores its results. Methods: This paper introduces the "crossover technique", a new data augmentation technique for Convolutional Neural Networks in medical image classification problems. Our technique synthesizes a pair of samples by applying two-point crossover to the already available training dataset, creating N new samples from N training samples. The proposed crossover-based data augmentation technique, although non-label-preserving, performed significantly better in terms of increased accuracy and reduced loss for all tested datasets over varied architectures. Results: The proposed method was tested on three publicly available medical datasets with various network architectures. For the mini-MIAS database of mammograms, our method improved the accuracy by 1.47%, achieving 80.15% using the VGG-16 architecture. Our method works for both grey-scale and RGB images: on the PH2 database for skin cancer, it improved the accuracy by 3.57%, achieving 85.71% using the VGG-19 architecture. In addition, our technique improved accuracy on the brain tumor dataset by 0.40%, achieving 97.97% using the VGG-16 architecture. Conclusion: The proposed crossover technique for training a Convolutional Neural Network (CNN) is painless to implement, applying two-point crossover on two images to form new images. The method would go a long way in tackling the challenges of limited datasets and class imbalance in medical image analysis. Our code is available at https://github.com/rishiraj-cs/Crossover-augmentation.
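The two-point crossover operation described in this abstract can be sketched on toy arrays. This is an assumed minimal implementation, not the authors' released code; details such as how the two crossover points are sampled are guesses for illustration:

```python
import numpy as np

# Minimal illustrative sketch of two-point crossover between two training
# images; NOT the authors' code. The crossover points are fixed here, while
# in practice they would typically be sampled at random per pair.
def two_point_crossover(img_a, img_b, p1, p2):
    """Swap the flattened pixel segment [p1:p2) between two same-shape images."""
    flat_a, flat_b = img_a.ravel().copy(), img_b.ravel().copy()
    seg = flat_a[p1:p2].copy()
    flat_a[p1:p2] = flat_b[p1:p2]
    flat_b[p1:p2] = seg
    return flat_a.reshape(img_a.shape), flat_b.reshape(img_b.shape)

# Toy "images": a 4x4 all-zero and all-one patch; crossing over at flat
# positions 4 and 12 exchanges the middle eight pixels of each.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.ones((4, 4), dtype=np.uint8)
child_a, child_b = two_point_crossover(a, b, 4, 12)
print(int(child_a.sum()), int(child_b.sum()))  # → 8 8
```

Each call turns one pair of training images into one pair of synthetic images, which is how N originals yield N new samples as the abstract states.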
Affiliation(s)
- Rishi Raj
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, India.
- Jimson Mathew
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, India.
- Santhosh Kumar Kannath
- Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Kerala, India.
- Jeny Rajan
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, India.
58. Morishita T, Muramatsu C, Seino Y, Takahashi R, Hayashi T, Nishiyama W, Zhou X, Hara T, Katsumata A, Fujita H. Tooth recognition of 32 tooth types by branched single shot multibox detector and integration processing in panoramic radiographs. J Med Imaging (Bellingham) 2022;9:034503. PMID: 35756973. DOI: 10.1117/1.jmi.9.3.034503.
Abstract
Purpose: The purpose of our study was to analyze dental panoramic radiographs and contribute to dentists' diagnosis by automatically extracting the information necessary for reading them. As the initial step, we detected teeth and classified their tooth types in this study. Approach: We propose single-shot multibox detector (SSD) networks with a side branch for 1-class detection without distinguishing the tooth type and for 16-class detection (i.e., the central incisor, lateral incisor, canine, first premolar, second premolar, first molar, second molar, and third molar, distinguished by the upper and lower jaws). In addition, post-processing was conducted to integrate the results of the two networks and categorize them into 32 classes, differentiating between the left and right teeth. The proposed method was applied to 950 dental panoramic radiographs obtained at multiple facilities, including a university hospital and dental clinics. Results: The recognition performance of the SSD with a side branch was better than that of the original SSD. In addition, the detection rate was improved by the integration process. As a result, the detection rate was 99.03%, the number of false detections was 0.29 per image, and the classification rate was 96.79% for 32 tooth types. Conclusions: We propose a method for tooth recognition using object detection and post-processing. The results show the effectiveness of network branching on the recognition performance and the usefulness of post-processing for neural network output.
Affiliation(s)
- Takumi Morishita
- Gifu University, Graduate School of Natural Science and Technology, Department of Intelligence Science and Engineering, Gifu, Japan
- Yuta Seino
- Gifu University, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu, Japan
- Wataru Nishiyama
- Asahi University, School of Dentistry, Department of Oral Radiology, Mizuho, Japan
- Xiangrong Zhou
- Gifu University, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu, Japan
- Takeshi Hara
- Gifu University, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu, Japan
- Akitoshi Katsumata
- Asahi University, School of Dentistry, Department of Oral Radiology, Mizuho, Japan
- Hiroshi Fujita
- Gifu University, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu, Japan
59. AL-Malaise AL-Ghamdi AS, Ragab M, AlGhamdi SA, Asseri AH, Mansour RF, Koundal D. Detection of Dental Diseases through X-Ray Images Using Neural Search Architecture Network. Comput Intell Neurosci 2022;2022:3500552. PMID: 35535186. PMCID: PMC9078756. DOI: 10.1155/2022/3500552.
Abstract
An important aspect of the diagnostic procedure in daily clinical practice is the analysis of dental radiographs, since the dentist must interpret different types of problems related to teeth, including tooth numbers and related diseases. For panoramic radiographs, this paper proposes a convolutional neural network (CNN) that performs multitask classification by classifying the X-ray images into three classes: cavity, filling, and implant. The CNN takes the form of a NASNet model comprising different numbers of max-pooling layers, dropout layers, and activation functions. The data are first augmented and preprocessed, then a multioutput model is constructed, and finally the model is compiled and trained; the evaluation parameters used to analyse the model are the loss and accuracy curves. The model achieved an accuracy greater than 96%, outperforming other existing algorithms.
Affiliation(s)
- Abdullah S. AL-Malaise AL-Ghamdi
- Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Information Systems Department, HECI School, Dar Alhekma University, Jeddah, Saudi Arabia
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Cairo, Egypt
- Amer H. Asseri
- Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Biochemistry Department, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Romany F. Mansour
- Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 72511, Egypt
- Deepika Koundal
- School of Computer Science, University of Petroleum & Energy Studies, Dehradun, India
60. Oh O, Kim Y, Kim D, Hussey DS, Lee SW. Phase retrieval based on deep learning in grating interferometer. Sci Rep 2022;12:6739. PMID: 35469034. PMCID: PMC9038759. DOI: 10.1038/s41598-022-10551-y.
Abstract
Grating interferometry is a promising technique for obtaining differential phase contrast images with an illumination source of low intrinsic transverse coherence. However, retrieving the phase contrast image from the differential phase contrast image (DPCI) is difficult due to the noise and artifacts accumulated during DPCI reconstruction. In this paper, we implemented a deep learning-based phase retrieval method to suppress these artifacts. Conventional deep learning-based denoising requires noisy/clean image pairs, but it is not feasible to obtain a sufficient number of clean images for grating interferometry. We therefore apply a recently developed neural network called Noise2Noise (N2N) that trains on noisy/noisy image pairs. We obtained many DPCIs through combinations of phase-stepping images, and these were used as input/target pairs for N2N training. Applying the N2N network to simulated and measured DPCIs showed that phase contrast images were retrieved with strongly suppressed phase retrieval artifacts. These results can be used in grating interferometer applications that employ the phase-stepping method.
Affiliation(s)
- Ohsung Oh
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Youngju Kim
- Department of Chemistry and Biochemistry, University of Maryland, College Park, MD, 20742, USA; Neutron Physics Group, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA
- Daeseung Kim
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Daniel S Hussey
- Neutron Physics Group, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA
- Seung Wook Lee
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
61. Lin X, Hong D, Zhang D, Huang M, Yu H. Detecting Proximal Caries on Periapical Radiographs Using Convolutional Neural Networks with Different Training Strategies on Small Datasets. Diagnostics (Basel) 2022;12:1047. PMID: 35626203. PMCID: PMC9139265. DOI: 10.3390/diagnostics12051047.
Abstract
The present study aimed to evaluate the performance of convolutional neural networks (CNNs) trained with small datasets using different strategies in the detection of proximal caries at different levels of severity on periapical radiographs. A small dataset containing 800 periapical radiographs was randomly divided into a training and validation dataset (n = 600) and a test dataset (n = 200). A pretrained Cifar-10Net CNN was used in the present study, and different training strategies were applied to train the CNN model independently; these strategies were defined as image recognition (IR), edge extraction (EE), and image segmentation (IS). Metrics such as sensitivity and the area under the receiver operating characteristic curve (AUC) were analysed for the trained CNNs and human observers to evaluate performance in detecting proximal caries. The IR, EE, and IS recognition modes and human observers achieved AUCs of 0.805, 0.860, 0.549, and 0.767, respectively, with the EE recognition mode having the highest value (all p < 0.05). The EE recognition mode was also significantly more sensitive in detecting both enamel and dentin caries than human observers (all p < 0.05). The CNN trained with the EE strategy, the best performer in the present study, showed potential utility in detecting proximal caries on periapical radiographs when using small datasets.
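The AUC values compared in this abstract can be read as a rank statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with entirely hypothetical scores (not the study's data):

```python
# Illustrative sketch of the AUC as the Mann-Whitney statistic: the
# probability that a randomly chosen positive case is scored above a
# randomly chosen negative one, with ties counting one half.
def auc(scores_pos, scores_neg):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.55, 0.4]  # hypothetical scores for caries-positive surfaces
neg = [0.7, 0.5, 0.3, 0.2]   # hypothetical scores for sound surfaces
print(auc(pos, neg))  # → 0.8125
```

On this reading, an AUC of 0.860 (the EE mode above) means that in 86% of positive/negative pairs the positive case is ranked higher.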
Affiliation(s)
- Xiujiao Lin
- Fujian Provincial Engineering Research Center of Oral Biomaterial, School and Hospital of Stomatology, Fujian Medical University, Fuzhou 350005, China
- Department of Prosthodontics, School and Hospital of Stomatology, Fujian Medical University, Fuzhou 350005, China
- Dengwei Hong
- Fujian Provincial Engineering Research Center of Oral Biomaterial, School and Hospital of Stomatology, Fujian Medical University, Fuzhou 350005, China
- Department of Prosthodontics, School and Hospital of Stomatology, Fujian Medical University, Fuzhou 350005, China
- Dong Zhang
- College of Computer and Data Science, Fuzhou University, Fuzhou 350025, China
- Mingyi Huang
- College of Computer and Data Science, Fuzhou University, Fuzhou 350025, China
- Hao Yu
- Fujian Provincial Engineering Research Center of Oral Biomaterial, School and Hospital of Stomatology, Fujian Medical University, Fuzhou 350005, China
- Department of Prosthodontics, School and Hospital of Stomatology, Fujian Medical University, Fuzhou 350005, China
- Department of Applied Prosthodontics, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki 852-8521, Japan
- Correspondence:
62. The role of neural artificial intelligence for diagnosis and treatment planning in endodontics: A qualitative review. Saudi Dent J 2022;34:270-281. PMID: 35692236. PMCID: PMC9177869. DOI: 10.1016/j.sdentj.2022.04.004.
Abstract
Introduction: The role of artificial intelligence (AI) in diagnosing diseases and planning treatment in endodontics is currently increasing. However, findings from individual research studies have not been systematically reviewed and compiled. Hence, this study aimed to systematically review, appraise, and evaluate the neural AI algorithms employed and their efficacy compared with conventional methods in endodontic diagnosis and treatment planning. Methods: The research question focused on a literature search covering different AI algorithms and models for AI-assisted endodontic diagnosis and treatment planning. The search included databases such as Google Scholar, PubMed, and Science Direct, with criteria of primary research papers, published in English, that analysed data on AI and its role in the field of endodontics. Results: The initial search yielded 785 articles; exclusion based on abstract relevance, animal studies, grey literature, and letters to editors narrowed the selection to 11 articles accepted for review. The reviewed data supported the finding that AI can play a crucial role in endodontics, such as identifying apical lesions, classifying and numbering teeth, detecting dental caries, periodontitis, and periapical disease, diagnosing different dental problems, helping dentists make referrals, and helping them plan treatment of dental disorders in a timely and effective manner with greater accuracy. Conclusion: AI with different models, frameworks, and algorithms can help dentists diagnose and manage endodontic problems with greater accuracy. However, the endodontic community needs to place more emphasis on the utilization of AI, the provision of evidence-based guidelines, and the implementation of AI models.
63. Automated detection and labelling of teeth and small edentulous regions on Cone-Beam Computed Tomography using Convolutional Neural Networks. J Dent 2022;122:104139. DOI: 10.1016/j.jdent.2022.104139.
64.
DCP: Prediction of Dental Caries Using Machine Learning in Personalized Medicine. Appl Sci (Basel) 2022. [DOI: 10.3390/app12063043]
Abstract
Dental caries is an infectious disease that deteriorates the tooth structure, with tooth cavities as the most common result. Classified as one of the most prevalent oral health issues, dental caries has been the subject of research aimed at early detection because of the pain and cost of treatment it entails. Medical research in oral healthcare has limitations, such as the considerable funds and time required; therefore, artificial intelligence has been used in recent years to develop models that can predict the risk of dental caries. The data used in our study were collected from a children's oral health survey conducted in 2018 by the Korean Center for Disease Control and Prevention. Several machine learning algorithms were applied to these data, and their performances were evaluated using accuracy, F1-score, precision, and recall. Random forest achieved the highest performance among the evaluated methods, with an accuracy of 92%, F1-score of 90%, precision of 94%, and recall of 87%. These results show that machine learning can assist dental professionals in decision making for the early detection and treatment of dental caries.
65.
Saric R, Kevric J, Hadziabdic N, Osmanovic A, Kadic M, Saracevic M, Jokic D, Rajs V. Dental Age Assessment based on CBCT Images using Machine Learning Algorithms. Forensic Sci Int 2022; 334:111245. [DOI: 10.1016/j.forsciint.2022.111245]
66.
Görürgöz C, Orhan K, Bayrakdar IS, Çelik Ö, Bilgir E, Odabaş A, Aslan AF, Jagtap R. Performance of a convolutional neural network algorithm for tooth detection and numbering on periapical radiographs. Dentomaxillofac Radiol 2022; 51:20210246. [PMID: 34623893] [PMCID: PMC8925875] [DOI: 10.1259/dmfr.20210246]
Abstract
OBJECTIVES The present study aimed to evaluate the performance of a Faster Region-based Convolutional Neural Network (R-CNN) algorithm for tooth detection and numbering on periapical images. METHODS A data set of 1686 randomly selected periapical radiographs was collected retrospectively. A pre-trained model (GoogLeNet Inception v3 CNN) was employed for pre-processing, and transfer learning techniques were applied for training. The algorithm consisted of: (1) a jaw classification model, (2) region detection models, and (3) a final algorithm using all models. The sensitivity, precision, true-positive rate, and false-positive/negative rates were computed from a confusion matrix to analyze the performance of the algorithm. RESULTS An artificial intelligence algorithm (CranioCatch, Eskisehir, Turkey) based on Faster R-CNN Inception architecture was designed to automatically detect and number teeth on periapical images. Of 864 teeth in 156 periapical radiographs in the test data set, 668 were correctly numbered. The F1 score, precision, and sensitivity were 0.8720, 0.7812, and 0.9867, respectively. CONCLUSION The study demonstrated the potential accuracy and efficiency of the CNN algorithm for detecting and numbering teeth. Deep learning-based methods can help clinicians reduce workloads, improve dental records, and reduce turnaround time for urgent cases. This architecture might also contribute to forensic science.
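As a quick consistency check, the F1 score reported in this abstract follows from the stated precision and sensitivity, since F1 is their harmonic mean. A minimal sketch using only the figures quoted above:

```python
# Check that the reported F1 score (0.8720) is the harmonic mean of the
# reported precision (0.7812) and sensitivity/recall (0.9867).
precision = 0.7812
sensitivity = 0.9867

f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(round(f1, 4))  # ~0.872, matching the reported F1 score
```

The same identity can be used to sanity-check any study that reports all three metrics.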
Affiliation(s)
- Cansu Görürgöz
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Bursa Uludağ University, Bursa, Turkey
- Elif Bilgir
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Alper Odabaş
- Department of Mathematics and Computer Science, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Faruk Aslan
- Department of Mathematics and Computer Science, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Rohan Jagtap
- Division of Oral & Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, Mississippi, USA
67.
Ying S, Wang B, Zhu H, Liu W, Huang F. Caries Segmentation on Tooth X-ray Images with a Deep Network. J Dent 2022; 119:104076. [PMID: 35218876] [DOI: 10.1016/j.jdent.2022.104076]
Abstract
OBJECTIVES Deep learning has been a promising technology in many biomedical applications. In this study, a deep network was proposed for caries segmentation on clinically collected tooth X-ray images. METHODS The proposed network inherited the skip-connection characteristic of the widely used U-shaped network, and creatively adopted vision Transformer, dilated convolution, and feature pyramid fusion methods to enhance the multi-scale and global feature extraction capability. It was then trained on the clinically self-collected and augmented tooth X-ray image dataset, and the dice similarity and pixel classification precision were calculated to evaluate the network's performance. RESULTS Experimental results revealed an average dice similarity of 0.7487 and an average pixel classification precision of 0.7443 on the test dataset, which outperformed the compared networks such as UNet, Trans-UNet, and Swin-UNet, demonstrating the remarkable improvement of the proposed network. CONCLUSIONS This study contributed to automatic caries segmentation using a deep network and highlighted its potential clinical utility.
Affiliation(s)
- Shunv Ying
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Clinical Research Center for Oral Diseases of Zhejiang Province, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, 310006, China
- Benwu Wang
- College of Metrology & Measurement Engineering, China Jiliang University, Hangzhou, 310018, China
- Haihua Zhu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Clinical Research Center for Oral Diseases of Zhejiang Province, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, 310006, China
- Wei Liu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Clinical Research Center for Oral Diseases of Zhejiang Province, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, 310006, China
- Feng Huang
- School of Mechanical & Energy Engineering, Zhejiang University of Science & Technology, Hangzhou, 310023, China; College of Metrology & Measurement Engineering, China Jiliang University, Hangzhou, 310018, China
68.
Artificial Intelligence: A New Diagnostic Software in Dentistry: A Preliminary Performance Diagnostic Study. Int J Environ Res Public Health 2022; 19:1728. [PMID: 35162751] [PMCID: PMC8835112] [DOI: 10.3390/ijerph19031728]
Abstract
Background: Artificial intelligence (AI) has taken hold in public health because more and more practitioners are looking to make diagnoses using technology that allows them to work faster and more accurately, reducing costs and the number of medical errors. Methods: In the present study, 120 panoramic X-rays (OPGs) were randomly selected from the Department of Oral and Maxillofacial Sciences of Sapienza University of Rome, Italy. The OPGs were acquired and analyzed using Apox, which takes a panoramic X-ray and automatically returns the dental formula and the presence of dental implants, prosthetic crowns, fillings, and root remnants. A descriptive analysis was performed presenting the categorical variables as absolute and relative frequencies. Results: In total, there were 2195 true positive (TP) values (19.06%); 8908 true negative (TN) (77.34%); 132 false positive (FP) (1.15%); and 283 false negative (FN) (2.46%). The overall sensitivity was 0.89, while the overall specificity was 0.98. Conclusions: The present study shows the latest achievements in dentistry, analyzing the application and credibility of a new diagnostic method to improve dentists' work and patient care.
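The reported sensitivity and specificity can be reproduced from the confusion-matrix counts given in this abstract (TP = 2195, TN = 8908, FP = 132, FN = 283). A minimal sketch:

```python
# Recompute overall sensitivity and specificity from the confusion-matrix
# counts reported in the abstract.
tp, tn, fp, fn = 2195, 8908, 132, 283

sensitivity = tp / (tp + fn)  # ~0.886, reported as 0.89
specificity = tn / (tn + fp)  # ~0.985, reported as 0.98

total = tp + tn + fp + fn     # 11518 classifications in total
print(sensitivity, specificity, total)
```

The relative frequencies quoted above (e.g. 19.06% for TP) are each count divided by this total of 11518.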
69.
Automatic detection and segmentation of morphological changes of the maxillary sinus mucosa on cone-beam computed tomography images using a three-dimensional convolutional neural network. Clin Oral Investig 2022; 26:3987-3998. [DOI: 10.1007/s00784-021-04365-x]
70.
A U-Net Approach to Apical Lesion Segmentation on Panoramic Radiographs. Biomed Res Int 2022; 2022:7035367. [PMID: 35075428] [PMCID: PMC8783705] [DOI: 10.1155/2022/7035367]
Abstract
The purpose of this paper was to assess the performance of an artificial intelligence (AI) algorithm based on a deep convolutional neural network (D-CNN) model for the segmentation of apical lesions on dental panoramic radiographs. A total of 470 anonymized panoramic radiographs were used to develop the D-CNN AI model, based on the U-Net algorithm (CranioCatch, Eskisehir, Turkey), for the segmentation of apical lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Eskisehir Osmangazi University. A U-Net implemented with PyTorch (version 1.4.0) was used for the segmentation. In the test data set, the AI model segmented 63 periapical lesions on 47 panoramic radiographs. The sensitivity, precision, and F1-score for segmentation of periapical lesions at a 70% IoU threshold were 0.92, 0.84, and 0.88, respectively. AI systems have the potential to overcome such clinical problems and may facilitate the assessment of periapical pathology on panoramic radiographs.
71.
Wen H, Wu W, Fan F, Liao P, Chen H, Zhang Y, Deng Z, Lv W. Human identification performed with skull's sphenoid sinus based on deep learning. Int J Legal Med 2022; 136:1067-1074. [PMID: 35022840] [DOI: 10.1007/s00414-021-02761-2]
Abstract
Human identification plays a significant role in the investigation of disasters and criminal cases, and it can be achieved quickly and efficiently from 3D sphenoid sinus models using customized convolutional neural networks. In this retrospective study, a deep learning neural network was proposed to achieve human identification from 1475 noncontrast thin-slice CT scans. A total of 732 patients were retrieved and studied (82% for model training and 18% for testing). By establishing an individual recognition framework, anonymous sphenoid sinus models were matched and cross-tested, and the performance of the framework was evaluated on the test set using the recognition rate, ROC curve, and identification speed. Finally, manual matching was performed based on the framework results in the test set. Of the 732 subjects (mean age 46.45 years ± 14.92 (SD); 349 women), 600 were used for training and 132 for testing. The automatic human identification achieved Rank-1 and Rank-5 accuracy values of 93.94% and 99.24%, respectively, on the test set. In addition, all identifications were completed within 55 s, demonstrating the inference speed on the test set. We used the comparison results of the MVSS-Net to exclude sphenoid sinus models with low similarity and carried out traditional visual comparisons of the CT anatomical aspects of the sphenoid sinuses of 132 individuals, with an accuracy of 100%. The customized deep learning framework achieves reliable and fast human identification based on the 3D sphenoid sinus and can assist forensic radiologists in accurate human identification.
Affiliation(s)
- Hanjie Wen
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Wei Wu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Fei Fan
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Peixi Liao
- Department of Scientific Research and Education, The Sixth People's Hospital of Chengdu, Chengdu, 610065, People's Republic of China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Yi Zhang
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Zhenhua Deng
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Weiqiang Lv
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
72.
Development of an artificial intelligence-based algorithm to classify images acquired with an intraoral scanner of individual molar teeth into three categories. PLoS One 2022; 17:e0261870. [PMID: 34995298] [PMCID: PMC8741029] [DOI: 10.1371/journal.pone.0261870]
Abstract
Background
Forensic dentistry identifies deceased individuals by comparing postmortem dental charts, oral-cavity pictures and dental X-ray images with antemortem records. However, conventional forensic dentistry methods are time-consuming and thus unable to rapidly identify large numbers of victims following a large-scale disaster.
Objective
Our goal is to automate the dental filing process by using intraoral scanner images. In this study, we generated and evaluated an artificial intelligence-based algorithm that classified images of individual molar teeth into three categories: (1) full metallic crown (FMC); (2) partial metallic restoration (In); or (3) sound tooth, carious tooth or non-metallic restoration (CNMR).
Methods
A pre-trained model was created using oral-cavity pictures from patients. Then, the algorithm was generated through transfer learning and training with images acquired from cadavers by intraoral scanning. Cross-validation was performed to reduce bias. The ability of the model to classify molar teeth into the three categories (FMC, In or CNMR) was evaluated using four criteria: precision, recall, F-measure and overall accuracy.
Results
The average value (variance) was 0.952 (0.000140) for recall, 0.957 (0.0000614) for precision, 0.952 (0.000145) for F-measure, and 0.952 (0.000142) for overall accuracy when the algorithm was used to classify images of molar teeth acquired from cadavers by intraoral scanning.
Conclusion
We have created an artificial intelligence-based algorithm that analyzes images acquired with an intraoral scanner and classifies molar teeth into one of three types (FMC, In or CNMR) based on the presence/absence of metallic restorations. Furthermore, the accuracy of the algorithm reached about 95%. This algorithm was constructed as a first step toward the development of an automated system that generates dental charts from images acquired by an intraoral scanner. The availability of such a system would greatly increase the efficiency of personal identification in the event of a major disaster.
73.
do Nascimento Gerhardt M, Shujaat S, Jacobs R. AIM in Dentistry. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_319]
74.
Putra RH, Doi C, Yoda N, Astuti ER, Sasaki K. Current applications and development of artificial intelligence for digital dental radiography. Dentomaxillofac Radiol 2022; 51:20210197. [PMID: 34233515] [PMCID: PMC8693331] [DOI: 10.1259/dmfr.20210197]
Abstract
In the last few years, artificial intelligence (AI) research has been rapidly developing and emerging in the field of dental and maxillofacial radiology. Dental radiography, commonly used in daily practice, provides an incredibly rich resource for AI development and has attracted many researchers to develop applications for various purposes. This study reviewed the applicability of AI to dental radiography based on current studies. Online searches of the PubMed and IEEE Xplore databases, up to December 2020, and subsequent manual searches were performed. We then categorized the applications of AI according to similarity of purpose: diagnosis of dental caries, periapical pathologies, and periodontal bone loss; cyst and tumor classification; cephalometric analysis; screening of osteoporosis; tooth recognition and forensic odontology; dental implant system recognition; and image quality enhancement. Current developments of AI methodology in each of these applications are discussed. Although most of the reviewed studies demonstrated great potential for AI application in dental radiography, further development is still needed before implementation in clinical routine, owing to several challenges and limitations, such as lack of data set size justification and unstandardized reporting formats. Considering these limitations and challenges, future AI research in dental radiography should follow standardized reporting formats in order to align research designs and enhance the impact of AI development globally.
Affiliation(s)
- Chiaki Doi
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
- Nobuhiro Yoda
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
- Eha Renwi Astuti
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Jl. Mayjen Prof. Dr. Moestopo no 47, Surabaya, Indonesia
- Keiichi Sasaki
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
75.
Kato A, Hori M, Hori T, Jincho M, Sekine H, Kawai T. Generating Training Data Using Python Scripts for Automatic Extraction of Landmarks from Tooth Models. J Hard Tissue Biol 2022. [DOI: 10.2485/jhtb.31.95]
Affiliation(s)
- Akiko Kato
- Department of Oral Anatomy, School of Dentistry, Aichi Gakuin University
- Miki Hori
- Department of Dental Materials Science, School of Dentistry, Aichi Gakuin University
- Tadasuke Hori
- Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University
- Makoto Jincho
- Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University
- Hironao Sekine
- Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University
- Tatsushi Kawai
- Department of Dental Materials Science, School of Dentistry, Aichi Gakuin University
76.
Nino-Barrera J, Alzate-Mendoza D, Olaya-Abril C, Gamboa-Martinez LF, Guamán-Laverde M, Lagos-Rosero N, Romero-Diaz AC, Duran N, Vanegas-Hoyose L. Atypical Radicular Anatomy in Permanent Human Teeth: A Systematic Review. Crit Rev Biomed Eng 2022; 50:19-34. [PMID: 35997108] [DOI: 10.1615/critrevbiomedeng.2022043742]
Abstract
The aim of the present study is to classify and quantify the anatomical variations of teeth, in terms of form and number of root canals, reported in human teeth using previously proposed classification systems. An electronic (PubMed) and manual search was performed to identify case reports noting any of the anatomical variations. Each alteration was studied independently. The electronic search used the following keywords: anatomical aberration, root canal, permanent dentition, case report, C-shaped canal, dens invaginatus, palato-radicular groove, palato-gingival groove, radix entomolaris, dental fusion, dental gemination, taurodontism, dilaceration. The initial search revealed 1497 papers, of which 938 were excluded after analyzing the titles and abstracts, leaving 559 potential papers. Of those, 140 articles did not meet the inclusion criteria, so 419 papers were considered for the final review. We found that the mandibular first premolar had the highest prevalence of C-shaped canals. Dens invaginatus was most frequently found in the mandibular lateral incisor. Taurodontism was most prevalent in the maxillary first molar and the mandibular first molar. Dilaceration was not clearly associated with a particular tooth. The classification systems used in this review allowed a better understanding and analysis of the many anatomical variations present in teeth. The shape variations most often found were dens invaginatus and radix entomolaris, and the most frequently reported anatomical variation was in the number of canals.
Affiliation(s)
- Javier Nino-Barrera
- Faculty of Dentistry, School of Endodontics, Universidad Nacional de Colombia, Bogota, Colombia; Department of Endodontics, Universidad El Bosque, School of Dentistry, Bogota, Colombia; Research Group on Biomechanics, Universidad Nacional de Colombia, Bogotá, Colombia
- Diana Alzate-Mendoza
- Program Director, Department of Endodontics, Universidad El Bosque, School of Dentistry, Bogota, Colombia
- Carolina Olaya-Abril
- Professor, Department of Endodontics, Universidad El Bosque, School of Dentistry, Bogota, Colombia
- Mishell Guamán-Laverde
- Professor, Department of Endodontics, Universidad El Bosque, School of Dentistry, Bogota, Colombia
77.
Carrillo-Perez F, Pecho OE, Morales JC, Paravina RD, Della Bona A, Ghinea R, Pulgar R, Pérez MDM, Herrera LJ. Applications of artificial intelligence in dentistry: A comprehensive review. J Esthet Restor Dent 2021; 34:259-280. [PMID: 34842324] [DOI: 10.1111/jerd.12844]
Abstract
OBJECTIVE To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with a broad insight into the advances that these technologies and tools have produced, paying special attention to esthetic dentistry and color research. MATERIALS AND METHODS The comprehensive review was conducted in the MEDLINE/PubMed, Web of Science, and Scopus databases for papers published in English in the last 20 years. RESULTS Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS The reviewed work reports outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry lies in integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from these advances through models allowing a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE The use of AI and ML has an increasing impact on the dental profession and complements the development of digital technologies and tools, with wide application in treatment planning and esthetic dentistry procedures.
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
- Oscar E Pecho
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Juan Carlos Morales
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
- Rade D Paravina
- Department of Restorative Dentistry and Prosthodontics, School of Dentistry, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Alvaro Della Bona
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Razvan Ghinea
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Rosa Pulgar
- Department of Stomatology, Campus Cartuja, University of Granada, Granada, Spain
- María Del Mar Pérez
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
78.
Chen J, Zeb A, Yang S, Zhang D, Nanehkaran YA. Automatic identification of commodity label images using lightweight attention network. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06081-9]
79.
Başaran M, Çelik Ö, Bayrakdar IS, Bilgir E, Orhan K, Odabaş A, Aslan AF, Jagtap R. Diagnostic charting of panoramic radiography using deep-learning artificial intelligence system. Oral Radiol 2021; 38:363-369. [PMID: 34611840] [DOI: 10.1007/s11282-021-00572-0]
Abstract
OBJECTIVES The goal of this study was to develop and evaluate the performance of a new deep-learning (DL) artificial intelligence (AI) model for diagnostic charting in panoramic radiography. METHODS One thousand eighty-four anonymous dental panoramic radiographs were labeled by two dento-maxillofacial radiologists for ten different dental situations: crown, pontic, root-canal treated tooth, implant, implant-supported crown, impacted tooth, residual root, filling, caries, and dental calculus. The AI model CranioCatch, developed in Eskişehir, Turkey and based on a deep CNN method, was evaluated. A Faster R-CNN Inception v2 (COCO) model implemented with the TensorFlow library was used for model development. AI model performance was assessed with sensitivity, precision, and F1 scores. RESULTS When the performance of the proposed AI model for detecting dental conditions in panoramic radiographs was evaluated, the best sensitivity values were obtained for crown, implant, and impacted tooth, at 0.9674, 0.9615, and 0.9658, respectively. The worst sensitivity values were obtained for pontic, caries, and dental calculus, at 0.7738, 0.3026, and 0.0934, respectively. The best precision values were obtained for pontic, implant, and implant-supported crown, at 0.8783, 0.9259, and 0.8947, respectively. The worst precision values were obtained for residual root, caries, and dental calculus, at 0.6764, 0.5096, and 0.1923, respectively. The most successful F1 scores were obtained for implant, crown, and implant-supported crown, at 0.9433, 0.9122, and 0.8947, respectively. CONCLUSION The proposed AI model shows promising results in detecting dental conditions in panoramic radiographs, except for caries and dental calculus. As AI models improve across all areas of dental radiology, we predict that they will help physicians in panoramic diagnosis and treatment planning, as well as in digital-based student education, especially during the pandemic period.
Affiliation(s)
- Melike Başaran
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kütahya Health Science University, Kütahya, Turkey
- Özer Çelik
- Department of Mathematics-Computer, Eskisehir Osmangazi University Faculty of Science, Eskişehir, Turkey; Eskisehir Osmangazi University Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskişehir, Turkey
- Ibrahim Sevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, 26240, Eskişehir, Turkey; Eskisehir Osmangazi University Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskişehir, Turkey
- Elif Bilgir
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
- Alper Odabaş
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskişehir, Turkey
- Ahmet Faruk Aslan
- Department of Mathematics-Computer, Eskisehir Osmangazi University Faculty of Science, Eskişehir, Turkey
- Rohan Jagtap
- Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS, USA
Collapse
|
80
|
Kumar A, Bhadauria HS, Singh A. Descriptive analysis of dental X-ray images using various practical methods: A review. PeerJ Comput Sci 2021; 7:e620. [PMID: 34616881 PMCID: PMC8459782 DOI: 10.7717/peerj-cs.620]
Abstract
In dentistry, practitioners interpret various dental X-ray imaging modalities to identify tooth-related problems, abnormalities, and changes in tooth structure. Dental imaging can also be helpful in the field of biometrics. Human dental image analysis is a challenging and time-consuming process because of the unspecified and uneven structures of various teeth, so manual investigation of dental abnormalities remains demanding. Automation of dental image segmentation and examination is therefore urgently needed to ensure error-free diagnosis and better treatment planning. In this article, we provide a comprehensive survey of dental image segmentation and analysis, investigating more than 130 research works across various dental imaging modalities, such as the various modes of X-ray, CT (computed tomography), and CBCT (cone-beam computed tomography). The state-of-the-art research is classified into three major categories, namely image processing, machine learning, and deep learning approaches, and their respective advantages and limitations are identified and discussed. The survey presents extensive details of the state-of-the-art methods, including image modalities, pre-processing applied for image enhancement, performance measures, and datasets utilized.
|
81
|
Duan W, Chen Y, Zhang Q, Lin X, Yang X. Refined tooth and pulp segmentation using U-Net in CBCT image. Dentomaxillofac Radiol 2021; 50:20200251. [PMID: 33444070 PMCID: PMC8404523 DOI: 10.1259/dmfr.20200251]
Abstract
OBJECTIVES The aim of this study was to extract any single tooth from a CBCT scan and perform tooth and pulp cavity segmentation, in order to visualize and understand internal anatomical relationships before undertaking endodontic therapy. METHODS We propose a two-phase deep learning solution for accurate tooth and pulp cavity segmentation. First, the bounding box of a single tooth is extracted automatically for both single-rooted teeth (ST) and multi-rooted teeth (MT), using a Region Proposal Network (RPN) with a Feature Pyramid Network (FPN) applied to the panoramic perspective. Second, a U-Net model is applied iteratively for refined tooth and pulp segmentation, handling ST and MT separately. To cope with noisy data and annotation problems for the dental pulp, we design a loss function with a smoothness penalty. Furthermore, multi-view data augmentation is proposed to address the small-dataset and morphological-structure challenges. RESULTS The experimental results show that the proposed method obtains an average Dice score of 95.7% for ST, 96.2% for MT, 88.6% for the pulp of ST, and 87.6% for the pulp of MT. CONCLUSIONS This study proposed a two-phase deep learning solution for quickly and accurately extracting any single tooth from a CBCT scan and performing tooth and pulp cavity segmentation. The 3D reconstruction results completely show the morphology of teeth and pulps, and also provide valuable data for further research and clinical practice.
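The Dice scores quoted above are set-overlap ratios; a minimal sketch of how Dice (and the related IoU) are computed from binary segmentation masks, using toy masks rather than the paper's data:

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union for two
    binary masks represented as sets of voxel coordinates."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dice, iou

# Toy 2D example: ground truth is a 10x10 square, prediction is shifted by 1.
truth = {(x, y) for x in range(10) for y in range(10)}
pred = {(x, y) for x in range(1, 11) for y in range(10)}
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 3), round(iou, 3))  # prints: 0.9 0.818
```

Dice and IoU measure the same overlap on different scales, since Dice = 2*IoU/(1 + IoU); the paper's 95.7% Dice for ST corresponds to an IoU of roughly 91.8%.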
Affiliation(s)
- Wei Duan: College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Yufei Chen: College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Qi Zhang: Department of Endodontics, School and Hospital of Stomatology, Tongji University, Shanghai Engineering Research Center of Tooth Restoration and Regeneration, Shanghai 200072, China
- Xiang Lin: Department of Endodontics, School and Hospital of Stomatology, Tongji University, Shanghai Engineering Research Center of Tooth Restoration and Regeneration, Shanghai 200072, China
- Xiaoyu Yang: College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
|
82
|
Chen Q, Huang J, Salehi HS, Zhu H, Lian L, Lai X, Wei K. Hierarchical CNN-based occlusal surface morphology analysis for classifying posterior tooth type using augmented images from 3D dental surface models. Comput Methods Programs Biomed 2021; 208:106295. [PMID: 34329895 DOI: 10.1016/j.cmpb.2021.106295]
Abstract
OBJECTIVE 3D digitization of dental models is growing in popularity in dental applications. Classifying tooth type from a single 3D point cloud model, without the aid of the relative positions of neighboring teeth, is still a challenging task. METHODS In this paper, 8-class posterior tooth type classification (first premolar, second premolar, first molar, and second molar, in the maxilla and mandible respectively) was investigated through convolutional neural network (CNN)-based occlusal surface morphology analysis. The 3D occlusal surface was transformed into a depth image for basic CNN-based classification. Considering the logical hierarchy of tooth categories, a hierarchical classification structure was proposed to decompose the 8-class classification task into two stages of cascaded classification subtasks. Image augmentation, including traditional geometric transformations and deep convolutional generative adversarial networks (DCGANs), was applied to each subnetwork and to the cascaded network. RESULTS The results indicate that combining traditional and DCGAN-based augmented images to train CNN models improves classification performance. We achieve an overall accuracy of 91.35%, macro-precision of 91.49%, macro-recall of 91.29%, and macro-F1 of 0.9139 for the 8-class posterior tooth type classification, outperforming other deep learning models. Grad-CAM results demonstrate that a CNN model trained on our augmented images focuses on smaller, more informative regions, giving better generality; anatomic landmarks such as cusps, fossae, and grooves serve as the important regions for the cascaded classification model. CONCLUSION This work shows that a two-stage hierarchical structure built from basic CNNs achieves the best posterior tooth type classification performance on 3D models without relative position information. The proposed method is easy to train and learns discriminative features from small image regions.
Affiliation(s)
- Qingguang Chen: School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
- Junchao Huang: School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
- Hassan S Salehi: Department of Electrical and Computer Engineering, California State University, Chico, 95929, United States
- Haihua Zhu: Hospital of Stomatology of Zhejiang University, Hangzhou, 310018, China
- Luya Lian: Hospital of Stomatology of Zhejiang University, Hangzhou, 310018, China
- Xiaomin Lai: School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
- Kaihua Wei: School of Automation, Hangzhou Dianzi University, 310018, Hangzhou, China
|
83
|
Bilgir E, Bayrakdar İŞ, Çelik Ö, Orhan K, Akkoca F, Sağlam H, Odabaş A, Aslan AF, Ozcetin C, Kıllı M, Rozylo-Kalinowska I. An artificial intelligence approach to automatic tooth detection and numbering in panoramic radiographs. BMC Med Imaging 2021; 21:124. [PMID: 34388975 PMCID: PMC8361658 DOI: 10.1186/s12880-021-00656-7]
Abstract
Background Panoramic radiography is an imaging method for displaying maxillary and mandibular teeth together with their supporting structures. Panoramic radiography is frequently used in dental imaging due to its relatively low radiation dose, short imaging time, and low burden to the patient. We verified the diagnostic performance of an artificial intelligence (AI) system based on a deep convolutional neural network method to detect and number teeth on panoramic radiographs. Methods The data set included 2482 anonymized panoramic radiographs from adults from the archive of Eskisehir Osmangazi University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology. A Faster R-CNN Inception v2 model was used to develop an AI algorithm (CranioCatch, Eskisehir, Turkey) to automatically detect and number teeth on panoramic radiographs. Human observation and AI methods were compared on a test data set consisting of 249 panoramic radiographs. True positive, false positive, and false negative rates were calculated for each quadrant of the jaws. The sensitivity, precision, and F-measure values were estimated using a confusion matrix. Results The total numbers of true positive, false positive, and false negative results were 6940, 250, and 320 for all quadrants, respectively. Consequently, the estimated sensitivity, precision, and F-measure were 0.9559, 0.9652, and 0.9606, respectively. Conclusions The deep convolutional neural network system was successful in detecting and numbering teeth. Clinicians can use AI systems to detect and number teeth on panoramic radiographs, which may eventually replace evaluation by human observers and support decision making.
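The reported metrics follow directly from the stated true positive, false positive, and false negative totals; a quick check using the standard confusion-matrix definitions (this is not the authors' code):

```python
# Totals reported for all quadrants in the test set.
tp, fp, fn = 6940, 250, 320

sensitivity = tp / (tp + fn)   # share of actual teeth that were detected
precision = tp / (tp + fp)     # share of detections that were actual teeth
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"{sensitivity:.4f} {precision:.4f} {f1:.4f}")  # 0.9559 0.9652 0.9606
```

The three printed values match the sensitivity, precision, and F-measure reported in the abstract.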
Affiliation(s)
- Elif Bilgir: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- İbrahim Şevki Bayrakdar: Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir, Turkey
- Özer Çelik: Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir, Turkey
- Kaan Orhan: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- Fatma Akkoca: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Hande Sağlam: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Alper Odabaş: Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Faruk Aslan: Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Musa Kıllı: Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Ingrid Rozylo-Kalinowska: Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, ul. Doktora Witolda Chodźki 6, 20-093, Lublin, Poland
|
84
|
Gender classification on digital dental x-ray images using deep convolutional neural network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102939]
|
85
|
Tsujimoto M, Teramoto A, Dosho M, Tanahashi S, Fukushima A, Ota S, Inui Y, Matsukiyo R, Obama Y, Toyama H. Automated classification of increased uptake regions in bone single-photon emission computed tomography/computed tomography images using three-dimensional deep convolutional neural network. Nucl Med Commun 2021; 42:877-883. [PMID: 33741850 DOI: 10.1097/mnm.0000000000001409]
Abstract
OBJECTIVE This study proposes an automated classification of benign and malignant findings in regions of increased uptake in bone single-photon emission computed tomography/computed tomography (SPECT/CT) using a three-dimensional deep convolutional neural network (3D-DCNN). METHODS We examined 100 regions from 35 patients, classified as benign or malignant on the basis of other examinations and follow-up. First, SPECT and CT images were extracted at the same coordinates in a cube whose side was twice the diameter of the increased-uptake region in the SPECT images. Next, the extracted images were input into the DCNN to obtain the probabilities of benignity and malignancy, and the outputs of the DCNNs for the SPECT and CT images were integrated to provide the overall result. To validate the efficacy of the proposed method, the malignancy of all images was assessed using leave-one-out cross-validation, and the overall classification accuracy was evaluated. Furthermore, we compared the analysis results of SPECT/CT, SPECT alone, CT alone, and whole-body planar scintigraphy for the increased-uptake regions at the same sites. RESULTS The extracted volumes of interest comprised 50 benign and 50 malignant regions. The overall classification accuracy of SPECT alone and CT alone was 73% and 68%, respectively, while that of whole-body planar analysis at the same sites was 74%. When SPECT/CT images were used, the overall classification accuracy was the highest (80%), with accuracies of 82% for malignant and 78% for benign regions. CONCLUSIONS This study suggests that a DCNN can classify benign and malignant regions directly, without explicit extraction of features from SPECT/CT accumulation patterns.
Affiliation(s)
- Seiichiro Ota: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Yoshitaka Inui: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Ryo Matsukiyo: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Yuuki Obama: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Hiroshi Toyama: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
|
86
|
Deep learning-based evaluation of the relationship between mandibular third molar and mandibular canal on CBCT. Clin Oral Investig 2021; 26:981-991. [PMID: 34312683 DOI: 10.1007/s00784-021-04082-5]
Abstract
OBJECTIVES The objective of our study was to develop and validate a deep learning approach based on convolutional neural networks (CNNs) for automatic detection of the mandibular third molar (M3) and the mandibular canal (MC), and for evaluation of the relationship between them on CBCT. MATERIALS AND METHODS A dataset of 254 CBCT scans annotated by radiologists was used for training, validation, and testing. The proposed approach consisted of two modules: (1) detection and pixel-wise segmentation of the M3 and MC based on U-Nets; (2) M3-MC relation classification based on ResNet-34. Performance was evaluated on the test set, and the classification performance of our approach was compared with that of two residents in oral and maxillofacial radiology. RESULTS For segmentation, the M3 had a mean Dice similarity coefficient (mDSC) of 0.9730 and a mean intersection over union (mIoU) of 0.9606; the MC had an mDSC of 0.9248 and an mIoU of 0.9003. The classification models achieved a mean sensitivity of 90.2%, a mean specificity of 95.0%, and a mean accuracy of 93.3%, on par with the residents. CONCLUSIONS Our CNN-based approach demonstrated encouraging performance for the automatic detection and evaluation of the M3 and MC on CBCT. Clinical relevance An automated CNN-based approach for detection and evaluation of the M3 and MC on CBCT has been established, which can be used to improve diagnostic efficiency and facilitate precise diagnosis and treatment of the M3.
|
87
|
Ezhov M, Gusarev M, Golitsyna M, Yates JM, Kushnerev E, Tamimi D, Aksoy S, Shumilov E, Sanders A, Orhan K. Clinically applicable artificial intelligence system for dental diagnosis with CBCT. Sci Rep 2021; 11:15006. [PMID: 34294759 PMCID: PMC8298426 DOI: 10.1038/s41598-021-94093-9]
Abstract
In this study, a novel AI system based on deep learning methods was evaluated to determine its real-time performance in diagnosing anatomical landmarks and pathologies on CBCT images, as well as its clinical effectiveness and safety when used by dentists in a clinical setting. The system consists of five modules: an ROI-localization module (segmentation of teeth and jaws), a tooth-localization and numeration module, a periodontitis module, a caries-localization module, and a periapical-lesion-localization module, all using CNNs based on state-of-the-art architectures. In total, 1346 CBCT scans were used to train the modules. After annotation and model development, the diagnostic capabilities of the resulting Diagnocat AI system were tested. Twenty-four dentists participated in the clinical evaluation: 30 CBCT scans were examined by two groups of dentists, one aided by Diagnocat and the other unaided. Overall sensitivity and specificity for the aided and unaided groups were calculated as an aggregate of all conditions. Sensitivity was 0.8537 for the aided group and 0.7672 for the unaided group, while specificity was 0.9672 and 0.9616, respectively; the difference between the groups was statistically significant (p = 0.032). This study showed that the proposed AI system significantly improved the diagnostic capabilities of dentists.
Affiliation(s)
- Julian M Yates: Division of Dentistry, School of Medical Sciences, The University of Manchester, Manchester, UK
- Evgeny Kushnerev: Division of Dentistry, School of Medical Sciences, The University of Manchester, Manchester, UK
- Secil Aksoy: Department of DentoMaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Kaan Orhan: Department of DentoMaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06500, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey
|
88
|
Ortiz AG, Soares GH, da Rosa GC, Biazevic MGH, Michel-Crosato E. A pilot study of an automated personal identification process: Applying machine learning to panoramic radiographs. Imaging Sci Dent 2021; 51:187-193. [PMID: 34235064 PMCID: PMC8219452 DOI: 10.5624/isd.20200324]
Abstract
Purpose This study aimed to assess the usefulness of machine learning and automation techniques to match pairs of panoramic radiographs for personal identification. Materials and Methods Two hundred panoramic radiographs from 100 patients (50 males and 50 females) were randomly selected from a private radiological service database. Initially, 14 linear and angular measurements of the radiographs were made by an expert. Eight ratio indices derived from the original measurements were applied to a statistical algorithm to match radiographs from the same patients, simulating a semi-automated personal identification process. Subsequently, measurements were automatically generated using a deep neural network for image recognition, simulating a fully automated personal identification process. Results Approximately 85% of the radiographs were correctly matched by the automated personal identification process. In a limited number of cases, the image recognition algorithm identified 2 potential matches for the same individual. No statistically significant differences were found between measurements performed by the expert on panoramic radiographs from the same patients. Conclusion Personal identification might be performed with the aid of image recognition algorithms and machine learning techniques. This approach will likely facilitate the complex task of personal identification by performing an initial screening of radiographs and matching ante-mortem and post-mortem images from the same individuals.
Affiliation(s)
- Adrielly Garcia Ortiz: Department of Community Dentistry, School of Dentistry, University of São Paulo, São Paulo, Brazil
- Gustavo Hermes Soares: Department of Community Dentistry, School of Dentistry, University of São Paulo, São Paulo, Brazil
- Gabriela Cauduro da Rosa: Department of Community Dentistry, School of Dentistry, University of São Paulo, São Paulo, Brazil
- Edgard Michel-Crosato: Department of Community Dentistry, School of Dentistry, University of São Paulo, São Paulo, Brazil
|
89
|
Kim D, Choi J, Ahn S, Park E. A smart home dental care system: integration of deep learning, image sensors, and mobile controller. J Ambient Intell Humaniz Comput 2021; 14:1123-1131. [PMID: 34249170 PMCID: PMC8259098 DOI: 10.1007/s12652-021-03366-8]
Abstract
In this study, a home dental care system consisting of an oral image acquisition device and deep learning models for images of maxillary and mandibular teeth is proposed. The presented method not only classifies tooth diseases but also determines whether professional dental treatment is needed (NPDT). A specially designed oral image acquisition device was developed to capture images of the maxillary and mandibular teeth. Two evaluation tasks, tooth disease classification and NPDT classification, were examined using 610 compounded and 5251 tooth images annotated by an experienced dentist with a Doctor of Dental Surgery and another dentist with a Doctor of Dental Medicine. The proposed system achieved accuracies greater than 96% in tooth disease classification and 89% in NPDT classification. Based on these results, we believe the proposed system will allow users to manage their dental health effectively by detecting tooth diseases and providing information on the need for dental treatment. The online version contains supplementary material available at 10.1007/s12652-021-03366-8.
Affiliation(s)
- Dogun Kim: Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Jaeho Choi: Department of Dental Biomaterials Science, Dental Research Institute, Seoul National University, Seoul, Republic of Korea
- Sangyoon Ahn: School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, USA
- Eunil Park: Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea; Department of Interaction Science, Sungkyunkwan University, Seoul, Republic of Korea; Raon Data, Seoul, Republic of Korea
|
90
|
Kurt Bayrakdar S, Orhan K, Bayrakdar IS, Bilgir E, Ezhov M, Gusarev M, Shumilov E. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med Imaging 2021; 21:86. [PMID: 34011314 PMCID: PMC8132372 DOI: 10.1186/s12880-021-00618-z]
Abstract
Background The aim of this study was to evaluate the success of an artificial intelligence (AI) system for implant planning using three-dimensional cone-beam computed tomography (CBCT) images. Methods Seventy-five CBCT images were included in this study. In these images, bone height and thickness were measured in 508 regions requiring implants by a human observer using a manual assessment method in InvivoDental 6.0 (Anatomage Inc., San Jose, CA, USA), and the canals, sinuses, and fossae associated with alveolar bone and missing tooth regions were detected. Subsequently, all evaluations were repeated using a deep convolutional neural network (Diagnocat, Inc., San Francisco, USA). The jaws were separated into mandible and maxilla, and each jaw was grouped into anterior, premolar, and molar regions. The data obtained from the manual and AI methods were compared using Bland-Altman analysis and the Wilcoxon signed-rank test. Results In the bone height measurements, there were no statistically significant differences between AI and manual measurements in the premolar region of the mandible or the premolar and molar regions of the maxilla (p > 0.05). In the bone thickness measurements, there were statistically significant differences between AI and manual measurements in all regions of the maxilla and mandible (p < 0.001). The percentage of correct detection was 72.2% for canals, 66.4% for sinuses/fossae, and 95.3% for missing tooth regions. Conclusions The development of AI systems and their future use in implant planning will both facilitate the work of physicians and serve as a support mechanism in implantology practice.
Affiliation(s)
- Sevda Kurt Bayrakdar: Department of Periodontology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Kaan Orhan: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, 06500, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey
- Ibrahim Sevki Bayrakdar: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey; Eskisehir Osmangazi University Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir, Turkey
- Elif Bilgir: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
|
91
|
Yasa Y, Çelik Ö, Bayrakdar IS, Pekince A, Orhan K, Akarsu S, Atasoy S, Bilgir E, Odabaş A, Aslan AF. An artificial intelligence proposal to automatic teeth detection and numbering in dental bite-wing radiographs. Acta Odontol Scand 2021; 79:275-281. [PMID: 33176533 DOI: 10.1080/00016357.2020.1840624]
Abstract
OBJECTIVES Radiological examination has an important place in dental practice and is frequently used in intraoral imaging. Correctly numbering the teeth on radiographs is a routine practice that takes time for the dentist. This study proposes an automatic detection system for numbering teeth in bitewing images using a Faster Region-based Convolutional Neural Network (R-CNN) method. METHODS The study included 1125 bitewing radiographs of patients who attended the Faculty of Dentistry of Ordu University from 2018 to 2019. Faster R-CNN, an advanced object detection method, was used to identify the teeth, and a confusion matrix was used to evaluate the success of the model. RESULTS The deep CNN system (CranioCatch, Eskisehir, Turkey) was used to detect and number teeth in the bitewing radiographs. Of 715 teeth in 109 bitewing images in the test data set, 697 were numbered correctly. The F1 score, precision, and sensitivity were 0.9515, 0.9293, and 0.9748, respectively. CONCLUSIONS A CNN approach for the analysis of bitewing images shows promise for detecting and numbering teeth. This method can save dentists time by automatically preparing dental charts.
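The implied confusion-matrix counts can be recovered from the figures above; a back-of-envelope sketch using the standard definitions (the false positive count of 53 is inferred from the reported precision, not stated in the abstract):

```python
# Reported: 697 of 715 teeth correctly numbered, precision 0.9293.
tp = 697
fn = 715 - tp                 # missed or misnumbered teeth
sensitivity = tp / (tp + fn)  # 697/715, the reported 0.9748
fp = round(tp / 0.9293 - tp)  # false detections implied by the precision
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(fn, fp, round(f1, 4))   # prints: 18 53 0.9515
```

The recomputed F1 score matches the 0.9515 reported, confirming that the three metrics are mutually consistent.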
Affiliation(s)
- Yasin Yasa: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ordu University, Ordu, Turkey
- Özer Çelik: Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ibrahim Sevki Bayrakdar: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Adem Pekince: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Karabuk University, Karabuk, Turkey
- Kaan Orhan: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
- Serdar Akarsu: Department of Restorative Dentistry, Faculty of Dentistry, Ordu University, Ordu, Turkey
- Samet Atasoy: Department of Restorative Dentistry, Faculty of Dentistry, Ordu University, Ordu, Turkey
- Elif Bilgir: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Alper Odabaş: Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Faruk Aslan: Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
|
92
|
Prados-Privado M, García Villalón J, Blázquez Torres A, Martínez-Martínez CH, Ivorra C. A Validation Employing Convolutional Neural Network for the Radiographic Detection of Absence or Presence of Teeth. J Clin Med 2021; 10:jcm10061186. [PMID: 33809045 PMCID: PMC8001963 DOI: 10.3390/jcm10061186]
Abstract
Dental radiography plays an important role in clinical diagnosis, treatment, and decision making. In recent years, efforts have been made to develop techniques for detecting objects in images. The aim of this study was to detect the absence or presence of teeth using an effective convolutional neural network that reduces calculation times and achieves success rates greater than 95%. A total of 8000 dental panoramic images were collected, and each image and each tooth was categorized independently and manually by two experts with more than three years of experience in general dentistry. The neural network consists of two main stages: object detection and a classification stage that supports it. A Matterport Mask R-CNN was employed for object detection, and a ResNet (with atrous convolution) was employed in the classification stage. The neural model achieved a total loss of 0.76% (accuracy of 99.24%). The architecture used in the present study returned almost perfect accuracy in detecting teeth on images from different devices and across different pathologies and ages.
Affiliation(s)
- María Prados-Privado
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain
- Department of Signal Theory and Communications, Higher Polytechnic School, Universidad de Alcalá de Henares, Ctra. Madrid-Barcelona, Km. 33,600, 28805 Alcala de Henares, Spain
- Department of Continuum Mechanics and Structural Analysis, Higher Polytechnic School, Carlos III University, Avenida de la Universidad 30, Leganés, 28911 Madrid, Spain
- Javier García Villalón
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain
- Antonio Blázquez Torres
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain
- SysOnline, 30001 Murcia, Spain
- Carlos Hugo Martínez-Martínez
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain
- Faculty of Medicine, Universidad Complutense de Madrid, Plaza de Ramón y Cajal, s/n, 28040 Madrid, Spain
- Carlos Ivorra
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain

93
Kılıc MC, Bayrakdar IS, Çelik Ö, Bilgir E, Orhan K, Aydın OB, Kaplan FA, Sağlam H, Odabaş A, Aslan AF, Yılmaz AB. Artificial intelligence system for automatic deciduous tooth detection and numbering in panoramic radiographs. Dentomaxillofac Radiol 2021; 50:20200172. [PMID: 33661699 DOI: 10.1259/dmfr.20200172]
Abstract
OBJECTIVE This study evaluated the use of a deep-learning approach for the automated detection and numbering of deciduous teeth in children as depicted on panoramic radiographs. METHODS AND MATERIALS An artificial intelligence (AI) algorithm (CranioCatch, Eskisehir, Turkey) using Faster R-CNN Inception v2 (COCO) models was developed to automatically detect and number deciduous teeth on pediatric panoramic radiographs. The algorithm was trained and tested on a total of 421 panoramic images. System performance was assessed using a confusion matrix. RESULTS The AI system successfully detected and numbered the deciduous teeth of children as depicted on panoramic radiographs. The sensitivity and precision rates were high: the estimated sensitivity, precision, and F1 score were 0.9804, 0.9571, and 0.9686, respectively. CONCLUSION Deep-learning-based AI models are a promising tool for the automated charting of panoramic dental radiographs of children. In addition to serving as a time-saving measure and an aid to clinicians, AI plays a valuable role in forensic identification.
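The F1 score reported in this abstract is the harmonic mean of sensitivity (recall) and precision; a minimal sanity check of the stated figures:

```python
# F1 as the harmonic mean of precision and recall (sensitivity),
# using the values reported in the abstract above.
sensitivity = 0.9804  # recall
precision = 0.9571

f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(round(f1, 4))  # matches the reported F1 score of 0.9686
```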
Affiliation(s)
- Münevver Coruh Kılıc
- Department of Paediatric Dentistry, Faculty of Dentistry, Ataturk University, Erzurum, Turkey
- Ibrahim Sevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Özer Çelik
- Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Elif Bilgir
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- Ozan Barıs Aydın
- Department of Paediatric Dentistry, Faculty of Dentistry, Ataturk University, Erzurum, Turkey
- Fatma Akkoca Kaplan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Hande Sağlam
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Alper Odabaş
- Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Faruk Aslan
- Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Berhan Yılmaz
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ataturk University, Erzurum, Turkey

94
Heo MS, Kim JE, Hwang JJ, Han SS, Kim JS, Yi WJ, Park IW. Artificial intelligence in oral and maxillofacial radiology: what is currently possible? Dentomaxillofac Radiol 2021; 50:20200375. [PMID: 33197209 PMCID: PMC7923066 DOI: 10.1259/dmfr.20200375] [Received: 08/07/2020] [Revised: 08/28/2020] [Accepted: 08/28/2020]
Abstract
Artificial intelligence, which has been actively applied in a broad range of industries in recent years, is an active area of interest for many researchers. Dentistry is no exception to this trend, and the applications of artificial intelligence are particularly promising in the field of oral and maxillofacial (OMF) radiology. Recent research on artificial intelligence in OMF radiology has mainly used convolutional neural networks, which can perform image classification, detection, segmentation, registration, generation, and refinement. Artificial intelligence systems in this field have been developed for the purposes of radiographic diagnosis, image analysis, forensic dentistry, and image quality improvement. Tremendous amounts of data are needed to achieve good results, and the involvement of OMF radiologists is essential for building accurate and consistent data sets, which is a time-consuming task. For artificial intelligence to be widely used in actual clinical practice in the future, many problems remain to be solved, such as building large, finely labeled open data sets, understanding the judgment criteria of artificial intelligence, and addressing DICOM hacking threats that exploit artificial intelligence. If solutions to these problems emerge as artificial intelligence develops, it is expected to play an important role in the development of automatic diagnosis systems, the establishment of treatment plans, and the fabrication of treatment tools. OMF radiologists, as professionals who thoroughly understand the characteristics of radiographic images, will play a very important role in the development of artificial intelligence applications in this field.
Affiliation(s)
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Republic of Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, Republic of Korea
- Jae-Joon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Yangsan, Republic of Korea
- Sang-Sun Han
- Department of Oral and Maxillofacial Radiology, College of Dentistry, Yonsei University, Seoul, Republic of Korea
- Jin-Soo Kim
- Department of Oral and Maxillofacial Radiology, College of Dentistry, Chosun University, Gwangju, Republic of Korea
- Won-Jin Yi
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Republic of Korea
- In-Woo Park
- Department of Oral and Maxillofacial Radiology, College of Dentistry, Gangneung-Wonju National University, Gangneung, Republic of Korea

95
Pethani F. Promises and perils of artificial intelligence in dentistry. Aust Dent J 2021; 66:124-135. [PMID: 33340123 DOI: 10.1111/adj.12812] [Accepted: 11/25/2020]
Abstract
Artificial intelligence (AI) is a subdiscipline of computer science that has made substantial progress in medicine, and there is a growing body of AI research in dentistry. Dentists should understand the foundational concepts and be able to critically evaluate dental research in AI. Machine learning (ML) is the subfield of AI to which most dental AI research is dedicated. The most prolific area of ML research is automated interpretation of dental imaging; other areas include providing treatment recommendations and predicting future disease and treatment outcomes. The research impact is limited by small datasets that do not harness the positive correlation between very large datasets and ML performance. There is also a need to standardize research methodologies and to use performance metrics appropriate for the clinical context. In addition to research challenges, this article discusses the ethical, legal and logistical considerations associated with implementation in clinical practice, including explainable AI, model bias, data privacy and security. AI promises a novel form of practicing dentistry; however, its effect on patient outcomes is yet to be determined.
Affiliation(s)
- F Pethani
- Sydney Dental School, Faculty of Health and Medicine, The University of Sydney, Camperdown, Australia

96
Kishimoto T, Goto T, Matsuda T, Iwawaki Y, Ichikawa T. Application of artificial intelligence in the dental field: A literature review. J Prosthodont Res 2021; 66:19-28. [PMID: 33441504 DOI: 10.2186/jpr.jpr_d_20_00139]
Abstract
PURPOSE The purpose of this study was to comprehensively review the literature regarding the application of artificial intelligence (AI) in the dental field, focusing on the evaluation criteria and architecture types. STUDY SELECTION Electronic databases (PubMed, Cochrane Library, Scopus) were searched. Full-text articles describing the clinical application of AI for the detection, diagnosis, and treatment of lesions, together with the AI method/architecture, were included. RESULTS The primary search yielded 422 studies from 1996 to 2019, of which 58 were finally selected. Regarding the year of publication, the oldest study, reported in 1996, focused on "oral and maxillofacial surgery." Machine-learning architectures were employed in the selected studies, and approximately half (29/58) employed neural networks. Regarding the evaluation criteria, eight studies compared the results obtained by AI with the diagnoses formulated by dentists, while several studies compared two or more architectures in terms of performance. The following parameters were employed for evaluating AI performance: accuracy, sensitivity, specificity, mean absolute error, root mean squared error, and area under the receiver operating characteristic curve. CONCLUSIONS Application of AI in the dental field has progressed; however, the criteria for evaluating its efficacy have not been clarified. Better-quality data for machine learning are necessary to achieve effective diagnosis of lesions and suitable treatment planning.
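The confusion-matrix-derived metrics named in this abstract (accuracy, sensitivity, specificity) reduce to a few lines of arithmetic; the counts below are invented for illustration, not data from any cited study:

```python
# Metrics from a binary confusion matrix, as listed in the review above.
# tp/fp/fn/tn are made-up example counts.
tp, fp, fn, tn = 90, 5, 10, 95

accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall fraction correct
sensitivity = tp / (tp + fn)                # true-positive rate (recall)
specificity = tn / (tn + fp)                # true-negative rate

print(accuracy, sensitivity, specificity)  # 0.925 0.9 0.95
```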
Affiliation(s)
- Takahiro Kishimoto
- Department of Prosthodontics & Oral Rehabilitation, Tokushima University Graduate School of Biomedical Sciences
- Takaharu Goto
- Department of Prosthodontics & Oral Rehabilitation, Tokushima University Graduate School of Biomedical Sciences
- Takashi Matsuda
- Department of Prosthodontics & Oral Rehabilitation, Tokushima University Graduate School of Biomedical Sciences
- Yuki Iwawaki
- Department of Prosthodontics & Oral Rehabilitation, Tokushima University Graduate School of Biomedical Sciences
- Tetsuo Ichikawa
- Department of Prosthodontics & Oral Rehabilitation, Tokushima University Graduate School of Biomedical Sciences

97

AIM in Dentistry. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_319-1]

98
Saleem HN, Sheikh UU, Khalid SA. Classification of Chest Diseases from X-ray Images on the CheXpert Dataset. Lecture Notes in Electrical Engineering 2021:837-850. [DOI: 10.1007/978-981-16-0749-3_64]

99
Evaluation of artificial intelligence for detecting impacted third molars on cone-beam computed tomography scans. J Stomatol Oral Maxillofac Surg 2020; 122:333-337. [PMID: 33346145 DOI: 10.1016/j.jormas.2020.12.006] [Received: 10/10/2020] [Revised: 12/11/2020] [Accepted: 12/14/2020]
Abstract
PURPOSE The aim of this study was to evaluate the diagnostic performance of an artificial intelligence (AI) application in evaluating impacted third molar teeth on cone-beam computed tomography (CBCT) images. MATERIAL AND METHODS In total, 130 third molar teeth (65 patients) were included in this retrospective study. Impaction detection, the number of impacted teeth, the number of roots/canals per tooth, and the relationship with adjacent anatomical structures (inferior alveolar canal and maxillary sinus) were compared between the human observer and the AI application. Agreement between the human observer and the AI application, which is based on a deep-CNN system, was evaluated for the recorded parameters using kappa analysis. RESULTS In total, 112 teeth (86.2%) were detected as impacted by the AI. The number of roots was correctly determined in 99 teeth (78.6%) and the number of canals in 82 teeth (68.1%). There was good agreement in determining the relationship of the inferior alveolar canal to the mandibular impacted third molars (kappa: 0.762), as well as in detecting the number of roots (kappa: 0.620). Similarly, there was excellent agreement regarding the relationship between maxillary impacted third molars and the maxillary sinus (kappa: 0.860). For detection of the number of maxillary molar canals, moderate agreement was found between the human observer and the AI examinations (kappa: 0.424). CONCLUSIONS The AI application showed high accuracy in detecting impacted third molar teeth and their relationship to anatomical structures.
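The kappa values quoted above are Cohen's kappa, which measures observer-vs-AI agreement corrected for chance; a minimal sketch, using invented two-category ratings rather than data from the study:

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
# The rating lists below are made-up illustrative labels, not study data.
def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    categories = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # chance agreement from each rater's marginal label frequencies
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

human = [1, 1, 0, 1, 0, 1, 1, 0]
ai    = [1, 1, 0, 0, 0, 1, 1, 1]
print(round(cohens_kappa(human, ai), 3))  # 0.467 (moderate agreement)
```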
100
Zhang X, Liang Y, Li W, Liu C, Gu D, Sun W, Miao L. Development and evaluation of deep learning for screening dental caries from oral photographs. Oral Dis 2020; 28:173-181. [PMID: 33244805 DOI: 10.1111/odi.13735] [Received: 09/17/2020] [Revised: 11/02/2020] [Accepted: 11/17/2020]
Affiliation(s)
- Xuan Zhang
- Department of Periodontology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Yuan Liang
- University of California, Los Angeles, CA, USA
- Wen Li
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Chao Liu
- Department of Orthodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Deao Gu
- Department of Orthodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Weibin Sun
- Department of Periodontology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Leiying Miao
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China