1. Yang S, Kim KD, Ariji E, Kise Y. Generative adversarial networks in dental imaging: a systematic review. Oral Radiol 2024; 40:93-108. [PMID: 38001347; DOI: 10.1007/s11282-023-00719-1]
Abstract
OBJECTIVES This systematic review of generative adversarial network (GAN) architectures for dental image analysis provides readers with a comprehensive overview of current GAN trends in dental imagery and potential future applications. METHODS Electronic databases (PubMed/MEDLINE, Scopus, Embase, and Cochrane Library) were searched to identify studies involving GANs for dental image analysis. Eighteen full-text articles describing the applications of GANs in dental imagery were reviewed. Risk of bias and applicability concerns were assessed using the QUADAS-2 tool. RESULTS GANs were used for various imaging modalities, including two-dimensional and three-dimensional images. In dental imaging, GANs were utilized for tasks such as artifact reduction, denoising, super-resolution, domain transfer, image generation for augmentation, outcome prediction, and identification. The generated images were incorporated into tasks such as landmark detection, object detection, and classification. Because of heterogeneity among the studies, a meta-analysis could not be conducted. Most studies (72%) had a low risk of bias in all four domains. However, only three (17%) studies had a low risk of applicability concerns. CONCLUSIONS This extensive analysis of GANs in dental imaging highlighted their broad application potential within the dental field. Future studies should address limitations related to the stability, repeatability, and overall interpretability of GAN architectures. By overcoming these challenges, the applicability of GANs in dentistry can be enhanced, ultimately benefiting the dental field's use of GANs and artificial intelligence.
Affiliation(s)
- Sujin Yang
  Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Kee-Deog Kim
  Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Eiichiro Ariji
  Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Yoshitaka Kise
  Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
2. Kazimierczak W, Kędziora K, Janiszewska-Olszowska J, Kazimierczak N, Serafin Z. Noise-Optimized CBCT Imaging of Temporomandibular Joints-The Impact of AI on Image Quality. J Clin Med 2024; 13:1502. [PMID: 38592413; PMCID: PMC10932444; DOI: 10.3390/jcm13051502]
Abstract
Background: Temporomandibular joint disorder (TMD) is a common medical condition. Cone beam computed tomography (CBCT) is effective in assessing TMD-related bone changes, but image noise may impair diagnosis. Emerging deep learning reconstruction algorithms (DLRs) could minimize noise and improve CBCT image clarity. This study compared the image quality of standard and deep learning-enhanced CBCT images for detecting osteoarthritis-related degeneration in the temporomandibular joints (TMJs). Methods: CBCT images of patients with suspected TMJ degenerative joint disease (DJD) were analyzed. The deep learning model (DLM) reconstructions were performed with ClariCT.AI software. Image quality was evaluated objectively via the contrast-to-noise ratio (CNR) in target areas and subjectively by two experts using a five-point scale. Both readers also assessed TMJ DJD lesions. The study involved 50 patients with a mean age of 28.29 years. Results: Objective analysis revealed significantly better image quality in the DLM reconstructions (CNR levels; p < 0.001). Subjective assessment showed high inter-reader agreement (κ = 0.805) but no significant difference in image quality between the reconstruction types (p = 0.055). Lesion counts were not significantly correlated with the reconstruction type (p > 0.05). Conclusions: The analyzed DLM reconstruction notably enhanced the objective image quality of TMJ CBCT images but did not significantly alter the subjective quality or DJD lesion diagnosis. However, the readers favored the DLM images, indicating potential for better TMD diagnosis with CBCT and meriting further study.
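The objective metric in this study, contrast-to-noise ratio (CNR), is conventionally computed from the mean intensities of a target region of interest (ROI) and a background ROI together with the background noise. A minimal numpy sketch using one common CNR definition and simulated ROI values (the study's exact variant and ROI placement are not specified here):

```python
import numpy as np

def cnr(roi_target, roi_background):
    """Contrast-to-noise ratio: ROI mean difference over background noise.
    (One common definition; the paper does not state its exact variant.)"""
    mu_t = float(np.mean(roi_target))
    mu_b = float(np.mean(roi_background))
    sigma_b = float(np.std(roi_background))
    return abs(mu_t - mu_b) / sigma_b

# illustrative ROIs: simulated high-density bone vs. soft-tissue background
rng = np.random.default_rng(0)
roi_bone = rng.normal(200.0, 10.0, size=(32, 32))
roi_soft = rng.normal(100.0, 10.0, size=(32, 32))
value = cnr(roi_bone, roi_soft)  # roughly 10 for these simulated ROIs
```

A higher CNR indicates that the target structure stands out more clearly from the noise floor, which is why it serves as an objective proxy for diagnostic image quality.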
Affiliation(s)
- Wojciech Kazimierczak
  Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
  Department of Interdisciplinary Dentistry, Pomeranian Medical University in Szczecin, 70-111 Szczecin, Poland
- Kamila Kędziora
  Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Natalia Kazimierczak
  Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Zbigniew Serafin
  Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
3. Katsumata A. Deep learning and artificial intelligence in dental diagnostic imaging. Jpn Dent Sci Rev 2023; 59:329-333. [PMID: 37811196; PMCID: PMC10551806; DOI: 10.1016/j.jdsr.2023.09.004]
Abstract
The application of artificial intelligence (AI) based on deep learning in dental diagnostic imaging is increasing. Several popular deep learning tasks have been applied to dental diagnostic images. Classification tasks are used to classify images with and without positive abnormal findings or to evaluate the progress of a lesion based on imaging findings. Region (object) detection and segmentation tasks have been used for tooth identification in panoramic radiographs. This technique is useful for automatically creating a patient's dental chart. Deep learning methods can also be used for detecting and evaluating anatomical structures of interest from images. Furthermore, generative AI based on natural language processing can automatically create written reports from the findings of diagnostic imaging.
4. Yang S, Kim KD, Ariji E, Takata N, Kise Y. Evaluating the performance of generative adversarial network-synthesized periapical images in classifying C-shaped root canals. Sci Rep 2023; 13:18038. [PMID: 37865655; PMCID: PMC10590373; DOI: 10.1038/s41598-023-45290-1]
Abstract
This study evaluated the performance of generative adversarial network (GAN)-synthesized periapical images for classifying C-shaped root canals, which are challenging to diagnose because of their complex morphology. GANs have emerged as a promising technique for generating realistic images, offering a potential solution for data augmentation in scenarios with limited training datasets. Periapical images were synthesized using the StyleGAN2-ADA framework, and their quality was evaluated based on the average Fréchet inception distance (FID) and a visual Turing test. The average FID was 35.353 (± 4.386) for synthesized C-shaped canal images and 25.471 (± 2.779) for non-C-shaped canal images. The visual Turing test, conducted by two radiologists on 100 randomly selected images, revealed that distinguishing between real and synthetic images was difficult. These results indicate that GAN-synthesized images exhibit satisfactory visual quality. The classification performance of the neural network, when augmented with GAN data, improved compared with using real data alone, which could be advantageous in addressing class-imbalanced data conditions. GAN-generated images have proven to be an effective data augmentation method, addressing the limitations of limited training data and computational resources in diagnosing dental anomalies.
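The FID values quoted above are, in the usual formulation, the Fréchet distance between two Gaussians fitted to deep feature embeddings of the real and synthetic image sets. A minimal numpy sketch of that distance given precomputed feature means and covariances (the study's full pipeline, which first extracts Inception features, is not reproduced here):

```python
import numpy as np

def sqrtm_psd(a):
    """Square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu1, cov1, mu2, cov2):
    """Frechet distance between Gaussians N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^(1/2))."""
    s1 = sqrtm_psd(cov1)
    covmean = sqrtm_psd(s1 @ cov2 @ s1)  # symmetric form of (cov1 cov2)^(1/2)
    diff = np.asarray(mu1) - np.asarray(mu2)
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# identical Gaussians give 0; shifting the mean by a unit vector adds exactly 1
mu = np.zeros(4)
cov = np.eye(4)
d0 = fid(mu, cov, mu, cov)
d1 = fid(mu, cov, mu + np.array([1.0, 0.0, 0.0, 0.0]), cov)
```

Lower FID means the synthetic feature distribution lies closer to the real one, which is why the non-C-shaped images (FID 25.471) are, by this metric, slightly more realistic than the C-shaped ones (FID 35.353).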
Affiliation(s)
- Sujin Yang
  Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Kee-Deog Kim
  Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Eiichiro Ariji
  Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Natsuho Takata
  Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Yoshitaka Kise
  Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
5. Zhang J, Wang X, Liu J, Zhang D, Lu Y, Zhou Y, Sun L, Hou S, Fan X, Shen S, Zhao J. Multispectral Drone Imagery and SRGAN for Rapid Phenotypic Mapping of Individual Chinese Cabbage Plants. Plant Phenomics 2022; 2022:0007. [PMID: 37266137; PMCID: PMC10230957; DOI: 10.34133/plantphenomics.0007]
Abstract
The phenotypic parameters of crop plants can be evaluated accurately and quickly using an unmanned aerial vehicle (UAV) equipped with imaging equipment. In this study, hundreds of images of Chinese cabbage (Brassica rapa L. ssp. pekinensis) germplasm resources were collected with a low-cost UAV system and used to estimate cabbage width, length, and relative chlorophyll content (soil plant analysis development [SPAD] value). A super-resolution generative adversarial network (SRGAN) was used to improve the resolution of the original images, and the U-Net semantic segmentation network (UNet) was used to segment each individual Chinese cabbage. Finally, the actual length and width were calculated on the basis of the pixel extent of the individual cabbage and the ground sampling distance. The SPAD value of Chinese cabbage was also analyzed on the basis of an RGB image of a single cabbage after background removal. After comparison of various models, the model in which visible images were enhanced with SRGAN showed the best performance. With the validation set and the UNet model, the segmentation accuracy was 94.43%. For Chinese cabbage dimensions, the model was better at estimating length than width. The R2 of the visible-band model with images enhanced using SRGAN was greater than 0.84. For SPAD prediction, the R2 of the model with images enhanced with SRGAN was greater than 0.78. The root mean square errors of the three semantic segmentation models were all less than 2.18. The results showed that the width, length, and SPAD values of Chinese cabbage predicted using UAV imaging were comparable to those obtained from manual measurements in the field. Overall, this research demonstrates not only that UAVs are useful for acquiring quantitative phenotypic data on Chinese cabbage but also that a regression model can provide reliable SPAD predictions. This approach offers a reliable and convenient phenotyping tool for the investigation of Chinese cabbage breeding traits.
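The pixel-to-physical conversion described above (actual length and width from the segmented pixel extent and the ground sampling distance) can be sketched as follows; the GSD value and mask here are illustrative, not values from the study:

```python
import numpy as np

def mask_extent_to_length(mask, gsd_m_per_px):
    """Convert a segmented plant mask's bounding-box extent to metres.
    gsd_m_per_px: ground sampling distance, i.e. metres of ground per pixel
    (determined by flight altitude, sensor size, and focal length)."""
    ys, xs = np.nonzero(mask)
    length_px = ys.max() - ys.min() + 1   # extent along image rows
    width_px = xs.max() - xs.min() + 1    # extent along image columns
    return length_px * gsd_m_per_px, width_px * gsd_m_per_px

# hypothetical 5 mm/px GSD and a 40 x 60 px cabbage mask
mask = np.zeros((100, 100), dtype=bool)
mask[10:50, 20:80] = True
length_m, width_m = mask_extent_to_length(mask, 0.005)  # 0.2 m x 0.3 m
```

Because the conversion scales linearly with GSD, super-resolving the imagery (as with SRGAN here) effectively shrinks the per-pixel quantization error of the recovered dimensions.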
Affiliation(s)
- Jun Zhang
  State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
  College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Xinxin Wang
  State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
  Mountain Area Research Institute, Hebei Agricultural University, 071001 Baoding, China
- Jingyan Liu
  College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Dongfang Zhang
  State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
  College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
- Yin Lu
  College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
- Yuhong Zhou
  College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Lei Sun
  College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Shenglin Hou
  Hebei Academy of Agriculture and Forestry Sciences, 050000 Shijiazhuang, China
- Xiaofei Fan
  State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
  College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Shuxing Shen
  State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
  College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
- Jianjun Zhao
  State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
  College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
6. Limited-Angle CT Reconstruction with Generative Adversarial Network Sinogram Inpainting and Unsupervised Artifact Removal. Appl Sci (Basel) 2022. [DOI: 10.3390/app12126268]
Abstract
High-quality limited-angle computed tomography (CT) reconstruction is in high demand in the medical field. Because they do not require paired sinograms and reconstructed images, unsupervised methods have attracted wide attention from researchers. Existing unsupervised reconstruction methods, however, still require [0°, 120°] of projection data, and the quality of the reconstruction leaves room for improvement. In this paper, we propose a limited-angle CT reconstruction generative adversarial network based on sinogram inpainting and unsupervised artifact removal to further reduce the required angular range and improve image quality. We collected a large number of CT lung and head images and Radon-transformed them into incomplete sinograms. A sinogram inpainting network is developed to complete the missing sinogram data, after which the filtered back projection algorithm can output images with most artifacts removed; these images are then mapped to artifact-free images by an artifact removal network. Finally, we generated 512×512 reconstruction results comparable to full-scan reconstruction using only [0°, 90°] of limited sinogram projection data. Compared with current unsupervised methods, the proposed method reconstructs images of higher quality.
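The pipeline described above (Radon transform to a sinogram, completion of the missing angular range, back projection) can be illustrated at toy scale. The following numpy-only sketch implements a crude parallel-beam Radon transform and unfiltered backprojection to show how a limited angular range degrades the reconstruction; it is a didactic stand-in, not the paper's GAN-based method:

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate a square image about its center (nearest-neighbor sampling)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:n, 0:n]
    # inverse mapping: source coordinates for each output pixel
    x = (xs - c) * np.cos(a) + (ys - c) * np.sin(a) + c
    y = -(xs - c) * np.sin(a) + (ys - c) * np.cos(a) + c
    xi = np.clip(np.rint(x).astype(int), 0, n - 1)
    yi = np.clip(np.rint(y).astype(int), 0, n - 1)
    return img[yi, xi]

def radon(img, thetas_deg):
    """Parallel-beam sinogram: one column of ray sums per projection angle."""
    return np.stack([rotate_nn(img, -t).sum(axis=0) for t in thetas_deg], axis=1)

def backproject(sino, thetas_deg):
    """Unfiltered backprojection: smear each projection back across the image."""
    n = sino.shape[0]
    out = np.zeros((n, n))
    for i, t in enumerate(thetas_deg):
        out += rotate_nn(np.tile(sino[:, i], (n, 1)), t)
    return out / len(thetas_deg)

# disk phantom; compare a full [0, 180) scan with a limited [0, 90) scan
n = 64
yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
phantom = ((xx ** 2 + yy ** 2) < 12 ** 2).astype(float)

angles_full = np.arange(0, 180, 3)
angles_limited = np.arange(0, 90, 3)
full = backproject(radon(phantom, angles_full), angles_full)
limited = backproject(radon(phantom, angles_limited), angles_limited)
```

In the limited-angle result the disk is smeared anisotropically along the missing directions; the paper's sinogram inpainting network fills in exactly those missing sinogram columns before reconstruction so that filtered back projection sees a (synthetically) complete scan.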
7. Cone-Beam Angle Dependency of 3D Models Computed from Cone-Beam CT Images. Sensors 2022; 22:1253. [PMID: 35162003; PMCID: PMC8837983; DOI: 10.3390/s22031253]
Abstract
Cone-beam dental CT can provide high-precision 3D images of the teeth and surrounding bones. From the 3D CT images, 3D models, also called digital impressions, can be computed for CAD/CAM-based fabrication of dental restorations or orthodontic devices. However, cone-beam angle-dependent artifacts, mostly caused by the incompleteness of the projection data acquired in the circular cone-beam scan geometry, can induce significant errors in the 3D models. Using a micro-CT, we acquired CT projection data of plaster cast models at several different cone-beam angles and investigated the dependency of the model errors on the cone-beam angle, in comparison with reference models obtained by optically scanning the plaster models. For the 3D CT image reconstruction, we used the conventional Feldkamp algorithm and the combined half-scan image reconstruction algorithm to investigate the dependency of the model errors on the reconstruction algorithm. We analyzed the mean positive and mean negative deviations of the surface points of the CT-image-derived 3D models from the reference model and compared them between the two reconstruction algorithms. The model error increased with the cone-beam angle for both algorithms; however, the errors were smaller with the combined half-scan reconstruction when the cone-beam angle was as large as 10 degrees.
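The error analysis described above reduces, per surface point, to signed deviations from the reference model, summarized separately for the positive and negative sides. A minimal sketch of that summary statistic (the deviation values are illustrative, not data from the study):

```python
import numpy as np

def deviation_summary(signed_dev_mm):
    """Mean positive and mean negative deviations of CT-model surface points
    from the optical-scan reference (signed point-to-surface distances)."""
    d = np.asarray(signed_dev_mm, dtype=float)
    pos = d[d > 0]
    neg = d[d < 0]
    mean_pos = float(pos.mean()) if pos.size else 0.0
    mean_neg = float(neg.mean()) if neg.size else 0.0
    return mean_pos, mean_neg

# illustrative deviations in mm
mp, mn = deviation_summary([0.12, -0.05, 0.30, -0.15, 0.0])  # 0.21, -0.10
```

Splitting the mean by sign preserves the direction of the error (material added vs. removed relative to the reference surface), which an unsigned mean deviation would hide.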