1
Zhang R, Mo H, Hu W, Jie B, Xu L, He Y, Ke J, Wang J. Super-resolution landmark detection networks for medical images. Comput Biol Med 2024; 182:109095. PMID: 39236661. DOI: 10.1016/j.compbiomed.2024.109095.
Abstract
Craniomaxillofacial (CMF) and nasal landmark detection are fundamental components of computer-assisted surgery. Medical landmark detection methods divide into regression-based and heatmap-based approaches, with heatmap-based methods forming one of the main methodological branches. These methods rely on high-resolution (HR) features, which carry more location information, to reduce the network error caused by sub-pixel localization. Previous studies extracted HR patches around each landmark from downsampled images via object detection and then fed them into the network to obtain HR features, but such complex multistage pipelines hurt accuracy. The network error introduced by downsampling and upsampling operations during training, which interpolate low-resolution features to generate HR features or predicted heatmaps, also remains significant. We propose super-resolution landmark detection networks (SRLD-Net) and a super-resolution UNet (SR-UNet) to reduce this network error effectively. SRLD-Net uses a pyramid pooling block, a pyramid fusion block, and a super-resolution fusion block to combine global prior knowledge with multi-scale local features; similarly, SR-UNet adopts a pyramid pooling block and a super-resolution block. These components markedly improve the representation learning ability of the proposed methods. A super-resolution upsampling layer is then used to generate detailed predicted heatmaps. Compared with state-of-the-art methods on craniomaxillofacial, nasal, and mandibular molar datasets, the proposed networks performed better: the mean errors of 18 CMF, 6 nasal, and 14 mandibular landmarks were 1.39 ± 1.04, 1.31 ± 1.09, and 2.01 ± 4.33 mm, respectively. These results indicate that super-resolution methods have great potential in medical landmark detection tasks. This paper provides two effective heatmap-based landmark detection networks; the code is released at https://github.com/Runshi-Zhang/SRLD-Net.
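Heatmap-based detection, as described in this abstract, encodes each landmark as a Gaussian peak and decodes a coordinate back from the predicted map; the sub-pixel error the authors target arises in exactly this round trip. A minimal 2D sketch for illustration only (the grid size, sigma, and plain argmax decoder are assumptions, not the paper's SRLD-Net implementation):

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """Render a 2D Gaussian heatmap peaked at the landmark location."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma**2))

def decode_heatmap(hm):
    """Recover the landmark as the argmax of the predicted heatmap.
    Integer decoding like this is what introduces sub-pixel error."""
    return np.unravel_index(np.argmax(hm), hm.shape)

hm = gaussian_heatmap((64, 64), (20, 33))
```

Upsampling a low-resolution heatmap before decoding, which is the role of the super-resolution layer proposed here, shrinks the quantization error of the argmax.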
Affiliation(s)
- Runshi Zhang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Hao Mo
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Weini Hu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Bimeng Jie
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Lin Xu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Yang He
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Jia Ke
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
2
Hendrickx J, Gracea RS, Vanheers M, Winderickx N, Preda F, Shujaat S, Jacobs R. Can artificial intelligence-driven cephalometric analysis replace manual tracing? A systematic review and meta-analysis. Eur J Orthod 2024; 46:cjae029. PMID: 38895901. PMCID: PMC11185929. DOI: 10.1093/ejo/cjae029.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to investigate the accuracy and efficiency of artificial intelligence (AI)-driven automated landmark detection for cephalometric analysis on two-dimensional (2D) lateral cephalograms and three-dimensional (3D) cone-beam computed tomographic (CBCT) images. SEARCH METHODS An electronic search was conducted in the following databases: PubMed, Web of Science, Embase, and grey literature, with a search timeline extending up to January 2024. SELECTION CRITERIA Studies that employed AI for 2D or 3D cephalometric landmark detection were included. DATA COLLECTION AND ANALYSIS Study selection, data extraction, and quality assessment of the included studies were performed independently by two reviewers. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. A meta-analysis was conducted to evaluate the accuracy of 2D landmark identification based on both mean radial error and standard error. RESULTS Following removal of duplicates, title and abstract screening, and full-text reading, 34 publications were selected. Amongst these, 27 studies evaluated the accuracy of AI-driven automated landmarking on 2D lateral cephalograms, while 7 studies involved 3D-CBCT images. A meta-analysis, based on the success detection rate of landmark placement on 2D images, revealed that the error was below the clinically acceptable threshold of 2 mm (1.39 mm; 95% confidence interval: 0.85-1.92 mm). For 3D images, a meta-analysis could not be conducted due to significant heterogeneity amongst the study designs; however, qualitative synthesis indicated that the mean error of landmark detection on 3D images ranged from 1.0 to 5.8 mm. Both automated 2D and 3D landmarking proved time-efficient, taking less than 1 min. Most studies exhibited a high risk of bias in data selection (n = 27) and reference standard (n = 29).
CONCLUSION AI-driven cephalometric landmark detection on both 2D cephalograms and 3D-CBCT images showed potential in terms of accuracy and time efficiency; however, the generalizability and robustness of these AI systems could benefit from further improvement. REGISTRATION PROSPERO: CRD42022328800.
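The two quantities this review pools, mean radial error and the share of landmarks inside the 2 mm clinical threshold (the success detection rate), are straightforward to compute from paired coordinates. A generic sketch of both metrics (definitions only, not the review's analysis code):

```python
import numpy as np

def mean_radial_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth landmarks
    (in mm when the coordinates are in mm)."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=-1)
    return float(d.mean())

def success_detection_rate(pred, gt, threshold=2.0):
    """Fraction of landmarks localized within the clinical threshold (default 2 mm)."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=-1)
    return float((d <= threshold).mean())
```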
Affiliation(s)
- Julie Hendrickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Rellyca Sola Gracea
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
- Michiel Vanheers
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Nicolas Winderickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Flavia Preda
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh 14611, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, 141 04 Stockholm, Sweden
3
Tao L, Zhang X, Yang Y, Cheng M, Zhang R, Qian H, Wen Y, Yu H. Craniomaxillofacial landmarks detection in CT scans with limited labeled data via semi-supervised learning. Heliyon 2024; 10:e34583. PMID: 39130473. PMCID: PMC11315087. DOI: 10.1016/j.heliyon.2024.e34583.
Abstract
Background Three-dimensional cephalometric analysis is crucial in craniomaxillofacial assessment, with landmark detection in craniomaxillofacial (CMF) CT scans being a key component. However, creating robust deep learning models for this task typically requires extensive CMF CT datasets annotated by experienced medical professionals, a process that is time-consuming and labor-intensive. Conversely, acquiring large volumes of unlabeled CMF CT data is relatively straightforward. Thus, semi-supervised learning (SSL), which leverages limited labeled data supplemented by a sufficient unlabeled dataset, could be a viable solution to this challenge. Method We developed an SSL model, named CephaloMatch, based on a strong-weak perturbation consistency framework. The proposed model incorporates a head-position rectification technique, via coarse detection, to enhance consistency between the labeled and unlabeled datasets, and a multilayer perturbation method to expand the perturbation space. The model was assessed using 362 CMF CT scans, divided into a training set (60 scans), a validation set (14 scans), and an unlabeled set (288 scans). Result The proposed SSL model attained a detection error of 1.60 ± 0.87 mm, significantly surpassing a conventional fully supervised model (1.94 ± 1.12 mm). Notably, the SSL model achieved equivalent detection accuracy (1.91 ± 1.00 mm) with only half the labeled dataset, compared with the fully supervised model. Conclusions The proposed SSL model demonstrated excellent performance in landmark detection with a limited labeled CMF CT dataset, significantly reducing the workload of medical professionals and enhancing the accuracy of 3D cephalometric analysis.
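The strong-weak perturbation consistency at the core of frameworks like CephaloMatch treats the prediction on a weakly perturbed unlabeled scan as the target for the prediction on a strongly perturbed view of the same scan. A toy numpy sketch under assumed Gaussian-noise perturbations (the actual model uses learned networks, head-position rectification, and multilayer perturbations):

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_aug(x):
    """Mild perturbation of the input (stand-in for weak augmentation)."""
    return x + rng.normal(0.0, 0.01, x.shape)

def strong_aug(x):
    """Stronger perturbation (stand-in for strong augmentation)."""
    return x + rng.normal(0.0, 0.1, x.shape)

def consistency_loss(model, x_unlabeled):
    """Penalize disagreement between predictions on the two views; the weak-view
    prediction acts as a pseudo-label (it would be gradient-detached in practice)."""
    target = model(weak_aug(x_unlabeled))
    pred = model(strong_aug(x_unlabeled))
    return float(np.mean((pred - target) ** 2))
```

This unsupervised term is added to the ordinary supervised loss on the small labeled set, which is how the unlabeled scans contribute to training.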
Affiliation(s)
- Leran Tao
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Xu Zhang
- Mechanical College, Shanghai Dianji University, Shanghai, 201306, China
- Yang Yang
- Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, 200333, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Mengjia Cheng
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Rongbin Zhang
- College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200125, China
- Yaofeng Wen
- Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, 200333, China
- Hongbo Yu
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
4
Lee HS, Yang S, Han JY, Kang JH, Kim JE, Huh KH, Yi WJ, Heo MS, Lee SS. Automatic detection and classification of nasopalatine duct cyst and periapical cyst on panoramic radiographs using deep convolutional neural networks. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:184-195. PMID: 38158267. DOI: 10.1016/j.oooo.2023.09.012.
Abstract
OBJECTIVE The aim of this study was to evaluate deep convolutional neural network (DCNN) methods for the detection and classification of nasopalatine duct cysts (NPDC) and periapical cysts (PAC) on panoramic radiographs. STUDY DESIGN A total of 1,209 panoramic radiographs with 606 NPDC and 603 PAC were labeled with bounding boxes and divided into training, validation, and test sets at an 8:1:1 ratio. The networks used were EfficientDet-D3, Faster R-CNN, YOLO v5, RetinaNet, and SSD, and mean average precision (mAP) was used to assess performance. Sixty images with no lesion in the anterior maxilla were added to the test set, which was then read by two dentists with no training in radiology (GPs) and by EfficientDet-D3, and the performances were compared. RESULTS The mAP for each DCNN was EfficientDet-D3 93.8%, Faster R-CNN 90.8%, YOLO v5 89.5%, RetinaNet 79.4%, and SSD 60.9%. The classification performance of EfficientDet-D3 was higher than that of the GPs, with accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 94.4%, 94.4%, 97.2%, 94.6%, and 97.2%, respectively. CONCLUSIONS The proposed method achieved high detection and classification performance for NPDC and PAC compared with the GPs and presents promising prospects for clinical application.
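The mAP figures reported above rest on intersection-over-union (IoU) matching between predicted and ground-truth bounding boxes. A minimal IoU helper showing the underlying geometry (the generic definition, not the study's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if the boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A prediction typically counts as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5; precision-recall curves over confidence scores then yield average precision per class, and mAP is the mean over classes.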
Affiliation(s)
- Han-Sol Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Ji-Yong Han
- Interdisciplinary Program in Bioengineering, College of Engineering, Seoul National University, Seoul, South Korea
- Ju-Hee Kang
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Won-Jin Yi
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea; Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea; Interdisciplinary Program in Bioengineering, College of Engineering, Seoul National University, Seoul, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
5
Sahlsten J, Järnstedt J, Jaskari J, Naukkarinen H, Mahasantipiya P, Charuakkra A, Vasankari K, Hietanen A, Sundqvist O, Lehtinen A, Kaski K. Deep learning for 3D cephalometric landmarking with heterogeneous multi-center CBCT dataset. PLoS One 2024; 19:e0305947. PMID: 38917161. PMCID: PMC11198780. DOI: 10.1371/journal.pone.0305947.
Abstract
Cephalometric analysis is a critically important and common procedure prior to orthodontic treatment and orthognathic surgery. Recently, deep learning approaches have been proposed for automatic 3D cephalometric analysis based on landmarking of CBCT scans. However, these approaches have relied on uniform datasets from a single center or imaging device and have not considered patient ethnicity. In addition, previous works have covered a limited number of clinically relevant cephalometric landmarks, and the approaches were computationally infeasible, both of which impair integration into the clinical workflow. Here our aim is to analyze the clinical applicability of a lightweight deep learning neural network for fast localization of 46 clinically significant cephalometric landmarks on multi-center, multi-ethnic, and multi-device data consisting of 309 CBCT scans from Finnish and Thai patients. The localization performance of our approach yielded a mean distance of 1.99 ± 1.55 mm for the Finnish cohort and 1.96 ± 1.25 mm for the Thai cohort. This performance was clinically acceptable (≤ 2 mm) for 61.7% and 64.3% of the landmarks in the Finnish and Thai cohorts, respectively. Furthermore, the estimated landmarks were used to measure cephalometric characteristics successfully, i.e., with ≤ 2 mm or ≤ 2° error, in 85.9% of the Finnish and 74.4% of the Thai cases. Between the two patient cohorts, 33 of the landmarks and all cephalometric characteristics showed no statistically significant difference, as measured by the Mann-Whitney U test with Benjamini-Hochberg correction (significance threshold p < 0.05). Moreover, the method is computationally light, providing predictions with mean durations of 0.77 s and 2.27 s with single-machine GPU and CPU computing, respectively. Our findings advocate for the inclusion of this method in clinical settings based on its technical feasibility and robustness across varied clinical datasets.
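The Benjamini-Hochberg correction used with the Mann-Whitney U tests above controls the false discovery rate across the many per-landmark comparisons. A compact sketch of the standard BH step-up procedure (generic, not the paper's code):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected under the Benjamini-Hochberg
    false-discovery-rate procedure at level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # find the largest k with p_(k) <= (k / m) * alpha, then reject p_(1)..p_(k)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject
```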
Affiliation(s)
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Jorma Järnstedt
- Department of Radiology, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
- Faculty of Medicine and Health Technology, University of Tampere, Tampere, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Phattaranant Mahasantipiya
- Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Division of Oral and Maxillofacial Radiology, Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Arnon Charuakkra
- Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Division of Oral and Maxillofacial Radiology, Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Krista Vasankari
- Department of Oral Diseases, Tampere University Hospital, Tampere, Finland
- Antti Lehtinen
- Department of Radiology, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
- Faculty of Medicine and Health Technology, University of Tampere, Tampere, Finland
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- The Alan Turing Institute, British Library, London, United Kingdom
6
Kim J, Jeung D, Cho R, Yang B, Hong J. A Proof of Concept: Optimized Jawbone-Reduction Model for Mandibular Fracture Surgery. J Imaging Inform Med 2024; 37:1151-1159. PMID: 38332406. DOI: 10.1007/s10278-024-01014-z.
Abstract
Previous research on computer-assisted jawbone reduction for mandibular fracture surgery has focused only on the relationship between the fractured sections, disregarding proper dental occlusion with the maxilla. To overcome the malocclusion caused by overlooking dental articulation, this study provides a model for jawbone reduction based on dental occlusion. After dental landmarks and fracture-section features are extracted, the maxilla and the two mandible segments are first aligned using the extracted dental landmarks. A swarm-based optimization is then performed that simultaneously considers the fit of the fracture sections and the dental occlusion condition. The proposed method was evaluated on jawbone data from 12 subjects with simulated and real mandibular fractures. The results showed that the optimized model achieved both accurate jawbone reduction and the desired dental occlusion, which may not be achievable with existing methods.
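Swarm-based optimization of the kind used here keeps a population of candidate solutions that drift toward their personal and global bests under a shared cost. A toy particle swarm sketch minimizing a stand-in quadratic "alignment" cost (all parameters are illustrative; the paper's real cost combines fracture-section fit with the occlusion condition):

```python
import numpy as np

rng = np.random.default_rng(42)

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: particles move under inertia plus
    cognitive (personal-best) and social (global-best) pulls."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, float(pbest_cost.min())

# toy "alignment" cost: squared distance of a 3-parameter pose to a target
best, best_cost = pso_minimize(lambda p: float(np.sum((p - 1.0) ** 2)), dim=3)
```

The same loop applies when each particle encodes a rigid-body pose of a mandible segment and the cost scores fracture-surface fit plus occlusal contact.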
Affiliation(s)
- Jinmin Kim
- DIGITRACK. Inc., Daegu, Republic of Korea
- Deokgi Jeung
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-Daero, Daegu, 42988, Republic of Korea
- Department of Medical Robotics, Korea Institute of Machinery and Materials, Daegu, Republic of Korea
- Ranyeong Cho
- Division of Oral & Maxillofacial Surgery, Hallym University Sacred Heart Hospital, 22 Gwanpyeong-Ro 170Beon-Gil, Gyeonggi-Do, 14068, Republic of Korea
- Byoungeun Yang
- Division of Oral & Maxillofacial Surgery, Hallym University Sacred Heart Hospital, 22 Gwanpyeong-Ro 170Beon-Gil, Gyeonggi-Do, 14068, Republic of Korea
- Jaesung Hong
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-Daero, Daegu, 42988, Republic of Korea
7
Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024: S2212-4403(23)01566-3. PMID: 38637235. DOI: 10.1016/j.oooo.2023.12.790.
Abstract
BACKGROUND Artificial intelligence (AI) technology has been increasingly developed for oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms across dentomaxillofacial imaging modalities. STUDY DESIGN A systematic search of the PubMed and Scopus databases was performed. The search strategy combined the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any mismatch was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. CONCLUSION Despite their limitations, the AI models showed promising results. Further studies are needed to explore specific applications and real-world scenarios before these models can be confidently integrated into dental practice.
Affiliation(s)
- Serlie Hartoonian
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Matine Hosseini
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Iman Yousefi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mina Mahdian
- Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook University, Stony Brook, NY, USA
- Mitra Ghazizadeh Ahsaie
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
8
Tao L, Li M, Zhang X, Cheng M, Yang Y, Fu Y, Zhang R, Qian D, Yu H. Automatic craniomaxillofacial landmarks detection in CT images of individuals with dentomaxillofacial deformities by a two-stage deep learning model. BMC Oral Health 2023; 23:876. PMID: 37978486. PMCID: PMC10657133. DOI: 10.1186/s12903-023-03446-5.
Abstract
BACKGROUND Accurate cephalometric analysis plays a vital role in diagnosis and subsequent surgical planning in orthognathic and orthodontic treatment. However, manual digitization of anatomical landmarks in computed tomography (CT) is subject to limitations such as low accuracy, poor repeatability, and excessive time consumption. Furthermore, landmark detection is more difficult in individuals with dentomaxillofacial deformities than in normal individuals. Therefore, this study aimed to develop a deep learning model to automatically detect landmarks in CT images of patients with dentomaxillofacial deformities. METHODS Craniomaxillofacial (CMF) CT data from 80 patients with dentomaxillofacial deformities were collected for model development. In each CT image, 77 anatomical landmarks digitized by experienced CMF surgeons were set as the ground truth. 3D UX-Net, a cutting-edge medical image segmentation network, was adopted as the backbone of the model architecture. Moreover, a new region-division pattern for CMF structures was designed as a training strategy to optimize the use of computational resources and image resolution. To evaluate the model's performance, several experiments compared it with the manual digitization approach. RESULTS The training set and the validation set included 58 and 22 samples, respectively. The developed model accurately detected 77 landmarks on bone, soft tissue, and teeth with a mean error of 1.81 ± 0.89 mm. Removing region division before training significantly increased the prediction error (2.34 ± 1.01 mm). For manual digitization, the inter-observer and intra-observer variations were 1.27 ± 0.70 mm and 1.01 ± 0.74 mm, respectively. In all divided regions except the Teeth Region (TR), the model demonstrated performance equivalent to that of experienced CMF surgeons in landmark detection (p > 0.05).
CONCLUSIONS The developed model demonstrated excellent performance in detecting craniomaxillofacial landmarks when benchmarked against expert manual digitization. The results also verify that the region-division pattern designed in this study remarkably improved detection accuracy.
Affiliation(s)
- Leran Tao
- Department of Oral and Cranio-maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Meng Li
- Department of Oral and Cranio-maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Xu Zhang
- Mechanical College, Shanghai Dianji University, Shanghai, 201306, China
- Mengjia Cheng
- Department of Oral and Cranio-maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Yang Yang
- Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, 200333, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Yijiao Fu
- College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200125, China
- Rongbin Zhang
- College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200125, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Hongbo Yu
- Department of Oral and Cranio-maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
9
Gu Z, Wu Z, Dai N. Image generation technology for functional occlusal pits and fissures based on a conditional generative adversarial network. PLoS One 2023; 18:e0291728. PMID: 37725620. PMCID: PMC10508633. DOI: 10.1371/journal.pone.0291728.
Abstract
The occlusal surfaces of natural teeth have complex functional pit-and-fissure features. These morphological features directly affect the occlusal state of the upper and lower teeth. An image generation technology for functional occlusal pits and fissures is proposed to address the lack of local detailed crown-surface features in existing dental restoration methods. First, tooth depth-image datasets were constructed using an orthogonal projection method. Second, optimization of the model parameters was guided by introducing a jaw-position spatial constraint together with the L1 and perceptual loss functions. Finally, two image-quality metrics were applied to evaluate the generated images, and the dental crown was deformed using the generated occlusal pits and fissures as constraints for comparison with expert data. The results showed that images generated by the proposed network were of high quality and that the detailed pit and fissure features on the crown were effectively restored, with a standard deviation of 0.1802 mm relative to the expert-designed tooth crown models.
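Conditional-GAN training of this kind pairs an adversarial term with reconstruction penalties such as the L1 loss the abstract mentions. A hedged sketch of a pix2pix-style generator objective (the weight lam=100.0 is a common default, not taken from the paper, and the perceptual term is omitted):

```python
import numpy as np

def l1_loss(pred, target):
    """Pixel-wise L1 term: pushes the generated depth image toward the ground truth."""
    return float(np.mean(np.abs(pred - target)))

def adversarial_loss(d_fake):
    """Non-saturating generator term: push discriminator scores on fakes toward 1."""
    eps = 1e-8
    return float(-np.mean(np.log(d_fake + eps)))

def generator_objective(d_fake, pred, target, lam=100.0):
    """Combined generator objective: adversarial + lam * L1. A perceptual loss,
    as used in the paper, would add a feature-space distance term here."""
    return adversarial_loss(d_fake) + lam * l1_loss(pred, target)
```

The L1 term keeps generated pits and fissures geometrically close to the target depth map, while the adversarial term encourages realistic fine texture the L1 term alone tends to blur.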
Affiliation(s)
- Zhaodan Gu
- Jiangsu Automation Research Institute, Lianyungang, P.R. China
| | - Zhilei Wu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
- Ning Dai
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
10
Fan W, Zhang J, Wang N, Li J, Hu L. The Application of Deep Learning on CBCT in Dentistry. Diagnostics (Basel) 2023; 13:2056. [PMID: 37370951 DOI: 10.3390/diagnostics13122056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 06/06/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023] Open
Abstract
Cone beam computed tomography (CBCT) has become an essential tool in modern dentistry, allowing dentists to analyze the relationship between teeth and the surrounding tissues. However, traditional manual analysis can be time-consuming, and its accuracy depends on the user's proficiency. To address these limitations, deep learning (DL) systems have been integrated into CBCT analysis to improve accuracy and efficiency. Numerous DL models have been developed for tasks such as automatic diagnosis; segmentation and classification of the teeth, inferior alveolar nerve, bone, and airway; and preoperative planning. All research articles summarized were retrieved from PubMed, IEEE, Google Scholar, and Web of Science up to December 2022. Many studies have demonstrated that the application of deep learning technology to CBCT examination in dentistry has achieved significant progress, and its accuracy in radiology image analysis has reached the level of clinicians. However, in some fields its accuracy still needs to be improved. Furthermore, ethical issues and differences between CBCT devices may prohibit its extensive use. DL models have the potential to be used clinically as medical decision-making aids, and the combination of DL and CBCT can greatly reduce the workload of image reading. This review provides an up-to-date overview of the current applications of DL on CBCT images in dentistry, highlighting its potential and suggesting directions for future research.
Affiliation(s)
- Wenjie Fan
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jiaqi Zhang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Nan Wang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jia Li
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Li Hu
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
11
Serafin M, Baldini B, Cabitza F, Carrafiello G, Baselli G, Del Fabbro M, Sforza C, Caprioglio A, Tartaglia GM. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis. La Radiologia Medica 2023; 128:544-555. [PMID: 37093337 PMCID: PMC10181977 DOI: 10.1007/s11547-023-01629-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 03/28/2023] [Indexed: 04/25/2023]
Abstract
OBJECTIVES The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. METHODS PubMed/Medline, IEEE Xplore, Scopus and ArXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric image data suitable for 3D landmarking (Problem), a minimum of five automated landmarks detected by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported, as outcome, the mean value and standard deviation of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. RESULTS The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, whereas 11 studies were used for the meta-analysis. The overall random-effects model revealed a mean value of 2.44 mm, with high heterogeneity (I2 = 98.13%, τ2 = 1.018, p-value < 0.001); risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p value = 0.012). CONCLUSION Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed and improvements in landmark annotation accuracy have been made.
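The random-effects pooling with heterogeneity statistics (I², τ²) reported above is a standard computation. A minimal sketch of the DerSimonian-Laird estimator on illustrative inputs (not the review's actual study data):

```python
import numpy as np

def dersimonian_laird(means, ses):
    """Random-effects pooling of per-study mean errors.
    `means` are study-level mean landmark errors (mm); `ses` their
    standard errors. Returns (pooled mean, tau^2, I^2 in %)."""
    means = np.asarray(means, float)
    w = 1.0 / np.asarray(ses, float) ** 2          # fixed-effect weights
    k = len(means)
    mu_fe = np.sum(w * means) / np.sum(w)          # fixed-effect pooled mean
    q = np.sum(w * (means - mu_fe) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)             # between-study variance
    i2 = max(0.0, 100.0 * (q - (k - 1)) / q) if q > 0 else 0.0
    w_re = 1.0 / (1.0 / w + tau2)                  # random-effects weights
    mu_re = np.sum(w_re * means) / np.sum(w_re)
    return mu_re, tau2, i2

# Illustrative: two studies with equal precision but different means.
mu, tau2, i2 = dersimonian_laird([1.0, 3.0], [0.5, 0.5])
```

With identical study means the estimator collapses to the fixed-effect result (τ² = 0, I² = 0); the high I² reported in the review signals that between-study variance dominates.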
Affiliation(s)
- Marco Serafin
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Benedetta Baldini
- Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy.
- Federico Cabitza
- Department of Informatics, System and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Belgioioso 173, 20157, Milan, Italy
- Gianpaolo Carrafiello
- Department of Oncology and Hematology-Oncology, University of Milan, Via Sforza 35, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Giuseppe Baselli
- Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Massimo Del Fabbro
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Chiarella Sforza
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Alberto Caprioglio
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Gianluca M Tartaglia
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
12
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/06/2022] [Accepted: 12/27/2022] [Indexed: 12/29/2022]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of simultaneously accomplishing at least two tasks, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review the representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and demonstrating outstanding performance in many tasks, performance gaps remain in others, and we accordingly outline the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate that further research efforts are in high demand to escalate the performance of current models.
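The "parallel" architecture in the taxonomy above is commonly realized as hard parameter sharing: one trunk feeds several task heads, and training minimizes a weighted sum of per-task losses. A toy forward pass under that assumption (layer sizes, task names, and loss weights are all illustrative, not from the review):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
# Shared trunk plus one lightweight head per task.
W_shared = rng.standard_normal((16, 8)) * 0.1
W_seg = rng.standard_normal((8, 4)) * 0.1   # e.g. segmentation logits
W_cls = rng.standard_normal((8, 2)) * 0.1   # e.g. classification logits

def forward(x):
    """One shared representation feeds every task-specific head, so the
    tasks regularize each other through the common trunk."""
    h = relu(x @ W_shared)
    return {"seg": h @ W_seg, "cls": h @ W_cls}

def joint_loss(out, seg_target, cls_target, w_seg=1.0, w_cls=0.5):
    """Weighted sum of per-task losses; the weights are illustrative."""
    l_seg = np.mean((out["seg"] - seg_target) ** 2)
    l_cls = np.mean((out["cls"] - cls_target) ** 2)
    return w_seg * l_seg + w_cls * l_cls

# Toy batch of 3 inputs with 16 features each.
out = forward(np.zeros((3, 16)))
```

Cascaded, interacted, and hybrid variants differ in how features flow between the heads, but the shared-trunk-plus-joint-loss pattern is the common core.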
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia.
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.