1
Li H, Yang J, Xuan Z, Qu M, Wang Y, Feng C. A spatio-temporal graph convolutional network for ultrasound echocardiographic landmark detection. Med Image Anal 2024; 97:103272. [PMID: 39024972] [DOI: 10.1016/j.media.2024.103272]
Abstract
Landmark detection is a crucial task in medical image analysis, with applications across various fields. However, current methods struggle to accurately locate landmarks in medical images with blurred tissue boundaries caused by low image quality. In echocardiography in particular, sparse annotations make it challenging to predict landmarks with positional stability and temporal consistency. In this paper, we propose a spatio-temporal graph convolutional network tailored for echocardiographic landmark detection. We sample landmark labels from the left ventricular endocardium and pre-calculate their correlations to establish structural priors. Our approach involves a graph convolutional neural network that learns the interrelationships among landmarks, significantly enhancing landmark accuracy within ambiguous tissue contexts. Additionally, we integrate gated recurrent units to capture the temporal consistency of landmarks across consecutive frames, strengthening the model's resilience to unlabeled data. Validation on three echocardiography datasets demonstrates superior accuracy compared with alternative landmark detection models.
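The core operation this abstract describes, letting landmarks exchange information along pre-computed correlation edges, can be sketched as a single graph-convolution step. This is a minimal NumPy illustration, not the authors' model; the adjacency, features, and weights are all toy values.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize an adjacency matrix: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(H, A_norm, W):
    """One graph-convolution step: aggregate neighbor features, project, ReLU."""
    return np.maximum(A_norm @ H @ W, 0.0)

# toy example: 5 landmarks on a chain (e.g. sampled along the LV endocardium),
# each starting from a 2-D feature (here just its image coordinates)
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0   # neighboring landmarks are correlated
H = np.random.randn(5, 2)             # per-landmark input features
W = np.random.randn(2, 8)             # learnable projection (fixed here)
out = gcn_layer(H, normalize_adjacency(A), W)
print(out.shape)                      # (5, 8): one 8-D feature per landmark
```

Each landmark's output feature mixes its own input with those of its correlated neighbors, which is how structural priors can stabilize predictions where tissue boundaries are ambiguous.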
Affiliation(s)
- Honghe Li
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Jinzhu Yang
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Zhanfeng Xuan
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Mingjun Qu
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Yonghuai Wang
- Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, China
- Chaolu Feng
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
2
Zhang R, Mo H, Hu W, Jie B, Xu L, He Y, Ke J, Wang J. Super-resolution landmark detection networks for medical images. Comput Biol Med 2024; 182:109095. [PMID: 39236661] [DOI: 10.1016/j.compbiomed.2024.109095]
Abstract
Craniomaxillofacial (CMF) and nasal landmark detection are fundamental components of computer-assisted surgery. Medical landmark detection methods fall into regression-based and heatmap-based approaches, with heatmap-based methods forming one of the main methodology branches. These methods rely on high-resolution (HR) features, which carry more location information, to reduce the network error caused by sub-pixel localization. Previous studies extracted HR patches around each landmark from downsampled images via object detection and subsequently fed them into the network to obtain HR features; such complex multistage pipelines hurt accuracy. The network error caused by downsampling and upsampling operations during training, which interpolate low-resolution features to generate HR features or predicted heatmaps, remains significant. We propose a standard super-resolution landmark detection network (SRLD-Net) and a super-resolution UNet (SR-UNet) to reduce network error effectively. SRLD-Net uses pyramid pooling, pyramid fusion, and super-resolution fusion blocks to combine global prior knowledge with multi-scale local features; similarly, SR-UNet adopts pyramid pooling and super-resolution blocks. These components clearly improve the representation learning ability of our proposed methods. A super-resolution upsampling layer is then used to generate detailed predicted heatmaps. Our proposed networks were compared to state-of-the-art methods on craniomaxillofacial, nasal, and mandibular molar datasets, demonstrating better performance. The mean errors of 18 CMF, 6 nasal, and 14 mandibular landmarks are 1.39 ± 1.04, 1.31 ± 1.09, and 2.01 ± 4.33 mm, respectively. These results indicate that super-resolution methods have great potential in medical landmark detection tasks. This paper provides two effective heatmap-based landmark detection networks; the code is released at https://github.com/Runshi-Zhang/SRLD-Net.
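The heatmap representation at the heart of this family of methods is simple: each landmark is rendered as a Gaussian peak that the network learns to regress. A minimal sketch (toy image size and landmark position, not from the paper) also shows where the sub-pixel quantization error comes from:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """Render a landmark as a 2-D Gaussian peak; the network regresses this map."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

hm = gaussian_heatmap((64, 64), center=(20, 43), sigma=2.0)
# decoding by argmax recovers the landmark only to the nearest pixel --
# the sub-pixel residual is the quantization error that HR features reduce
peak = np.unravel_index(hm.argmax(), hm.shape)
print(peak)   # (20, 43)
```

A landmark at a non-integer location would be rounded by this decoding step, which is why the resolution of the predicted heatmap directly bounds the achievable accuracy.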
Affiliation(s)
- Runshi Zhang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Hao Mo
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Weini Hu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Bimeng Jie
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Lin Xu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Yang He
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Jia Ke
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
3
Spangenberg GW, Uddin F, Faber KJ, Langohr GDG. Automatic bicipital groove identification in arthritic humeri for preoperative planning: A Random Forest Classifier approach. Comput Biol Med 2024; 178:108653. [PMID: 38861894] [DOI: 10.1016/j.compbiomed.2024.108653]
Abstract
The bicipital groove is an important anatomical feature of the proximal humerus that needs to be identified during surgical planning for procedures such as shoulder arthroplasty and proximal humeral fracture reconstruction. Current algorithms for automatic identification prove ineffective in arthritic humeri due to the presence of osteophytes, reducing their usefulness for total shoulder arthroplasty. Our methodology uses a Random Forest Classifier (RFC) to automatically detect the bicipital groove on segmented computed tomography scans of humeri. We evaluated our model on two distinct test datasets: one comprising non-arthritic humeri and another with arthritic humeri characterized by significant osteophytes. Our model detected the bicipital groove with a mean absolute error of less than 1 mm on arthritic humeri, a significant improvement over the previous gold-standard approach. The bicipital groove was thus identified with a high degree of accuracy even in arthritic humeri. This model is open source and included in the Python package shoulder.
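The general pattern behind this kind of approach, classifying surface points by geometric features with a random forest, can be sketched with scikit-learn. This is a hypothetical toy, not the released shoulder package: the features (curvature, axial height) and their distributions are invented for illustration.

```python
# Hedged sketch: per-vertex Random Forest classification in the spirit of
# the paper, NOT the authors' released code or features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# toy features: (curvature, height along shaft axis) for 200 surface vertices
X_groove = rng.normal(loc=[-1.0, 0.5], scale=0.2, size=(100, 2))  # concave region
X_other = rng.normal(loc=[0.5, 0.0], scale=0.4, size=(100, 2))
X = np.vstack([X_groove, X_other])
y = np.array([1] * 100 + [0] * 100)   # 1 = vertex lies in the bicipital groove

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = clf.score(X, y)                 # training accuracy on well-separated toys
print(acc)
```

In practice the labeled groove vertices would then be reduced to a groove axis or landmark, and the model validated on held-out (including arthritic) specimens rather than training data.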
Affiliation(s)
- Gregory W Spangenberg
- Department of Mechanical Engineering, Western University, London, ON, Canada; The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada
- Fares Uddin
- The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada; Department of Surgery, Western University, London, ON, Canada
- Kenneth J Faber
- The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada; Department of Surgery, Western University, London, ON, Canada
- G Daniel G Langohr
- Department of Mechanical Engineering, Western University, London, ON, Canada; The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada
4
Huang Z, Zhao R, Leung FHF, Banerjee S, Lam KM, Zheng YP, Ling SH. Landmark Localization From Medical Images With Generative Distribution Prior. IEEE Trans Med Imaging 2024; 43:2679-2692. [PMID: 38421850] [DOI: 10.1109/tmi.2024.3371948]
Abstract
In medical image analysis, anatomical landmarks usually carry strong prior knowledge of their structural information. In this paper, we propose to promote medical landmark localization by modeling the underlying landmark distribution via normalizing flows. Specifically, we introduce the flow-based landmark distribution prior as a learnable objective function into a regression-based landmark localization framework. Moreover, we employ an integral operation to make the mapping from heatmaps to coordinates differentiable, further enhancing heatmap-based localization with the learned distribution prior. Our proposed Normalizing Flow-based Distribution Prior (NFDP) employs a straightforward backbone and non-problem-tailored architecture (i.e., ResNet18), which delivers high-fidelity outputs across three X-ray-based landmark localization datasets. Remarkably, NFDP does so with minimal additional computational burden, as the normalizing-flows module is detached from the framework during inference. Compared with existing techniques, our proposed NFDP provides a superior balance between prediction accuracy and inference speed, making it a highly efficient and effective approach. The source code of this paper is available at https://github.com/jacksonhzx95/NFDP.
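The "integral operation" mentioned here is commonly realized as a soft-argmax: instead of taking the hard argmax of a heatmap (non-differentiable), one takes the expected coordinate under a softmax of the heatmap. A minimal NumPy sketch with an invented peak location and temperature:

```python
import numpy as np

def soft_argmax(heatmap, beta=50.0):
    """Differentiable decoding: expected coordinate under softmax(beta * heatmap)."""
    h, w = heatmap.shape
    p = np.exp(beta * (heatmap - heatmap.max()))  # subtract max for stability
    p /= p.sum()                                  # softmax over all pixels
    ys, xs = np.mgrid[0:h, 0:w]
    return (p * ys).sum(), (p * xs).sum()         # expectation, not hard argmax

# a sharp Gaussian peak at (12, 30): soft-argmax lands on it with sub-pixel precision
ys, xs = np.mgrid[0:64, 0:64]
hm = np.exp(-((ys - 12) ** 2 + (xs - 30) ** 2) / 8.0)
cy, cx = soft_argmax(hm)
print(cy, cx)   # close to (12.0, 30.0)
```

Because the output is a weighted average, gradients flow through every pixel of the heatmap, which is what lets a learned distribution prior act on the predicted coordinates end to end.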
5
Sahlsten J, Järnstedt J, Jaskari J, Naukkarinen H, Mahasantipiya P, Charuakkra A, Vasankari K, Hietanen A, Sundqvist O, Lehtinen A, Kaski K. Deep learning for 3D cephalometric landmarking with heterogeneous multi-center CBCT dataset. PLoS One 2024; 19:e0305947. [PMID: 38917161] [PMCID: PMC11198780] [DOI: 10.1371/journal.pone.0305947]
Abstract
Cephalometric analysis is a critically important and common procedure prior to orthodontic treatment and orthognathic surgery. Recently, deep learning approaches have been proposed for automatic 3D cephalometric analysis based on landmarking from CBCT scans. However, these approaches have relied on uniform datasets from a single center or imaging device, without considering patient ethnicity. In addition, previous works have considered a limited number of clinically relevant cephalometric landmarks, and the approaches were computationally infeasible, both impairing integration into the clinical workflow. Here we analyze the clinical applicability of a lightweight deep learning neural network for fast localization of 46 clinically significant cephalometric landmarks using multi-center, multi-ethnic, and multi-device data consisting of 309 CBCT scans from Finnish and Thai patients. Our approach achieved a mean localization distance of 1.99 ± 1.55 mm for the Finnish cohort and 1.96 ± 1.25 mm for the Thai cohort. The error was clinically acceptable (≤ 2 mm) for 61.7% and 64.3% of the landmarks in the Finnish and Thai cohorts, respectively. Furthermore, the estimated landmarks were used to measure cephalometric characteristics successfully (≤ 2 mm or ≤ 2° error) in 85.9% of the Finnish and 74.4% of the Thai cases. Between the two patient cohorts, 33 of the landmarks and all cephalometric characteristics showed no statistically significant difference at the 0.05 level by the Mann-Whitney U test with Benjamini-Hochberg correction. Moreover, our method is computationally light, providing predictions with mean durations of 0.77 s and 2.27 s on a single machine with GPU and CPU computing, respectively. Our findings advocate for the inclusion of this method in clinical settings based on its technical feasibility and robustness across varied clinical datasets.
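The two headline metrics used throughout this literature, mean radial error and the fraction of landmarks within a clinical threshold (often called the success detection rate), reduce to a few lines. The toy coordinates below are invented for illustration:

```python
import numpy as np

def landmark_metrics(pred, gt, threshold=2.0):
    """Mean radial error (mm) and success rate at a clinical threshold."""
    d = np.linalg.norm(pred - gt, axis=1)       # per-landmark Euclidean error
    return d.mean(), (d <= threshold).mean()

# toy 3-D predictions vs. ground truth for three landmarks
pred = np.array([[1.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 1.5]])
gt = np.zeros((3, 3))
mre, sdr = landmark_metrics(pred, gt, threshold=2.0)
print(mre, sdr)   # errors 1.0, 3.0, 1.5 mm -> MRE ~1.83 mm, 2/3 within 2 mm
```

The ≤ 2 mm threshold quoted in the abstract corresponds to `sdr` here; per-cohort figures come from computing these metrics over each cohort's scans separately.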
Affiliation(s)
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Jorma Järnstedt
- Department of Radiology, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland; Faculty of Medicine and Health Technology, University of Tampere, Tampere, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Phattaranant Mahasantipiya
- Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand; Division of Oral and Maxillofacial Radiology, Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Arnon Charuakkra
- Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand; Division of Oral and Maxillofacial Radiology, Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Krista Vasankari
- Department of Oral Diseases, Tampere University Hospital, Tampere, Finland
- Antti Lehtinen
- Department of Radiology, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland; Faculty of Medicine and Health Technology, University of Tampere, Tampere, Finland
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland; The Alan Turing Institute, British Library, London, United Kingdom
6
Elkhill C, Liu J, Linguraru MG, LeBeau S, Khechoyan D, French B, Porras AR. Geometric learning and statistical modeling for surgical outcomes evaluation in craniosynostosis using 3D photogrammetry. Comput Methods Programs Biomed 2023; 240:107689. [PMID: 37393741] [PMCID: PMC10527531] [DOI: 10.1016/j.cmpb.2023.107689]
Abstract
BACKGROUND AND OBJECTIVE Accurate and repeatable detection of craniofacial landmarks is crucial for automated quantitative evaluation of head development anomalies. Since traditional imaging modalities are discouraged in pediatric patients, 3D photogrammetry has emerged as a popular and safe imaging alternative to evaluate craniofacial anomalies. However, traditional image analysis methods are not designed to operate on unstructured image data representations such as 3D photogrammetry.
METHODS We present a fully automated pipeline to identify craniofacial landmarks in real time, and we use it to assess the head shape of patients with craniosynostosis using 3D photogrammetry. To detect craniofacial landmarks, we propose a novel geometric convolutional neural network based on Chebyshev polynomials to exploit the point connectivity information in 3D photogrammetry and quantify multi-resolution spatial features. We propose a landmark-specific trainable scheme that aggregates the multi-resolution geometric and texture features quantified at every vertex of a 3D photogram. Then, we embed a new probabilistic distance regressor module that leverages the integrated features at every point to predict landmark locations without assuming correspondences with specific vertices in the original 3D photogram. Finally, we use the detected landmarks to segment the calvaria from the 3D photograms of children with craniosynostosis, and we derive a new statistical index of head shape anomaly to quantify head shape improvements after surgical treatment.
RESULTS We achieved an average error of 2.74 ± 2.70 mm identifying Bookstein Type I craniofacial landmarks, which is a significant improvement compared to other state-of-the-art methods. Our experiments also demonstrated a high robustness to spatial resolution variability in the 3D photograms. Finally, our head shape anomaly index quantified a significant reduction of head shape anomalies as a consequence of surgical treatment.
CONCLUSION Our fully automated framework provides real-time craniofacial landmark detection from 3D photogrammetry with state-of-the-art accuracy. In addition, our new head shape anomaly index can quantify significant head phenotype changes and can be used to quantitatively evaluate surgical treatment in patients with craniosynostosis.
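The Chebyshev-polynomial graph convolution mentioned in the methods rests on a simple recurrence over the (rescaled) graph Laplacian: T_0 = I, T_1 = L, T_k = 2 L T_{k-1} - T_{k-2}. A minimal NumPy sketch on a toy 4-vertex mesh (the graph, features, and filter coefficients are invented, and the Laplacian rescaling assumes a maximum eigenvalue near 2):

```python
import numpy as np

def chebyshev_filter(X, L, thetas):
    """Spectral graph filter sum_k theta_k T_k(L_scaled) X, using the
    Chebyshev recurrence T_k = 2 L T_{k-1} - T_{k-2}."""
    n = L.shape[0]
    L_s = L - np.eye(n)                        # rescale spectrum toward [-1, 1]
    Tx = [X, L_s @ X]                          # T_0 X, T_1 X
    for _ in range(2, len(thetas)):
        Tx.append(2.0 * L_s @ Tx[-1] - Tx[-2])
    return sum(t * tx for t, tx in zip(thetas, Tx))

# toy mesh: 4 vertices on a cycle; normalized Laplacian of the ring graph
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(4) - d_inv_sqrt @ A @ d_inv_sqrt
X = np.random.randn(4, 3)                      # per-vertex features
out = chebyshev_filter(X, L, thetas=[0.5, 0.3, 0.2])
print(out.shape)   # (4, 3)
```

A K-term filter aggregates information from each vertex's K-hop neighborhood, which is how such networks capture multi-resolution spatial features on an unstructured mesh.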
Affiliation(s)
- Connor Elkhill
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA; Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- Jiawei Liu
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, 7144 13th Pl NW, Washington, DC 20012, USA; Departments of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Ross Hall, 2300 Eye Street, NW, Washington, DC 20037, USA
- Scott LeBeau
- Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- David Khechoyan
- Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Surgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- Brooke French
- Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Surgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
- Antonio R Porras
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA; Department of Pediatric Plastic and Reconstructive Surgery, Children's Hospital Colorado, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Surgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Biomedical Informatics, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA; Department of Pediatrics and Department of Neurosurgery, School of Medicine, University of Colorado Anschutz Medical Campus, 13123 E 16th Ave, Aurora, CO 80045, USA
7
Bonny T, Al Nassan W, Obaideen K, Al Mallahi MN, Mohammad Y, El-damanhoury HM. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023; 12:1179. [PMID: 37942018] [PMCID: PMC10630586] [DOI: 10.12688/f1000research.140204.1]
Abstract
Artificial Intelligence (AI) technologies significantly impact various sectors, including healthcare, engineering, the sciences, and smart cities, and have the potential to improve the quality of patient care and treatment outcomes while minimizing the risk of human error. AI is transforming the dental industry as well: it is used to diagnose dental diseases and provide treatment recommendations, and dental professionals increasingly rely on it for diagnosis, clinical decision-making, treatment planning, and prognosis prediction across ten dental specialties. One of the most significant advantages of AI in dentistry is its ability to analyze vast amounts of data quickly and accurately, providing dental professionals with valuable insights that enhance their decision-making. The purpose of this paper is to survey the artificial intelligence algorithms most frequently used in dentistry and assess how well they perform in diagnosis, clinical decision-making, treatment, and prognosis prediction in ten dental specialties: dental public health, endodontics, oral and maxillofacial surgery, oral medicine and pathology, oral and maxillofacial radiology, orthodontics and dentofacial orthopedics, pediatric dentistry, periodontics, prosthodontics, and digital dentistry in general. We also discuss the pros and cons of using AI in each specialty. Finally, we present the limitations of AI in dentistry that prevent it from replacing dental personnel; dentists should consider AI a complementary benefit rather than a threat.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Khaled Obaideen
- Sustainable Energy and Power Systems Research Centre, RISE, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Maryam Nooman Al Mallahi
- Department of Mechanical and Aerospace Engineering, United Arab Emirates University, Al Ain City, Abu Dhabi, 27272, United Arab Emirates
- Yara Mohammad
- College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
- Hatem M. El-damanhoury
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates
8
Yao K, Xie Y, Xia L, Wei S, Yu W, Shen G. The Reliability of Three-Dimensional Landmark-Based Craniomaxillofacial and Airway Cephalometric Analysis. Diagnostics (Basel) 2023; 13:2360. [PMID: 37510103] [PMCID: PMC10377994] [DOI: 10.3390/diagnostics13142360]
Abstract
Cephalometric analysis is a standard diagnostic tool in orthodontics and craniofacial surgery. Because conventional 2D cephalometry is limited and susceptible to analysis bias, a more reliable and user-friendly three-dimensional system covering hard tissue, soft tissue, and airways is needed in clinical practice. We launched this study to develop such a system based on CT data and landmarks, and to determine whether the data labeled through our process are of high quality and whether the soft tissue and airway data derived from CT scans are reliable. We enrolled 15 patients (seven males, eight females, 26.47 ± 3.44 years old) diagnosed with either non-syndromic dento-maxillofacial deformities or OSDB to evaluate the intra- and inter-examiner reliability of our system. A total of 126 landmarks were adopted and divided into five sets by region: 28 cranial points, 25 mandibular points, 20 teeth points, 48 soft tissue points, and 6 airway points. All landmarks were labeled by two experienced clinical practitioners, each of whom labeled all the data twice at least one month apart. Furthermore, 78 parameters in three sets were calculated: 42 skeletal parameters (23 angular and 19 linear), 27 soft tissue parameters (9 angular and 18 linear), and 9 upper airway parameters (2 linear, 4 areal, and 3 voluminal). The intraclass correlation coefficient (ICC) was used to evaluate the inter- and intra-examiner reliability of landmark coordinate values and measurement parameters. The overwhelming majority of landmarks showed excellent intra- and inter-examiner reliability. Among skeletal parameters, angular parameters were more reliable; among soft tissue parameters, linear parameters performed better. The intra- and inter-examiner ICCs of airway parameters indicated excellent reliability. In summary, the data labeled through our process are of high quality, and the soft tissue and airway data derived from CT scans are reliable. Landmarks that are not commonly used in clinical practice may require additional attention during labeling, as they are prone to poor reliability, and measurement parameters with values close to 0 tend to have low reliability. We believe this three-dimensional cephalometric system could reach clinical application.
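The ICC used above has several variants; one common single-rater, consistency form is ICC(3,1) from a two-way ANOVA decomposition. A minimal NumPy sketch with invented measurements (the abstract does not state which ICC variant was used, so treat the choice here as an assumption):

```python
import numpy as np

def icc_3_1(x):
    """Two-way mixed, consistency, single-rater ICC(3,1) for an
    (n_targets, k_raters) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between targets
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols # residual
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# two raters who differ only by a constant offset are perfectly consistent
rater1 = np.array([10.0, 12.5, 9.8, 14.1, 11.0])
x = np.column_stack([rater1, rater1 + 0.5])
print(round(icc_3_1(x), 6))   # 1.0
```

Note that a consistency ICC ignores a systematic offset between raters; an absolute-agreement variant (ICC(2,1)) would penalize it, which matters when the labeling protocol must be interchangeable across examiners.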
Affiliation(s)
- Kan Yao
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Yilun Xie
- Department of Stomatology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Liang Xia
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Silong Wei
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Wenwen Yu
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
- Guofang Shen
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
9
Fan W, Zhang J, Wang N, Li J, Hu L. The Application of Deep Learning on CBCT in Dentistry. Diagnostics (Basel) 2023; 13:2056. [PMID: 37370951] [DOI: 10.3390/diagnostics13122056]
Abstract
Cone beam computed tomography (CBCT) has become an essential tool in modern dentistry, allowing dentists to analyze the relationship between teeth and the surrounding tissues. However, traditional manual analysis can be time-consuming, and its accuracy depends on the user's proficiency. To address these limitations, deep learning (DL) systems have been integrated into CBCT analysis to improve accuracy and efficiency. Numerous DL models have been developed for tasks such as automatic diagnosis, segmentation, and classification of teeth, the inferior alveolar nerve, bone, and airway, as well as preoperative planning. All research articles summarized were retrieved from PubMed, IEEE, Google Scholar, and Web of Science up to December 2022. Many studies have demonstrated that deep learning applied to CBCT examination in dentistry has achieved significant progress and that its accuracy in radiology image analysis has reached the level of clinicians, although in some fields it still needs improvement. Furthermore, ethical issues and differences between CBCT devices may hinder its widespread use. DL models have the potential to be used clinically as medical decision-making aids, and the combination of DL and CBCT can greatly reduce the workload of image reading. This review provides an up-to-date overview of current applications of DL to CBCT images in dentistry, highlighting its potential and suggesting directions for future research.
Affiliation(s)
- Wenjie Fan
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
| | - Jiaqi Zhang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
| | - Nan Wang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
| | - Jia Li
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
| | - Li Hu
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
10
Serafin M, Baldini B, Cabitza F, Carrafiello G, Baselli G, Del Fabbro M, Sforza C, Caprioglio A, Tartaglia GM. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis. LA RADIOLOGIA MEDICA 2023; 128:544-555. [PMID: 37093337 PMCID: PMC10181977 DOI: 10.1007/s11547-023-01629-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 03/28/2023] [Indexed: 04/25/2023]
Abstract
OBJECTIVES The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. METHODS PubMed/Medline, IEEE Xplore, Scopus and arXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric data images suitable for 3D landmarking (Problem), a minimum of five automated landmarks placed by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported, as outcomes, mean values and standard deviations of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. RESULTS The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, whereas 11 studies were used for the meta-analysis. The overall random-effects model revealed a mean error of 2.44 mm with high heterogeneity (I2 = 98.13%, τ2 = 1.018, p-value < 0.001); risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p-value = 0.012). CONCLUSION Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed, and landmark annotation accuracy has improved.
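The pooled error and I2 above come from a random-effects model. As a minimal sketch of how such a pooled mean and heterogeneity statistics are obtained (DerSimonian-Laird estimator; the study values below are hypothetical, not taken from the review):

```python
import numpy as np

def random_effects_pool(means, sds, ns):
    """DerSimonian-Laird random-effects pooling of per-study mean
    errors. Returns (pooled mean, tau^2, I^2 in %)."""
    means, sds, ns = (np.asarray(x, float) for x in (means, sds, ns))
    v = sds ** 2 / ns                     # within-study variance of each mean
    w = 1.0 / v                           # fixed-effect weights
    mu_fe = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - mu_fe) ** 2)  # Cochran's Q
    df = len(means) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)         # between-study variance
    w_re = 1.0 / (v + tau2)               # random-effects weights
    mu_re = np.sum(w_re * means) / np.sum(w_re)
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return mu_re, tau2, i2

# Hypothetical per-study mean landmark errors (mm), SDs, and sample sizes
mu, tau2, i2 = random_effects_pool([1.2, 2.1, 3.0], [0.5, 0.6, 0.7], [30, 40, 50])
```

A large I2 (as the 98.13% reported here) indicates that most of the observed variation across studies reflects genuine between-study differences rather than sampling error.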
Affiliation(s)
- Marco Serafin
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
| | - Benedetta Baldini
- Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy.
| | - Federico Cabitza
- Department of Informatics, System and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Belgioioso 173, 20157, Milan, Italy
| | - Gianpaolo Carrafiello
- Department of Oncology and Hematology-Oncology, University of Milan, Via Sforza 35, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
| | - Giuseppe Baselli
- Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy
| | - Massimo Del Fabbro
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
| | - Chiarella Sforza
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
| | - Alberto Caprioglio
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
| | - Gianluca M Tartaglia
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
11
Xu M, Liu B, Luo Z, Ma H, Sun M, Wang Y, Yin N, Tang X, Song T. Using a New Deep Learning Method for 3D Cephalometry in Patients With Cleft Lip and Palate. J Craniofac Surg 2023:00001665-990000000-00651. [PMID: 36944601 DOI: 10.1097/scs.0000000000009299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 12/28/2022] [Indexed: 03/23/2023] Open
Abstract
Deep learning algorithms for automatic 3-dimensional (3D) cephalometric landmarking in people without craniomaxillofacial deformities have achieved good results. However, there has been no previous report on cleft lip and palate. The purpose of this study is to apply a new deep learning method, based on a 3D point cloud graph convolutional neural network, to predict and locate landmarks in patients with cleft lip and palate based on the relationships between points. The authors used the PointNet++ model to investigate automatic 3D cephalometric landmarking. The mean distance error of the center coordinate position and the success detection rate (SDR) were used to evaluate the accuracy of systematic labeling. A total of 150 patients were enrolled. The mean distance error for all 27 landmarks was 1.33 mm; 9 landmarks (33%) showed SDRs at 2 mm over 90%, and 3 landmarks (11%) showed SDRs at 2 mm under 70%. Automatic 3D cephalometric landmarking took 16 seconds per dataset. In summary, our training sets were derived from computed tomography scans of patients with cleft lip with/without cleft palate to achieve accurate results. The 3D cephalometry system based on the graph convolutional neural network algorithm may be suitable for cleft lip and palate cases. More accurate results may be obtained if the cleft lip and palate training set is expanded in the future.
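The two evaluation metrics used here, mean distance error and SDR, are straightforward to compute from predicted and reference landmark coordinates. A minimal sketch (not the paper's code):

```python
import numpy as np

def landmark_metrics(pred, ref, threshold_mm=2.0):
    """Mean distance error (mm) and success detection rate (SDR):
    the fraction of landmarks whose 3D Euclidean error falls
    below threshold_mm."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(ref, float), axis=-1)
    return d.mean(), (d < threshold_mm).mean()
```

With the 2 mm threshold used in the paper, a landmark counts as successfully detected when its prediction lies within 2 mm of the manual annotation.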
Affiliation(s)
- Meng Xu
- Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
| | - Bingyang Liu
- Maxillofacial Surgery Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing
| | - Zhaoyang Luo
- HaiChuang Future Medical Technology Co. Ltd, Hangzhou
| | - Hengyuan Ma
- Digital Technology Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Min Sun
- Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
| | - Yongqian Wang
- Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
| | - Ningbei Yin
- Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
| | - Xiaojun Tang
- Maxillofacial Surgery Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing
| | - Tao Song
- Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
12
Torosdagli N, Anwar S, Verma P, Liberton DK, Lee JS, Han WW, Bagci U. Relational reasoning network for anatomical landmarking. J Med Imaging (Bellingham) 2023; 10:024002. [PMID: 36891503 PMCID: PMC9986769 DOI: 10.1117/1.jmi.10.2.024002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023] Open
Abstract
Purpose We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. Toward this, we propose a simple yet efficient deep network architecture, called relational reasoning network (RRN), to accurately learn the local and global relations among the landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones. Approach The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats landmarking as a data imputation problem in which the predicted landmarks are considered missing. Results We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With fourfold cross-validation, we obtained an average root mean squared error of < 2 mm per landmark. The proposed RRN revealed unique relationships among the landmarks that help infer the informativeness of the landmark points. The system identifies missing landmark locations accurately even when severe pathology or deformations are present in the bones. Conclusions Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without the need for explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) can easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning.
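The imputation framing, predicting missing landmarks from the relations they hold with known ones, can be illustrated with a deliberately simple stand-in: a linear relation fitted by least squares on synthetic coordinates. The real RRN learns nonlinear relations via dense-block units; everything below (dimensions, data) is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_known, n_missing = 200, 5, 3  # landmarks, flattened to xyz

# Synthetic training data with an exact linear relation between
# the known landmarks and the "missing" ones to be imputed.
known = rng.normal(size=(n_cases, n_known * 3))
relation = rng.normal(size=(n_known * 3, n_missing * 3))
missing = known @ relation

# Learn the relation from data by least squares.
w, *_ = np.linalg.lstsq(known, missing, rcond=None)

# Impute the missing landmarks for an unseen case.
case = rng.normal(size=(1, n_known * 3))
imputed = case @ w
rmse = float(np.sqrt(np.mean((imputed - case @ relation) ** 2)))  # ~0 on this toy data
```

Because the toy relation is exactly linear, the fitted map recovers it and the imputation error is numerically zero; real anatomy requires the learned nonlinear relations the paper describes.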
Affiliation(s)
| | - Syed Anwar
- University of Central Florida, Orlando, Florida, United States
- Children’s National Hospital, Sheikh Zayed Institute, Washington, District of Columbia, United States
- George Washington University, Washington, District of Columbia, United States
| | - Payal Verma
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
| | - Denise K. Liberton
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
| | - Janice S. Lee
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
| | - Wade W. Han
- Boston Children’s Hospital, Harvard Medical School, Department of Otolaryngology - Head and Neck Surgery, Boston, Massachusetts, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
| | - Ulas Bagci
- University of Central Florida, Orlando, Florida, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
- Northwestern University, Departments of Radiology, BME, and ECE, Machine and Hybrid Intelligence Lab, Chicago, Illinois, United States
13
Lo M, Mariconti E, Nakhaeizadeh S, Morgan RM. Preparing computed tomography images for machine learning in forensic and virtual anthropology. Forensic Sci Int Synerg 2023; 6:100319. [PMID: 36852172 PMCID: PMC9958428 DOI: 10.1016/j.fsisyn.2023.100319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 02/02/2023] [Accepted: 02/06/2023] [Indexed: 02/11/2023]
Affiliation(s)
- Martin Lo
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK; UCL Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK. Corresponding author.
| | - Enrico Mariconti
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
| | - Sherry Nakhaeizadeh
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK; UCL Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
| | - Ruth M. Morgan
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK; UCL Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
14
Huang Y, Jones CK, Zhang X, Johnston A, Waktola S, Aygun N, Witham TF, Bydon A, Theodore N, Helm PA, Siewerdsen JH, Uneri A. Multi-perspective region-based CNNs for vertebrae labeling in intraoperative long-length images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 227:107222. [PMID: 36370597 DOI: 10.1016/j.cmpb.2022.107222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 10/31/2022] [Accepted: 11/02/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE Effective aggregation of intraoperative x-ray images that capture the patient anatomy from multiple view angles has the potential to enable and improve automated image analysis that can be readily performed during surgery. We present multi-perspective region-based neural networks that leverage knowledge of the imaging geometry for automatic vertebrae labeling in Long-Film images, a novel tomographic imaging modality with an extended field of view for spine imaging. METHOD A multi-perspective network architecture was designed to exploit small view-angle disparities produced by a multi-slot collimator and consolidate information from overlapping image regions. A second network incorporates large view-angle disparities to jointly perform labeling on images from multiple views (viz., AP and lateral). A recurrent module incorporates contextual information and enforces anatomical order for the detected vertebrae. The three modules are combined to form the multi-view multi-slot (MVMS) network for labeling vertebrae using images from all available perspectives. The network was trained on images synthesized from 297 CT images and tested on 50 AP and 50 lateral Long-Film images acquired from 13 cadaveric specimens. Labeling performance of the multi-perspective networks was evaluated with respect to the number of vertebra appearances and the presence of surgical instrumentation. RESULTS The MVMS network achieved an F1 score of >96% and an average vertebral localization error of 3.3 mm, with 88.3% labeling accuracy on both AP and lateral images (15.5% and 35.0% higher than conventional Faster R-CNN on AP and lateral views, respectively). Aggregating multiple appearances of the same vertebra using the multi-slot network significantly improved labeling accuracy (p < 0.05). Using the multi-view network, labeling accuracy on the more challenging lateral views was improved to the same level as that of the AP views. The approach demonstrated robustness to the presence of surgical instrumentation, commonly encountered in intraoperative images, and achieved comparable performance in images with and without instrumentation (88.9% vs. 91.2% labeling accuracy). CONCLUSION The MVMS network demonstrated effective multi-perspective aggregation, providing a means for accurate, automated vertebrae labeling during spine surgery. The algorithms may be generalized to other imaging tasks and modalities that involve multiple views with view-angle disparities (e.g., bi-plane radiography). Predicted labels can help avoid adverse events during surgery (e.g., wrong-level surgery), establish correspondence with labels in preoperative modalities to facilitate image registration, and enable automated measurement of spinal alignment metrics for intraoperative assessment of spinal curvature.
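The F1 score reported for detection combines precision and recall over per-vertebra detections; a minimal sketch of the standard formula (not the paper's evaluation code):

```python
def detection_f1(tp, fp, fn):
    """F1 score from counts of true-positive, false-positive, and
    false-negative detections: the harmonic mean of precision
    (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, 9 correct detections with 1 spurious and 1 missed vertebra give precision = recall = 0.9 and thus F1 = 0.9.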
Affiliation(s)
- Y Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
| | - C K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore MD, United States
| | - X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
| | - A Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
| | - S Waktola
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
| | - N Aygun
- Department of Radiology, Johns Hopkins Medicine, Baltimore MD, United States
| | - T F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
| | - A Bydon
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
| | - N Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
| | - P A Helm
- Medtronic, Littleton MA, United States
| | - J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States; Department of Computer Science, Johns Hopkins University, Baltimore MD, United States; Department of Radiology, Johns Hopkins Medicine, Baltimore MD, United States; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston TX, United States
| | - A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States.
15
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac840f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 07/25/2022] [Indexed: 11/11/2022]
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operations, and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, and MICCAI. Among them, 65 references are on automatic segmentation, 15 on automatic landmark detection, and eight on automatic registration. The review first presents an overview of deep learning in MIC. The applications of deep learning methods are then systematically summarized according to clinical needs and organized into segmentation, landmark detection, and registration of head and neck medical images. In segmentation, the focus is on the automatic segmentation of high-risk organs, head and neck tumors, skull structures, and teeth, including an analysis of their advantages, differences, and shortcomings. In landmark detection, the focus is on landmark detection in cephalometric and craniomaxillofacial images, with an analysis of their advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, the shortcomings and future development directions of these methods are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers, and doctors engaged in medical image analysis for head and neck surgery.
16
Dot G, Schouman T, Chang S, Rafflenbeul F, Kerbrat A, Rouch P, Gajny L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J Dent Res 2022; 101:1380-1387. [PMID: 35982646 DOI: 10.1177/00220345221112333] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The increasing use of 3-dimensional (3D) imaging by orthodontists and maxillofacial surgeons to assess complex dentofacial deformities and plan orthognathic surgeries implies a critical need for 3D cephalometric analysis. Although promising methods have been suggested to localize 3D landmarks automatically, concerns about robustness and generalizability restrain their clinical use. Consequently, highly trained operators remain needed to perform manual landmarking. In this retrospective diagnostic study, we aimed to train and evaluate a deep learning (DL) pipeline based on SpatialConfiguration-Net for automatic localization of 3D cephalometric landmarks on computed tomography (CT) scans. A retrospective sample of consecutive presurgical CT scans was randomly distributed between a training/validation set (n = 160) and a test set (n = 38). The reference data consisted of 33 landmarks, manually localized once by 1 operator (n = 178) or twice by 3 operators (n = 20, test set only). After inference on the test set, 1 CT scan showed "very low" confidence level predictions; we excluded it from the overall analysis but still assessed and discussed the corresponding results. The model performance was evaluated by comparing the predictions with the reference data; the outcome set included localization accuracy, cephalometric measurements, and comparison to manual landmarking reproducibility. On the hold-out test set, the mean localization error was 1.0 ± 1.3 mm, while success detection rates for 2.0, 2.5, and 3.0 mm were 90.4%, 93.6%, and 95.4%, respectively. Mean errors were -0.3 ± 1.3° and -0.1 ± 0.7 mm for angular and linear measurements, respectively. When compared to manual reproducibility, the measurements were within the Bland-Altman 95% limits of agreement for 91.9% and 71.8% of skeletal and dentoalveolar variables, respectively. To conclude, while our DL method still requires improvement, it provided highly accurate 3D landmark localization on a challenging test set, with reliability for skeletal evaluation on par with what clinicians obtain.
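The Bland-Altman 95% limits of agreement used for the reproducibility comparison are the mean difference between two sets of paired measurements plus or minus 1.96 standard deviations of the differences. A minimal sketch (not the study's code):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman 95% limits of agreement between two sets of
    paired measurements: bias +/- 1.96 * SD of the differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of paired differences
    return bias - 1.96 * sd, bias + 1.96 * sd
```

An automated measurement is then judged interchangeable with manual landmarking when its differences from the reference fall within these limits.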
Affiliation(s)
- G Dot
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Universite Paris Cite, AP-HP, Hopital Pitie Salpetriere, Service de Medecine Bucco-Dentaire, Paris, France
| | - T Schouman
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
| | - S Chang
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
| | - F Rafflenbeul
- Department of Dentofacial Orthopedics, Faculty of Dental Surgery, Strasbourg University, Strasbourg, France
| | - A Kerbrat
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
| | - P Rouch
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
| | - L Gajny
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
17
Thurzo A, Kosnáčová HS, Kurilová V, Kosmeľ S, Beňuš R, Moravanský N, Kováč P, Kuracinová KM, Palkovič M, Varga I. Use of Advanced Artificial Intelligence in Forensic Medicine, Forensic Anthropology and Clinical Anatomy. Healthcare (Basel) 2021; 9:1545. [PMID: 34828590 PMCID: PMC8619074 DOI: 10.3390/healthcare9111545] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 11/10/2021] [Accepted: 11/10/2021] [Indexed: 12/11/2022] Open
Abstract
Three-dimensional convolutional neural networks (3D CNNs) in artificial intelligence (AI) are potent in image processing and recognition, using deep learning to perform generative and descriptive tasks. Compared with their predecessors, the advantage of CNNs is that they automatically detect important features without any human supervision. A 3D CNN extracts features in three dimensions, where the input is a 3D volume or a sequence of 2D pictures, e.g., slices in a cone-beam computed tomography (CBCT) scan. The main aim was to bridge interdisciplinary cooperation between forensic medical experts and deep learning engineers, emphasizing the engagement of clinical forensic experts who may have only basic knowledge of advanced artificial intelligence techniques but an interest in implementing them to advance forensic research. This paper introduces a novel workflow for 3D CNN analysis of full-head CBCT scans. The authors explore current methods and design customized 3D CNN applications for forensic research from five perspectives: (1) sex determination, (2) biological age estimation, (3) 3D cephalometric landmark annotation, (4) growth vector prediction, and (5) facial soft-tissue estimation from the skull and vice versa. In conclusion, 3D CNN applications can be a watershed moment in forensic medicine, leading to unprecedented improvement of forensic analysis workflows based on 3D neural networks.
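The basic operation a 3D CNN applies to a volume, sliding a 3D kernel across all three axes, can be sketched in plain NumPy. This is a naive, single-channel, valid-mode pass for illustration only; DL frameworks implement it far more efficiently, with learned kernels:

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation, as in CNN
    frameworks) of a single-channel volume with a single kernel."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Weighted sum over the kernel-sized sub-volume
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out
```

Applied to a CBCT scan, each output voxel summarizes a small 3D neighborhood; stacking such layers with learned kernels is what lets the network detect features across slices rather than within a single 2D image.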
Affiliation(s)
- Andrej Thurzo
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia;
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia; (R.B.); (N.M.); (P.K.)
| | - Helena Svobodová Kosnáčová
- Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia;
- Department of Genetics, Cancer Research Institute, Biomedical Research Center, Slovak Academy Sciences, Dúbravská Cesta 9, 84505 Bratislava, Slovakia
| | - Veronika Kurilová
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 81219 Bratislava, Slovakia;
| | - Silvester Kosmeľ
- Deep Learning Engineering Department at Cognexa, Faculty of Informatics and Information Technologies, Slovak University of Technology, Ilkovičova 2, 84216 Bratislava, Slovakia;
| | - Radoslav Beňuš
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia; (R.B.); (N.M.); (P.K.)
- Department of Anthropology, Faculty of Natural Sciences, Comenius University in Bratislava, Mlynská dolina Ilkovičova 6, 84215 Bratislava, Slovakia
| | - Norbert Moravanský
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia; (R.B.); (N.M.); (P.K.)
- Institute of Forensic Medicine, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
| | - Peter Kováč
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia; (R.B.); (N.M.); (P.K.)
- Department of Criminal Law and Criminology, Faculty of Law Trnava University, Kollárova 10, 91701 Trnava, Slovakia
| | - Kristína Mikuš Kuracinová
- Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia; (K.M.K.); (M.P.)
| | - Michal Palkovič
- Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia; (K.M.K.); (M.P.)
- Forensic Medicine and Pathological Anatomy Department, Health Care Surveillance Authority (HCSA), Sasinkova 4, 81108 Bratislava, Slovakia
| | - Ivan Varga
- Institute of Histology and Embryology, Faculty of Medicine, Comenius University in Bratislava, 81372 Bratislava, Slovakia;