1. Fang X, Deng HH, Kuang T, Xu X, Lee J, Gateno J, Yan P. Patient-specific reference model estimation for orthognathic surgical planning. Int J Comput Assist Radiol Surg 2024. PMID: 38869779. DOI: 10.1007/s11548-024-03123-0.
Abstract
PURPOSE Accurate estimation of reference bony shape models is fundamental for orthognathic surgical planning. Existing methods to derive this model fall into two types: one estimates a deformation field to correct the patient's deformed jaw, often introducing distortions into the predicted reference model; the other derives the reference model as a linear combination of normal subjects' landmarks/vertices, but overlooks the intricate nonlinear relationships between subjects, compromising the model's precision and quality. METHODS We created a self-supervised learning framework to estimate the reference model. The core of this framework is a deep query network that estimates similarity scores between the patient's midface and those of normal subjects in a high-dimensional space. It then aggregates the high-dimensional features of these subjects and projects them back to 3D structures, ultimately producing a patient-specific reference model. RESULTS Our approach was trained on a dataset of 51 normal subjects and tested on 30 patient subjects. Assessment against the actual post-operative bone revealed a mean Chamfer distance error of 2.25 mm and an average surface distance error of 2.30 mm across the patient subjects. CONCLUSION Our proposed method emphasizes the correlation between patients and normal subjects in a high-dimensional space, facilitating the generation of patient-specific reference models. Both qualitative and quantitative results demonstrate its superiority over current state-of-the-art methods in reference model estimation.
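The Chamfer distance used to score the estimated reference models against post-operative bone can be sketched as follows; this is a generic brute-force illustration in Python/NumPy, not the authors' implementation:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    Averages, for each point, the distance to its nearest neighbor in
    the other set, then averages the two directions. Brute-force O(N*M);
    real pipelines would use a KD-tree for large meshes.
    """
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float((d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0)

# Identical point sets give zero distance
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(pts, pts))  # → 0.0
```

For surfaces offset by a constant amount, the result equals that offset, which makes the reported 2.25 mm directly interpretable as an average surface mismatch.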
Affiliation(s)
- Xi Fang: Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Hannah H Deng: Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA
- Tianshu Kuang: Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA
- Xuanang Xu: Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Jungwook Lee: Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Jaime Gateno: Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, 10021, USA
- Pingkun Yan: Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
2. Ma C, Gu Y, Wang Z. TriConvUNeXt: A Pure CNN-Based Lightweight Symmetrical Network for Biomedical Image Segmentation. J Imaging Inform Med 2024. PMID: 38653912. DOI: 10.1007/s10278-024-01116-8.
Abstract
Biomedical image segmentation is essential in clinical practice, offering critical insights for accurate diagnosis and strategic treatment approaches. Self-attention-based networks have achieved competitive performance in both natural language processing and computer vision, but their computational cost has limited their popularity in practical applications. Recent studies of Convolutional Neural Networks (CNNs) explore linear functions within modified CNN layers, demonstrating that pure CNN-based networks can still achieve competitive results against Vision Transformers (ViTs) in biomedical image segmentation, with fewer parameters. The modified CNN, i.e., the depthwise CNN, however, leaves features separated along the channel dimension and prevents feature extraction from the perspective of channel interaction. To further explore the feature learning power of modified CNNs for biomedical image segmentation, we design a lightweight multi-scale convolutional network block (MSConvNeXt) for a U-shaped symmetrical network. Specifically, a network block consisting of a depthwise CNN, a deformable CNN, and a dilated CNN is proposed to capture semantic feature information effectively at low computational cost. Furthermore, a channel shuffling operation is proposed to dynamically promote efficient feature fusion among the feature maps. The block is deployed within a U-shaped symmetrical encoder-decoder network, named TriConvUNeXt. The proposed network is validated on a public benchmark dataset with a comprehensive evaluation of both computational cost and segmentation performance against 13 baseline methods. Specifically, TriConvUNeXt achieves a Dice coefficient 1% higher than UNet and TransUNet, with 81% and 97% lower computational cost, respectively. The implementation of TriConvUNeXt is publicly accessible at https://github.com/ziyangwang007/TriConvUNeXt .
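The channel shuffling operation the block relies on can be illustrated with the standard reshape-transpose-reshape trick (in the spirit of ShuffleNet); a hypothetical NumPy sketch, not the authors' code:

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Shuffle channels of a feature map x with shape (N, C, H, W).

    Splits the C channels into `groups` groups, then interleaves them
    so that subsequent group-restricted convolutions (e.g. depthwise)
    can mix information across groups.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # (N, groups, C//groups, H, W) -> swap the two channel axes -> flatten back
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# With 4 channels and 2 groups, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3]
x = np.arange(4).reshape(1, 4, 1, 1).astype(float)
print(channel_shuffle(x, 2).ravel())  # → [0. 2. 1. 3.]
```

The operation is parameter-free, which is why it adds feature fusion without increasing the model's computational cost.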
Affiliation(s)
- Chao Ma: Mianyang Visual Object Detection and Recognition Engineering Center, Mianyang, China
- Yuan Gu: School of Medicine, Stanford University, Stanford, USA
- Ziyang Wang: Department of Computer Science, University of Oxford, Oxford, UK
3. Liu J, Xing F, Shaikh A, French B, Linguraru MG, Porras AR. Joint Cranial Bone Labeling and Landmark Detection in Pediatric CT Images Using Context Encoding. IEEE Trans Med Imaging 2023; 42:3117-3126. PMID: 37216247. PMCID: PMC10760565. DOI: 10.1109/tmi.2023.3278493.
Abstract
Image segmentation, labeling, and landmark detection are essential tasks for pediatric craniofacial evaluation. Although deep neural networks have recently been adopted to segment cranial bones and locate cranial landmarks from computed tomography (CT) or magnetic resonance (MR) images, they may be hard to train and provide suboptimal results in some applications. First, they seldom leverage global contextual information that can improve object detection performance. Second, most methods rely on multi-stage algorithm designs that are inefficient and prone to error accumulation. Third, existing methods often target simple segmentation tasks and have shown low reliability in more challenging scenarios, such as labeling multiple cranial bones in highly variable pediatric datasets. In this paper, we present a novel end-to-end neural network architecture based on DenseNet that incorporates context regularization to jointly label cranial bone plates and detect cranial base landmarks from CT images. Specifically, we designed a context-encoding module that encodes global context information as landmark displacement vector maps and uses them to guide feature learning for both bone labeling and landmark identification. We evaluated our model on a highly diverse pediatric CT image dataset of 274 normative subjects and 239 patients with craniosynostosis (age 0.63 ± 0.54 years, range 0-2 years). Our experiments demonstrate improved performance compared to state-of-the-art approaches.
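Encoding a landmark as a dense displacement vector map, as the context-encoding module does, can be sketched on a small voxel grid; the construction below is a hypothetical illustration of the general idea, not the paper's implementation:

```python
import numpy as np

def displacement_map(shape: tuple, landmark: np.ndarray) -> np.ndarray:
    """Build a dense displacement vector map for one landmark.

    For every voxel of a grid with the given (D, H, W) shape, stores
    the 3D offset from that voxel to the landmark. Every voxel thus
    "points at" the landmark, giving the network global context about
    its location rather than only a local heatmap peak.
    """
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1
    )
    return landmark - grid  # shape (D, H, W, 3)

dmap = displacement_map((4, 4, 4), np.array([1.0, 2.0, 3.0]))
print(dmap[1, 2, 3])  # → [0. 0. 0.] (the landmark's own voxel has zero offset)
```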
4. Weingart JV, Schlager S, Metzger MC, Brandenburg LS, Hein A, Schmelzeisen R, Bamberg F, Kim S, Kellner E, Reisert M, Russe MF. Automated detection of cephalometric landmarks using deep neural patchworks. Dentomaxillofac Radiol 2023; 52:20230059. PMID: 37427585. PMCID: PMC10461263. DOI: 10.1259/dmfr.20230059.
Abstract
OBJECTIVES This study evaluated the accuracy of deep neural patchworks (DNPs), a deep learning-based segmentation framework, for automated identification of 60 cephalometric landmarks (bone, soft tissue, and tooth landmarks) on CT scans. The aim was to determine whether a DNP could be used for routine three-dimensional cephalometric analysis in diagnostics and treatment planning in orthognathic surgery and orthodontics. METHODS Full-skull CT scans of 30 adult patients (18 female, 12 male, mean age 35.6 years) were randomly divided into a training and a test data set (each n = 15). Clinician A annotated 60 landmarks in all 30 CT scans; clinician B annotated 60 landmarks in the test data set only. The DNP was trained using spherical segmentations of the tissue adjacent to each landmark. Automated landmark predictions on the separate test data set were created by calculating the center of mass of the predictions. Accuracy was evaluated by comparing these predictions to the manual annotations. RESULTS The DNP was successfully trained to identify all 60 landmarks. The mean error of our method was 1.94 mm (SD 1.45 mm), compared to a mean error of 1.32 mm (SD 1.08 mm) for manual annotations. The smallest errors were found for landmarks ANS (1.11 mm), SN (1.2 mm), and CP_R (1.25 mm). CONCLUSION The DNP algorithm was able to accurately identify cephalometric landmarks, with mean errors <2 mm. This method could improve the workflow of cephalometric analysis in orthodontics and orthognathic surgery. Low training requirements combined with high precision make this method particularly promising for clinical use.
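The center-of-mass step used to turn each spherical segmentation prediction into a landmark coordinate can be sketched as a probability-weighted mean of voxel indices; a hypothetical NumPy illustration, not the study's code:

```python
import numpy as np

def center_of_mass(prob: np.ndarray) -> np.ndarray:
    """Landmark coordinate as the probability-weighted center of mass.

    `prob` is a (D, H, W) map of per-voxel scores for one landmark's
    spherical segmentation; the weighted mean of voxel indices yields a
    sub-voxel landmark estimate.
    """
    coords = np.stack(
        np.meshgrid(*[np.arange(s) for s in prob.shape], indexing="ij"), axis=-1
    )
    w = prob / prob.sum()
    return (coords * w[..., None]).sum(axis=(0, 1, 2))

# A single activated voxel yields that voxel's coordinates
p = np.zeros((5, 5, 5))
p[2, 3, 4] = 1.0
print(center_of_mass(p))  # → [2. 3. 4.]
```

Averaging over the whole predicted sphere is what makes the estimate robust to ragged segmentation boundaries.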
Affiliation(s)
- Julia Vera Weingart: Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Stefan Schlager: Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger: Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Leonard Simon Brandenburg: Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Anna Hein: Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Rainer Schmelzeisen: Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fabian Bamberg: Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Suam Kim: Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner: Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Marco Reisert: Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe: Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
5. Serafin M, Baldini B, Cabitza F, Carrafiello G, Baselli G, Del Fabbro M, Sforza C, Caprioglio A, Tartaglia GM. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis. Radiol Med 2023; 128:544-555. PMID: 37093337. PMCID: PMC10181977. DOI: 10.1007/s11547-023-01629-2.
Abstract
OBJECTIVES The aim of the present systematic review and meta-analysis was to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. METHODS The PubMed/Medline, IEEE Xplore, Scopus, and arXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric data images suitable for 3D landmarking (Problem), a minimum of five landmarks detected automatically by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported mean values and standard deviations of the difference (error) between manual and automated landmarking as outcomes. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. RESULTS The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, of which 11 were used for the meta-analysis. The overall random-effects model revealed a mean error of 2.44 mm, with high heterogeneity (I2 = 98.13%, τ2 = 1.018, p < 0.001); risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p = 0.012). CONCLUSION Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed and improvements in landmark annotation accuracy have been made.
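The inverse-variance pooling at the core of such a meta-analysis can be sketched as below; note this is the simplified fixed-effect form, whereas the review used a random-effects model, which adds a between-study variance term (τ²) to each weight's denominator:

```python
def pooled_mean(means, std_errs):
    """Inverse-variance weighted pooled mean (fixed-effect form).

    Each study's mean error is weighted by 1/SE^2, so precise studies
    dominate the pooled estimate. A random-effects model would instead
    weight by 1/(SE^2 + tau^2), with tau^2 the between-study variance.
    """
    weights = [1.0 / se ** 2 for se in std_errs]
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# A precise study (SE = 0.1) dominates an imprecise one (SE = 1.0)
print(round(pooled_mean([2.0, 4.0], [0.1, 1.0]), 2))  # → 2.02
```

The high I² reported here (98.13%) signals that most of the spread between studies is real heterogeneity rather than sampling noise, which is exactly when the random-effects correction matters.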
Affiliation(s)
- Marco Serafin: Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Benedetta Baldini: Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Federico Cabitza: Department of Informatics, System and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126, Milan, Italy; IRCCS Istituto Ortopedico Galeazzi, Via Belgioioso 173, 20157, Milan, Italy
- Gianpaolo Carrafiello: Department of Oncology and Hematology-Oncology, University of Milan, Via Sforza 35, 20122, Milan, Italy; Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Giuseppe Baselli: Department of Electronics, Information and Bioengineering, Politecnico Di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Massimo Del Fabbro: Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy; Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Chiarella Sforza: Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Alberto Caprioglio: Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy; Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Gianluca M Tartaglia: Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy; Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
6. Ma L, Xiao D, Kim D, Lian C, Kuang T, Liu Q, Deng H, Yang E, Liebschner MAK, Gateno J, Xia JJ, Yap PT. Simulation of Postoperative Facial Appearances via Geometric Deep Learning for Efficient Orthognathic Surgical Planning. IEEE Trans Med Imaging 2023; 42:336-345. PMID: 35657829. PMCID: PMC10037541. DOI: 10.1109/tmi.2022.3180078.
Abstract
Orthognathic surgery corrects jaw deformities to improve aesthetics and function. Due to the complexity of the craniomaxillofacial (CMF) anatomy, orthognathic surgery requires precise surgical planning, which involves predicting postoperative changes in facial appearance. To this end, most conventional methods rely on simulation with biomechanical modeling, which is labor intensive and computationally expensive. Here we introduce a learning-based framework to speed up the simulation of postoperative facial appearances. Specifically, we introduce a facial shape change prediction network (FSC-Net) to learn the nonlinear mapping from bony shape changes to facial shape changes. FSC-Net is a point transform network weakly supervised by paired preoperative and postoperative data without point-wise correspondence. In FSC-Net, a distance-guided shape loss places more emphasis on the jaw region, and a local point constraint loss restricts point displacements to preserve the topology and smoothness of the surface mesh after point transformation. Evaluation results indicate that FSC-Net achieves a 15× speedup with accuracy comparable to a state-of-the-art (SOTA) finite-element modeling (FEM) method.
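A local point constraint of the kind described, penalizing displacement differences between neighboring mesh points so that rigid motions cost nothing while local distortions are penalized, might look like this hypothetical NumPy sketch (not the paper's actual loss):

```python
import numpy as np

def local_point_constraint(disp: np.ndarray, neighbors: list) -> float:
    """Penalize displacement differences between neighboring mesh points.

    `disp` is an (N, 3) array of per-point displacements; `neighbors`
    lists index pairs (i, j) of mesh edges. Keeping neighboring
    displacements similar preserves the topology and smoothness of the
    surface mesh after the point transformation.
    """
    return float(np.mean([np.sum((disp[i] - disp[j]) ** 2) for i, j in neighbors]))

# A rigid translation of the whole mesh incurs zero penalty
disp = np.tile([1.0, 0.0, 0.0], (3, 1))
print(local_point_constraint(disp, [(0, 1), (1, 2)]))  # → 0.0
```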
7. Deng H, Liu Q, Chen A, Kuang T, Yuan P, Gateno J, Kim D, Barber J, Xiong K, Yu P, Gu K, Xu X, Yan P, Shen D, Xia J. Clinical feasibility of deep learning-based automatic head CBCT image segmentation and landmark detection in computer-aided surgical simulation for orthognathic surgery. Int J Oral Maxillofac Surg 2022. PMID: 36372697. PMCID: PMC10169531. DOI: 10.1016/j.ijom.2022.10.010.
Abstract
The purpose of this ambispective study was to investigate whether deep learning-based automatic segmentation and landmark detection, using SkullEngine, could be used for orthognathic surgical planning. Sixty-one sets of cone-beam computed tomography (CBCT) images were automatically processed to segment the midface, mandible, and upper and lower teeth, and to detect 68 landmarks. The experimental group comprised the automatic segmentations and landmarks, while the control group comprised the manual ones previously used to plan orthognathic surgery. Qualitative analysis showed that all of the automatic segmentations could be used for computer-aided surgical simulation; 98.4% of midface, 70.5% of mandible, 98.4% of upper teeth, and 93.4% of lower teeth segmentations could be used directly without manual revision. The Dice similarity coefficient was 96% and the average symmetric surface distance was 0.1 mm across all four structures. With SkullEngine, automatic segmentation took 4 minutes, plus an additional 10 minutes for manual touch-up. The results also showed that the overall mean landmark difference between the two groups was 2.3 mm for the midface and 2.4 mm for the mandible. In summary, the authors believe that automatic segmentation with SkullEngine is ready for daily practice, although the accuracy of automatic landmark digitization still needs to be improved.
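Of the two segmentation metrics reported here, the Dice similarity coefficient is the simpler to illustrate; a generic NumPy sketch, not SkullEngine code:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Twice the overlap divided by the total foreground of both masks;
    1.0 means perfect agreement, 0.0 means no overlap.
    """
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# 2 overlapping voxels out of 3 foreground voxels in each mask
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # → 0.667
```

The complementary metric reported, average symmetric surface distance, instead measures how far the two segmentation *boundaries* lie apart in mm, which is why a 96% Dice can coexist with a 0.1 mm surface error.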
8. Dot G, Schouman T, Chang S, Rafflenbeul F, Kerbrat A, Rouch P, Gajny L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J Dent Res 2022; 101:1380-1387. PMID: 35982646. DOI: 10.1177/00220345221112333.
Abstract
The increasing use of 3-dimensional (3D) imaging by orthodontists and maxillofacial surgeons to assess complex dentofacial deformities and plan orthognathic surgeries implies a critical need for 3D cephalometric analysis. Although promising methods have been suggested to localize 3D landmarks automatically, concerns about robustness and generalizability restrain their clinical use, so highly trained operators are still needed to perform manual landmarking. In this retrospective diagnostic study, we aimed to train and evaluate a deep learning (DL) pipeline based on SpatialConfiguration-Net for automatic localization of 3D cephalometric landmarks on computed tomography (CT) scans. A retrospective sample of consecutive presurgical CT scans was randomly distributed between a training/validation set (n = 160) and a test set (n = 38). The reference data consisted of 33 landmarks, manually localized once by one operator (n = 178) or twice by three operators (n = 20, test set only). After inference on the test set, one CT scan showed "very low" confidence level predictions; we excluded it from the overall analysis but still assessed and discussed the corresponding results. Model performance was evaluated by comparing the predictions with the reference data; the outcome set included localization accuracy, cephalometric measurements, and comparison to manual landmarking reproducibility. On the hold-out test set, the mean localization error was 1.0 ± 1.3 mm, while success detection rates for 2.0, 2.5, and 3.0 mm were 90.4%, 93.6%, and 95.4%, respectively. Mean errors were -0.3 ± 1.3° and -0.1 ± 0.7 mm for angular and linear measurements, respectively. When compared to manual reproducibility, the measurements were within the Bland-Altman 95% limits of agreement for 91.9% of skeletal and 71.8% of dentoalveolar variables. To conclude, while our DL method still requires improvement, it provided highly accurate 3D landmark localization on a challenging test set, with reliability for skeletal evaluation on par with what clinicians obtain.
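The success detection rate (SDR) reported at the 2.0, 2.5, and 3.0 mm thresholds is simply the fraction of landmark localization errors falling within each threshold; a minimal NumPy sketch:

```python
import numpy as np

def success_detection_rate(errors_mm: np.ndarray, threshold: float) -> float:
    """Fraction of landmark localization errors within a threshold (mm)."""
    return float((errors_mm <= threshold).mean())

# 2 of 5 errors are within 2 mm
errors = np.array([0.5, 1.8, 2.2, 2.7, 3.5])
print(success_detection_rate(errors, 2.0))  # → 0.4
```

Reporting SDR at several thresholds alongside the mean error, as this study does, exposes the error distribution's tail, which a mean of 1.0 ± 1.3 mm alone would hide.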
Affiliation(s)
- G Dot: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Universite Paris Cite, AP-HP, Hopital Pitie-Salpetriere, Service de Medecine Bucco-Dentaire, Paris, France
- T Schouman: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- S Chang: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- F Rafflenbeul: Department of Dentofacial Orthopedics, Faculty of Dental Surgery, Strasbourg University, Strasbourg, France
- A Kerbrat: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- P Rouch: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- L Gajny: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
9. Thurzo A, Kosnáčová HS, Kurilová V, Kosmeľ S, Beňuš R, Moravanský N, Kováč P, Kuracinová KM, Palkovič M, Varga I. Use of Advanced Artificial Intelligence in Forensic Medicine, Forensic Anthropology and Clinical Anatomy. Healthcare (Basel) 2021; 9:1545. PMID: 34828590. PMCID: PMC8619074. DOI: 10.3390/healthcare9111545.
Abstract
Three-dimensional convolutional neural networks (3D CNNs), a form of artificial intelligence (AI), are potent in image processing and recognition, using deep learning to perform generative and descriptive tasks. Compared to its predecessors, the advantage of a CNN is that it automatically detects important features without human supervision. A 3D CNN extracts features in three dimensions, where the input is a 3D volume or a sequence of 2D pictures, e.g., slices of a cone-beam computed tomography (CBCT) scan. The main aim was to bridge interdisciplinary cooperation between forensic medical experts and deep learning engineers, emphasizing the engagement of clinical forensic experts who may have only basic knowledge of advanced artificial intelligence techniques but an interest in implementing them to advance forensic research. This paper introduces a novel workflow for 3D CNN analysis of full-head CBCT scans. The authors explore current methods and design customized 3D CNN applications for forensic research from five perspectives: (1) sex determination, (2) biological age estimation, (3) 3D cephalometric landmark annotation, (4) growth vector prediction, and (5) facial soft-tissue estimation from the skull and vice versa. In conclusion, 3D CNN applications could be a watershed moment in forensic medicine, leading to unprecedented improvement of forensic analysis workflows based on 3D neural networks.
Affiliation(s)
- Andrej Thurzo: Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia; Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia; forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Helena Svobodová Kosnáčová: Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia; Department of Genetics, Cancer Research Institute, Biomedical Research Center, Slovak Academy Sciences, Dúbravská Cesta 9, 84505 Bratislava, Slovakia
- Veronika Kurilová: Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 81219 Bratislava, Slovakia
- Silvester Kosmeľ: Deep Learning Engineering Department at Cognexa, Faculty of Informatics and Information Technologies, Slovak University of Technology, Ilkovičova 2, 84216 Bratislava, Slovakia
- Radoslav Beňuš: forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia; Department of Anthropology, Faculty of Natural Sciences, Comenius University in Bratislava, Mlynská dolina Ilkovičova 6, 84215 Bratislava, Slovakia
- Norbert Moravanský: forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia; Institute of Forensic Medicine, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Peter Kováč: forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia; Department of Criminal Law and Criminology, Faculty of Law, Trnava University, Kollárova 10, 91701 Trnava, Slovakia
- Kristína Mikuš Kuracinová: Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Michal Palkovič: Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia; Forensic Medicine and Pathological Anatomy Department, Health Care Surveillance Authority (HCSA), Sasinkova 4, 81108 Bratislava, Slovakia
- Ivan Varga: Institute of Histology and Embryology, Faculty of Medicine, Comenius University in Bratislava, 81372 Bratislava, Slovakia