1
Jiang Y, Jiang C, Shi B, Wu Y, Xing S, Liang H, Huang J, Huang X, Huang L, Lin L. Automatic identification of hard and soft tissue landmarks in cone-beam computed tomography via deep learning with diversity datasets: a methodological study. BMC Oral Health 2025; 25:505. PMID: 40200295; PMCID: PMC11980328; DOI: 10.1186/s12903-025-05831-8.
Abstract
BACKGROUND: Manual landmark detection in cone-beam computed tomography (CBCT) for evaluating craniofacial structures relies on medical expertise and is time-consuming. This study aimed to apply a new deep learning method to predict and locate soft and hard tissue craniofacial landmarks on CBCT in patients with various types of malocclusion.
METHODS: A total of 498 CBCT images were collected. Following a calibration procedure, two experienced clinicians identified 43 landmarks in the x-, y-, and z-coordinate planes on the CBCT images using Checkpoint software; the ground truth was created by averaging their landmark coordinates. To evaluate the accuracy of the algorithm, we determined the mean absolute error along the x-, y-, and z-axes and calculated the mean radial error (MRE) between the reference and predicted landmarks, as well as the successful detection rate (SDR).
RESULTS: Each landmark prediction took approximately 4.2 s on a conventional graphics processing unit. The mean absolute error across all coordinates was 0.74 mm. The overall MRE for the 43 landmarks was 1.76 ± 1.13 mm, and the SDR was 60.16%, 91.05%, and 97.58% within 2-, 3-, and 4-mm error ranges of manual marking, respectively. The average MRE of the hard tissue landmarks (32/43) was 1.73 mm, while that of the soft tissue landmarks (11/43) was 1.84 mm.
CONCLUSIONS: Our proposed algorithm demonstrates a clinically acceptable level of accuracy and robustness for automatic detection of CBCT soft- and hard-tissue landmarks across all types of malformations. The potential for artificial intelligence to assist in identifying three-dimensional CT landmarks in routine clinical practice and in analysing large datasets for future research is promising.
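MRE and SDR, the evaluation metrics reported in this abstract, are standard in landmark-detection work and straightforward to reproduce. A minimal sketch follows (function and variable names are illustrative, not from the paper), assuming predicted and reference landmark coordinates as N×3 arrays in mm:

```python
import numpy as np

def mean_radial_error(pred, ref):
    """Mean 3D Euclidean distance (mm) between predicted and reference landmarks."""
    radial = np.linalg.norm(pred - ref, axis=1)  # per-landmark radial error
    return radial.mean()

def successful_detection_rate(pred, ref, threshold_mm):
    """Fraction of landmarks whose radial error falls within threshold_mm."""
    radial = np.linalg.norm(pred - ref, axis=1)
    return (radial <= threshold_mm).mean()

# toy check: three landmarks displaced by 1, 2.5, and 5 mm along x
ref = np.zeros((3, 3))
pred = np.array([[1.0, 0.0, 0.0], [2.5, 0.0, 0.0], [5.0, 0.0, 0.0]])
mre = mean_radial_error(pred, ref)                   # (1 + 2.5 + 5) / 3
sdr_2mm = successful_detection_rate(pred, ref, 2.0)  # 1 of 3 within 2 mm
```

The per-threshold SDR values quoted above (2, 3, and 4 mm) correspond to calling the second function with each threshold in turn.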
Affiliation(s)
- Yan Jiang
  - Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
  - Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Canyang Jiang
  - Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
  - Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- Bin Shi
  - Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
  - Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
  - Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- You Wu
  - School of Stomatology, Fujian Medical University, Fuzhou, 350122, China
- Shuli Xing
  - College of Computer Science and Mathematics, Fujian University of Technology, Fujian, 350118, China
- Hao Liang
  - College of Computer Science and Mathematics, Fujian University of Technology, Fujian, 350118, China
- Jianping Huang
  - Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
  - Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- Xiaohong Huang
  - Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
  - Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
- Li Huang
  - Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
  - Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
  - Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- Lisong Lin
  - Department of Stomatology, The First Affiliated Hospital of Fujian Medical University, Tai-Jiang District, No.20 Cha-Ting-Zhong Road, Fuzhou, 350005, China
  - Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, 350212, China
  - Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
2
Binvignat P, Chaurasia A, Lahoud P, Jacobs R, Pokhojaev A, Sarig R, Ducret M, Richert R. Isotopological remeshing and statistical shape analysis: Enhancing premolar tooth wear classification and simulation with machine learning. J Dent 2024; 149:105280. PMID: 39094975; DOI: 10.1016/j.jdent.2024.105280.
Abstract
OBJECTIVE: The aim of this study was to evaluate the accuracy of a combined approach based on isotopological remeshing and statistical shape analysis (SSA) in capturing key anatomical features of altered and intact premolars. Additionally, the study compares the capabilities of four machine learning (ML) algorithms in identifying or simulating tooth alterations.
METHODS: A total of 113 premolar surfaces from a multicenter database were analyzed. These surfaces were processed using an isotopological remeshing method, followed by an SSA. Mean Euclidean distances between the initial and remeshed STL files were calculated to assess deviation in anatomical landmark positioning. Seven anatomical features were extracted from each tooth, and their correlations with shape modes and morphological characteristics were explored. Four ML algorithms, validated through three-fold cross-validation, were assessed for their ability to classify tooth types and alterations. Additionally, twenty intact teeth were altered and then reconstructed to verify the method's accuracy.
RESULTS: The first five modes encapsulated 76.1% of the total shape variability, with a mean landmark positioning deviation of 10.4 µm (±6.4). Significant correlations were found between shape modes and specific morphological features. The optimal ML algorithms demonstrated high accuracy (>83%) and precision (>86%). Simulations on intact teeth showed discrepancies in anatomical features below 3%.
CONCLUSION: The combination of isotopological remeshing with SSA showed good reliability in capturing key anatomical features of the tooth.
CLINICAL SIGNIFICANCE: The encouraging performance of the ML algorithms suggests a promising direction for supporting practitioners in diagnosing and planning treatments for patients with altered teeth, ultimately improving preventive care.
Affiliation(s)
- Akhilanand Chaurasia
  - Department of Oral Medicine and Radiology, King George's Medical University, Lucknow, India
- Pierre Lahoud
  - OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, Leuven, Belgium; Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium; Division of Periodontology and Oral Microbiology, Department of Oral Health Sciences, KU Leuven, Leuven, Belgium
- Reinhilde Jacobs
  - OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, Leuven, Belgium; Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
- Ariel Pokhojaev
  - Department of Oral Biology, Goldschleger School of Dental Medicine, Faculty of Medical & Health Sciences, Tel Aviv University, POB 39040, Tel Aviv 6997801, Israel; Shmunis Family Anthropology Institute, Dan David Center for Human Evolution and Biohistory Research, Tel Aviv University, POB 39040, Tel Aviv 6997801, Israel
- Rachel Sarig
  - Department of Oral Biology, Goldschleger School of Dental Medicine, Faculty of Medical & Health Sciences, Tel Aviv University, POB 39040, Tel Aviv 6997801, Israel; Shmunis Family Anthropology Institute, Dan David Center for Human Evolution and Biohistory Research, Tel Aviv University, POB 39040, Tel Aviv 6997801, Israel
- Maxime Ducret
  - Hospices Civils de Lyon, PAM Odontologie, Lyon, France; Laboratoire de Biologie Tissulaire et Ingénierie thérapeutique, UMR 5305 CNRS/UCBL/Univ de Lyon, Lyon 69008, France
- Raphael Richert
  - Hospices Civils de Lyon, PAM Odontologie, Lyon, France; Laboratoire de Mécanique Des Contacts Et Structures LaMCoS, UMR 5259 INSA Lyon, CNRS, Villeurbanne 69621, France
3
Lee Y, Pyeon JH, Han SH, Kim NJ, Park WJ, Park JB. A Comparative Study of Deep Learning and Manual Methods for Identifying Anatomical Landmarks through Cephalometry and Cone-Beam Computed Tomography: A Systematic Review and Meta-Analysis. Applied Sciences 2024; 14:7342. DOI: 10.3390/app14167342.
Abstract
Background: Researchers have noted that the advent of artificial intelligence (AI) heralds a promising era, with the potential to significantly enhance diagnostic and predictive abilities in clinical settings. The aim of this meta-analysis was to evaluate the discrepancies in identifying anatomical landmarks between AI and manual approaches.
Methods: A comprehensive search strategy was employed, incorporating controlled vocabulary (MeSH) and free-text terms. This search was conducted by two reviewers to identify published systematic reviews. Three major electronic databases, namely Medline via PubMed, the Cochrane database, and Embase, were searched up to May 2024.
Results: Initially, 369 articles were identified. After conducting a comprehensive search and applying strict inclusion criteria, a total of ten studies were deemed eligible for inclusion in the meta-analysis. The results showed that the average difference in detecting anatomical landmarks between artificial intelligence and manual approaches was 0.35, with a 95% confidence interval (CI) ranging from −0.09 to 0.78, and the overall effect between the two groups was insignificant. Further analysis of the cephalometric radiograph subgroup found no significant differences between the two groups in detecting anatomical landmarks; similarly, the cone-beam computed tomography (CBCT) subgroup revealed no significant differences between the groups.
Conclusions: The study concluded that artificial intelligence is as effective as the manual approach in detecting anatomical landmarks, both in general and in specific contexts such as cephalometric radiographs and CBCT evaluations.
Affiliation(s)
- Yoonji Lee
  - Orthodontics, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Jeong-Hye Pyeon
  - Orthodontics, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Sung-Hoon Han
  - Department of Orthodontics, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Na Jin Kim
  - Medical Library, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Won-Jong Park
  - Department of Oral and Maxillofacial Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Jun-Beom Park
  - Department of Periodontics, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
  - Dental Implantology, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
  - Department of Medicine, Graduate School, The Catholic University of Korea, Seoul 06591, Republic of Korea
4
Spangenberg GW, Uddin F, Faber KJ, Langohr GDG. Automatic bicipital groove identification in arthritic humeri for preoperative planning: A Random Forest Classifier approach. Comput Biol Med 2024; 178:108653. PMID: 38861894; DOI: 10.1016/j.compbiomed.2024.108653.
Abstract
The bicipital groove is an important anatomical feature of the proximal humerus that must be identified during surgical planning for procedures such as shoulder arthroplasty and proximal humeral fracture reconstruction. Current algorithms for automatic identification prove ineffective in arthritic humeri due to the presence of osteophytes, reducing their usefulness for total shoulder arthroplasty. Our methodology uses a Random Forest Classifier (RFC) to automatically detect the bicipital groove on segmented computed tomography scans of humeri. We evaluated our model on two distinct test datasets: one comprising non-arthritic humeri and another with arthritic humeri characterized by significant osteophytes. Our model detected the bicipital groove with a mean absolute error of less than 1 mm on arthritic humeri, a significant improvement over the previous gold-standard approach. The bicipital groove was thus identified with a high degree of accuracy even in arthritic humeri. The model is open source and included in the Python package shoulder.
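As a rough sketch of the classification stage described in this abstract (not the authors' implementation; the features, labels, and decision rule here are synthetic), a Random Forest Classifier can be trained to label humeral surface points as groove versus non-groove from per-point geometric features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# synthetic per-vertex features, e.g. [curvature, normal z-component, axial height]
X = rng.normal(size=(1000, 3))
# synthetic ground truth: pretend high curvature with a downward normal marks the groove
y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.0)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# per-vertex groove probability; a landmark estimate could then be taken as the
# centroid of the highest-probability vertices
groove_prob = clf.predict_proba(X)[:, 1]
```

In practice the features would be computed from the segmented bone surface, and the positive class defined by expert annotation of the groove.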
Affiliation(s)
- Gregory W Spangenberg
  - Department of Mechanical Engineering, Western University, London, ON, Canada; The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada
- Fares Uddin
  - The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada; Department of Surgery, Western University, London, ON, Canada
- Kenneth J Faber
  - The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada; Department of Surgery, Western University, London, ON, Canada
- G Daniel G Langohr
  - Department of Mechanical Engineering, Western University, London, ON, Canada; The Roth McFarlane Hand and Upper Limb Centre, St. Joseph's Hospital, London, ON, Canada
5
Weingart JV, Schlager S, Metzger MC, Brandenburg LS, Hein A, Schmelzeisen R, Bamberg F, Kim S, Kellner E, Reisert M, Russe MF. Automated detection of cephalometric landmarks using deep neural patchworks. Dentomaxillofac Radiol 2023; 52:20230059. PMID: 37427585; PMCID: PMC10461263; DOI: 10.1259/dmfr.20230059.
Abstract
OBJECTIVES: This study evaluated the accuracy of deep neural patchworks (DNPs), a deep learning-based segmentation framework, for automated identification of 60 cephalometric landmarks (bone, soft tissue, and tooth landmarks) on CT scans. The aim was to determine whether DNPs could be used for routine three-dimensional cephalometric analysis in diagnostics and treatment planning in orthognathic surgery and orthodontics.
METHODS: Full-skull CT scans of 30 adult patients (18 female, 12 male; mean age 35.6 years) were randomly divided into a training and a test data set (each n = 15). Clinician A annotated 60 landmarks in all 30 CT scans; clinician B annotated 60 landmarks in the test data set only. The DNP was trained using spherical segmentations of the adjacent tissue for each landmark. Automated landmark predictions in the separate test data set were created by calculating the center of mass of the predictions. The accuracy of the method was evaluated by comparing these annotations to the manual annotations.
RESULTS: The DNP was successfully trained to identify all 60 landmarks. The mean error of the method was 1.94 mm (SD 1.45 mm), compared to a mean error of 1.32 mm (SD 1.08 mm) for manual annotations. The smallest errors were found for the landmarks ANS (1.11 mm), SN (1.20 mm), and CP_R (1.25 mm).
CONCLUSION: The DNP algorithm accurately identified cephalometric landmarks with mean errors <2 mm. This method could improve the workflow of cephalometric analysis in orthodontics and orthognathic surgery. Low training requirements combined with high precision make this method particularly promising for clinical use.
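The center-of-mass reduction described in the methods (from a predicted segmentation probability map to a single landmark coordinate) can be sketched as follows; the spherical test volume below is synthetic, and the voxel spacing is an assumed parameter:

```python
import numpy as np

def landmark_from_segmentation(prob, spacing=(1.0, 1.0, 1.0)):
    """Reduce a 3D probability map to one landmark: its center of mass, in mm."""
    grid = np.indices(prob.shape, dtype=float)              # (3, Z, Y, X) voxel indices
    com_voxel = (grid * prob).reshape(3, -1).sum(axis=1) / prob.sum()
    return com_voxel * np.asarray(spacing, dtype=float)     # voxel units -> mm

# synthetic spherical prediction of radius 4 voxels centred at voxel (10, 20, 30)
zz, yy, xx = np.indices((64, 64, 64))
prob = (((zz - 10) ** 2 + (yy - 20) ** 2 + (xx - 30) ** 2) <= 16).astype(float)
landmark = landmark_from_segmentation(prob)
```

By symmetry, the center of mass of the synthetic sphere recovers its center exactly; for a real network output the weighting makes the estimate robust to ragged segmentation boundaries.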
Affiliation(s)
- Julia Vera Weingart
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Stefan Schlager
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Leonard Simon Brandenburg
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Anna Hein
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Rainer Schmelzeisen
  - Department of Oral and Maxillofacial Surgery, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fabian Bamberg
  - Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Suam Kim
  - Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner
  - Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Marco Reisert
  - Department of Medical Physics, Faculty of Medicine, Medical Center – University of Freiburg, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe
  - Department of Diagnostic and Interventional Radiology, Medical Center – University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
6
Yang S, Lee SJ, Yoo JY, Kang SR, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yang HJ, Yi WJ. V2-Net: An Attention-guided Volumetric Regression Network for Tooth Landmark Localization on CT Images with Metal Artifacts. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. PMID: 38083381; DOI: 10.1109/embc40787.2023.10340891.
Abstract
For virtual surgical planning in orthognathic surgery, marking tooth landmarks on CT images is an important procedure. However, manual localization of tooth landmarks is time-consuming, labor-intensive, and requires expert knowledge. Direct, automatic tooth landmark localization on CT images is also difficult because of the lower resolution and metal artifacts of dental images. The purpose of this study was to propose an attention-guided volumetric regression network (V2-Net) for accurate tooth landmark localization on CT images with metal artifacts and lower resolution. V2-Net has an attention-guided architecture with a coarse-to-fine attention mechanism that guides the 3D probability distribution of tooth landmark locations within anatomical structures from a coarse V-Net to a fine V-Net, focusing attention on the tooth landmarks. In addition, we combined attention-guided learning and a 3D attention module with an optimal Pseudo-Huber loss to improve localization accuracy. Our results show that the proposed method achieves state-of-the-art accuracy of 0.85 ± 0.40 mm in terms of mean radial error, outperforming previous studies. In ablation studies, we observed that the proposed attention-guided learning and 3D attention module improved the accuracy of tooth landmark localization on CT images with lower resolution and metal artifacts. Furthermore, our method achieved a successful detection rate of 97.92% within the clinically accepted accuracy range of 2.0 mm.
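The Pseudo-Huber loss mentioned in this abstract is a smooth blend of L2 behaviour near zero and L1 behaviour for large residuals, which makes regression robust to outlier voxels. Its standard form follows (δ is the transition scale, a training hyperparameter; the default value here is illustrative, not from the paper):

```python
import numpy as np

def pseudo_huber(residual, delta=1.0):
    """delta**2 * (sqrt(1 + (r/delta)**2) - 1): ~r**2/2 near 0, ~delta*|r| for |r| >> delta."""
    r = np.asarray(residual, dtype=float)
    return delta ** 2 * (np.sqrt(1.0 + (r / delta) ** 2) - 1.0)
```

Unlike the classic Huber loss, this variant is smooth everywhere, so its gradient is well behaved at the transition point.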
7
Nishimoto S, Saito T, Ishise H, Fujiwara T, Kawai K, Kakibuchi M. Three-Dimensional Craniofacial Landmark Detection in Series of CT Slices Using Multi-Phased Regression Networks. Diagnostics (Basel) 2023; 13:1930. PMID: 37296782; DOI: 10.3390/diagnostics13111930.
Abstract
Geometrical assessments of human skulls have been conducted based on anatomical landmarks. Automatic detection of these landmarks, if developed, would yield both medical and anthropological benefits. In this study, an automated system with multi-phased deep learning networks was developed to predict the three-dimensional coordinate values of craniofacial landmarks. Computed tomography images of the craniofacial area were obtained from a publicly available database and digitally reconstructed into three-dimensional objects. Sixteen anatomical landmarks were plotted on each object, and their coordinate values were recorded. Three-phased regression deep learning networks were trained using ninety training datasets, and 30 testing datasets were employed for the evaluation. On the 30 test datasets, the mean 3D error for the first phase was 11.60 px (1 px = 500/512 mm). For the second phase, it was significantly improved to 4.66 px, and for the third phase it was further significantly reduced to 2.88 px, comparable to the gaps between landmarks plotted by two experienced practitioners. Our proposed method of multi-phased prediction, which performs coarse detection first and then narrows down the detection area, may be a solution to prediction problems that respects the physical limitations of memory and computation.
Affiliation(s)
- Soh Nishimoto
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
| | - Takuya Saito
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
| | - Hisako Ishise
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
| | - Toshihiro Fujiwara
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
| | - Kenichiro Kawai
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
| | - Masao Kakibuchi
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
8
de Queiroz Tavares Borges Mesquita G, Vieira WA, Vidigal MTC, Travençolo BAN, Beaini TL, Spin-Neto R, Paranhos LR, de Brito Júnior RB. Artificial Intelligence for Detecting Cephalometric Landmarks: A Systematic Review and Meta-analysis. J Digit Imaging 2023; 36:1158-1179. PMID: 36604364; PMCID: PMC10287619; DOI: 10.1007/s10278-022-00766-w.
Abstract
Using computer vision through artificial intelligence (AI) is one of the main technological advances in dentistry. However, the existing literature on the practical application of AI for detecting cephalometric landmarks of orthodontic interest in digital images is heterogeneous, and there is no consensus regarding accuracy and precision. This review therefore evaluated the use of AI for detecting cephalometric landmarks in digital imaging examinations and compared it to manual annotation of landmarks. An electronic search was performed in nine databases for studies that analyzed the detection of cephalometric landmarks in digital imaging examinations with AI and manual landmarking. Two reviewers selected the studies, extracted the data, and assessed the risk of bias using QUADAS-2. Random-effects meta-analyses determined the agreement and precision of AI compared to manual detection at a 95% confidence interval. The electronic search located 7410 studies, of which 40 were included. Only three studies presented a low risk of bias for all domains evaluated. The meta-analysis showed AI agreement rates of 79% (95% CI: 76-82%, I² = 99%) and 90% (95% CI: 87-92%, I² = 99%) for the thresholds of 2 and 3 mm, respectively, with a mean divergence of 2.05 (95% CI: 1.41-2.69, I² = 10%) compared to manual landmarking. The menton cephalometric landmark showed the lowest divergence between the two methods (SMD 1.17; 95% CI 0.82-1.53; I² = 0%). Based on very low certainty of evidence, the application of AI was promising for automatically detecting cephalometric landmarks, but further studies should focus on testing its strength and validity in different samples.
Affiliation(s)
| | - Walbert A Vieira
- Department of Restorative Dentistry, Endodontics Division, School of Dentistry of Piracicaba, State University of Campinas, Piracicaba, São Paulo, Brazil
| | | | | | - Thiago Leite Beaini
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil
| | - Rubens Spin-Neto
- Department of Dentistry and Oral Health, Section for Oral Radiology, Aarhus University, Aarhus C, Denmark
| | - Luiz Renato Paranhos
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil.
9
Blum FMS, Möhlhenrich SC, Raith S, Pankert T, Peters F, Wolf M, Hölzle F, Modabber A. Evaluation of an artificial intelligence-based algorithm for automated localization of craniofacial landmarks. Clin Oral Investig 2023; 27:2255-2265. PMID: 37014502; PMCID: PMC10159965; DOI: 10.1007/s00784-023-04978-4.
Abstract
OBJECTIVES: With advancing digitalisation, it is of interest to develop standardised and reproducible, fully automated analysis methods for cranial structures in order to reduce the workload in diagnosis and treatment planning and to generate objectifiable data. The aim of this study was to train and evaluate an algorithm based on deep learning methods for fully automated detection of craniofacial landmarks in cone-beam computed tomography (CBCT) in terms of accuracy, speed, and reproducibility.
MATERIALS AND METHODS: A total of 931 CBCTs were used to train the algorithm. To test the algorithm, 35 landmarks were located manually by three experts and automatically by the algorithm in 114 CBCTs. The time and the distance between the measured values and the ground truth previously determined by an orthodontist were analyzed. Intraindividual variation in manual localization of landmarks was determined using 50 CBCTs analyzed twice.
RESULTS: The results showed no statistically significant difference between the two measurement methods. Overall, with a mean error of 2.73 mm, the AI was 2.12% more accurate and 95% faster than the experts. For bilateral cranial structures, the AI achieved better results than the experts on average.
CONCLUSION: The accuracy achieved for automatic landmark detection was in a clinically acceptable range, comparable in precision to manual landmark determination, and required less time.
CLINICAL RELEVANCE: Further enlargement of the database and continued development and optimization of the algorithm may lead to ubiquitous fully automated localization and analysis of CBCT datasets in future routine clinical practice.
Affiliation(s)
| | | | - Stefan Raith
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
| | - Tobias Pankert
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
| | - Florian Peters
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
| | - Michael Wolf
- Department of Orthodontics, University Hospital of RWTH Aachen, Pauwelsstraße 30, D-52074, Aachen, Germany
| | - Frank Hölzle
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
| | - Ali Modabber
- Department of Maxillofacial Surgery, RWTH Aachen University, Aachen, Germany
10
Serafin M, Baldini B, Cabitza F, Carrafiello G, Baselli G, Del Fabbro M, Sforza C, Caprioglio A, Tartaglia GM. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis. La Radiologia Medica 2023; 128:544-555. PMID: 37093337; PMCID: PMC10181977; DOI: 10.1007/s11547-023-01629-2.
Abstract
OBJECTIVES: The aim of this systematic review and meta-analysis was to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images.
METHODS: The PubMed/Medline, IEEE Xplore, Scopus, and arXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric data images suitable for 3D landmarking (Problem), a minimum of five automated landmarks detected by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported, as outcome, the mean values and standard deviation of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication.
RESULTS: The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, and 11 studies were used for the meta-analysis. The overall random-effects model revealed a mean value of 2.44 mm, with high heterogeneity (I² = 98.13%, τ² = 1.018, p-value < 0.001); the risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p-value = 0.012).
CONCLUSION: Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed, and landmark annotation accuracy has improved.
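For readers unfamiliar with the random-effects pooling used in this kind of meta-analysis, a minimal DerSimonian-Laird estimator is sketched below. This is a common choice, not necessarily the one the review used, and the example inputs are synthetic:

```python
import numpy as np

def dersimonian_laird(means, variances):
    """Pool per-study means under a random-effects model (DerSimonian-Laird tau^2)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                                  # fixed-effect weights
    mu_fixed = (w * means).sum() / w.sum()
    q = (w * (means - mu_fixed) ** 2).sum()              # Cochran's Q statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(means) - 1)) / c)          # between-study variance
    w_re = 1.0 / (variances + tau2)                      # random-effects weights
    mu_re = (w_re * means).sum() / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())                       # SE of the pooled mean
    return mu_re, tau2, se

# three hypothetical studies: mean landmark error (mm) and its variance
pooled, tau2, se = dersimonian_laird([1.8, 2.4, 3.1], [0.04, 0.09, 0.16])
```

The I² heterogeneity statistic quoted in the abstract derives from the same Q: I² = max(0, (Q − df) / Q).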
Affiliation(s)
- Marco Serafin
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Benedetta Baldini
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Federico Cabitza
- Department of Informatics, System and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Belgioioso 173, 20157, Milan, Italy
- Gianpaolo Carrafiello
- Department of Oncology and Hematology-Oncology, University of Milan, Via Sforza 35, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Giuseppe Baselli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Massimo Del Fabbro
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Chiarella Sforza
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Alberto Caprioglio
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Gianluca M Tartaglia
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
11
Torosdagli N, Anwar S, Verma P, Liberton DK, Lee JS, Han WW, Bagci U. Relational reasoning network for anatomical landmarking. J Med Imaging (Bellingham) 2023; 10:024002. PMID: 36891503; PMCID: PMC9986769; DOI: 10.1117/1.jmi.10.2.024002.
Abstract
Purpose We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. Toward this, we propose a simple yet efficient deep network architecture, called relational reasoning network (RRN), to accurately learn the local and global relations among the landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones. Approach The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats the landmarking process as a data-imputation problem in which the predicted landmarks are considered missing. Results We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With a fourfold cross-validation technique, we obtained an average root mean squared error of < 2 mm per landmark. Our proposed RRN has revealed unique relationships among the landmarks that help infer the informativeness of the landmark points. The proposed system identifies missing landmark locations accurately even when severe pathology or deformations are present in the bones. Conclusions Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) could easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning.
Affiliation(s)
- Syed Anwar
- University of Central Florida, Orlando, Florida, United States
- Children’s National Hospital, Sheikh Zayed Institute, Washington, District of Columbia, United States
- George Washington University, Washington, District of Columbia, United States
- Payal Verma
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Denise K. Liberton
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Janice S. Lee
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Wade W. Han
- Boston Children’s Hospital, Harvard Medical School, Department of Otolaryngology - Head and Neck Surgery, Boston, Massachusetts, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
- Ulas Bagci
- University of Central Florida, Orlando, Florida, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
- Northwestern University, Departments of Radiology, BME, and ECE, Machine and Hybrid Intelligence Lab, Chicago, Illinois, United States
12
Ahn J, Nguyen TP, Kim YJ, Kim T, Yoon J. Automated analysis of three-dimensional CBCT images taken in natural head position that combines facial profile processing and multiple deep-learning models. Comput Methods Programs Biomed 2022; 226:107123. PMID: 36156440; DOI: 10.1016/j.cmpb.2022.107123.
Abstract
BACKGROUND AND OBJECTIVES Analyzing three-dimensional cone beam computed tomography (CBCT) images has become an indispensable procedure for the diagnosis and treatment planning of orthodontic patients. Artificial intelligence, especially deep-learning techniques for analyzing image data, shows great potential for medical and dental image analysis and diagnosis. To explore the feasibility of automating the measurement of 13 geometric parameters from three-dimensional CBCT images taken in natural head position (NHP), this study proposed a smart system that combines a facial profile analysis algorithm with deep-learning models. MATERIALS AND METHODS Using multiple views extracted from the CBCT data of 170 cases as a dataset, the proposed method automatically calculated 13 dental parameters by partitioning, detecting regions of interest, and extracting the facial profile. Subsequently, Mask-RCNN, a trained decentralized convolutional neural network, was applied to detect 23 landmarks. All the techniques were integrated into a software application with a graphical user interface designed for user convenience. To demonstrate the system's ability to replace human experts, 30 CBCT datasets were selected for validation. Two orthodontists and one advanced general dentist located the required landmarks using a commercial dental program. The differences between the manual and developed methods were calculated and reported as errors. RESULTS The intraclass correlation coefficients (ICCs) and 95% confidence intervals (95% CIs) for intra-observer reliability, after measuring the 13 parameters twice at a two-week interval, were 0.98 (0.97-0.99) for observer 1, 0.95 (0.93-0.97) for observer 2, and 0.98 (0.97-0.99) for observer 3. The combined ICC for intra-observer reliability was 0.97. The ICC and 95% CI for inter-observer reliability were 0.94 (0.91-0.97). The mean absolute deviation was around 1 mm for the length parameters and smaller than 2° for the angle parameters. Furthermore, an ANOVA test statistically demonstrated the consistency between the measurements of the proposed method and those of the human experts (Fdis = 2.68, α = 0.05). CONCLUSIONS The proposed system demonstrated high consistency with the manual measurements of human experts, supporting its applicability. This method aims to help human experts save time and effort when analyzing three-dimensional CBCT images of orthodontic patients.
Affiliation(s)
- Janghoon Ahn
- Department of Orthodontics, Kangnam Sacred Heart Hospital, Hallym University, Singil-ro 1-gil, Yeongdeungpo-gu, Seoul 07441, Republic of Korea
- Thong Phi Nguyen
- Department of Mechanical Design Engineering / Major in Materials, Devices, and Equipment, Hanyang University, 222, Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea; BK21 FOUR ERICA-ACE Centre, Hanyang University, Ansan-si, Gyeonggi-do 15588, Republic of Korea
- Yoon-Ji Kim
- Department of Orthodontics, Asan Medical Centre, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea
- Taeyong Kim
- Department of Advanced General Dentistry, Kangnam Sacred Heart Hospital, Hallym University, Singil-ro 1-gil, Yeongdeungpo-gu, Seoul 07441, Republic of Korea
- Jonghun Yoon
- Department of Mechanical Engineering, Hanyang University, 55, Hanyangdaehak-ro, Sangnok-gu, Ansan-si, Gyeonggi-do 15588, Republic of Korea; BK21 FOUR ERICA-ACE Centre, Hanyang University, Ansan-si, Gyeonggi-do 15588, Republic of Korea
13
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac840f.
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operations and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, MICCAI, etc. Among them, 65 references concern automatic segmentation, 15 automatic landmark detection, and eight automatic registration. In this review, an overview of deep learning in MIC is first presented. The application of deep learning methods is then systematically summarized according to clinical needs and generalized into segmentation, landmark detection and registration of head and neck medical images. Segmentation mainly focuses on the automatic segmentation of high-risk organs, head and neck tumors, skull structure and teeth, including an analysis of their advantages, differences and shortcomings. Landmark detection focuses mainly on cephalometric and craniomaxillofacial images, with an analysis of the advantages and disadvantages of each approach. For registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers or doctors engaged in medical image analysis of head and neck surgery.
14
Chen R, Ma Y, Chen N, Liu L, Cui Z, Lin Y, Wang W. Structure-Aware Long Short-Term Memory Network for 3D Cephalometric Landmark Detection. IEEE Trans Med Imaging 2022; 41:1791-1801. PMID: 35130151; DOI: 10.1109/tmi.2022.3149281.
Abstract
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial to assessing and quantifying anatomical abnormalities in 3D cephalometric analysis. However, current methods are time-consuming and suffer from large biases in landmark localization, leading to unreliable diagnostic results. In this work, we propose a novel Structure-Aware Long Short-Term Memory framework (SA-LSTM) for efficient and accurate 3D landmark detection. To reduce the computational burden, SA-LSTM is designed in two stages. It first locates coarse landmarks via heatmap regression on a down-sampled CBCT volume and then progressively refines the landmarks by attentive offset regression using multi-resolution cropped patches. To boost accuracy, SA-LSTM captures global-local dependence among the cropped patches via self-attention. Specifically, a novel graph attention module implicitly encodes the landmarks' global structure to rationalize the predicted positions. Moreover, a novel attention-gated module recursively filters irrelevant local features and maintains high-confidence local predictions for aggregating the final result. Experiments conducted on an in-house dataset and a public dataset show that our method outperforms state-of-the-art methods, achieving average errors of 1.64 mm and 2.37 mm, respectively. Furthermore, our method is very efficient, taking only 0.5 seconds to infer a whole CBCT volume of resolution 768 × 768 × 576.
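The two-stage, coarse-to-fine heatmap strategy described in this entry (and shared by several others in this list) can be sketched in a few lines of numpy: argmax on a downsampled heatmap gives a coarse voxel, and a second argmax on a full-resolution patch around it refines the estimate. The Gaussian rendering, the 4x downsampling factor, and the patch size below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=1.5):
    """Render a 3D Gaussian heatmap centred on a landmark (voxel coords)."""
    grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    sq = sum((g - c) ** 2 for g, c in zip(grids, center))
    return np.exp(-sq / (2.0 * sigma ** 2))

def coarse_landmark(heatmap):
    """Coarse stage: argmax of the heatmap regressed on the downsampled volume."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def refine_landmark(full_hm, coarse_xyz, scale=4, patch=8):
    """Fine stage: re-run argmax on a patch cropped from the full-resolution
    heatmap around the upsampled coarse prediction."""
    center = [c * scale for c in coarse_xyz]
    lo = [max(0, c - patch) for c in center]
    hi = [min(s, c + patch + 1) for c, s in zip(center, full_hm.shape)]
    sub = full_hm[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    off = np.unravel_index(np.argmax(sub), sub.shape)
    return tuple(l + o for l, o in zip(lo, off))

# Toy volume: true landmark at voxel (37, 21, 44) in a 64^3 volume
true_pt = (37, 21, 44)
full = gaussian_heatmap((64, 64, 64), true_pt)
down = full[::4, ::4, ::4]          # 4x downsampled coarse volume
coarse = coarse_landmark(down)      # coarse voxel, error up to the stride
fine = refine_landmark(full, coarse)  # refined to full resolution
```

In the real method the two heatmaps come from network predictions rather than analytic Gaussians, but the coordinate bookkeeping is the same.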
15
Chen X, Lian C, Deng HH, Kuang T, Lin HY, Xiao D, Gateno J, Shen D, Xia JJ, Yap PT. Fast and Accurate Craniomaxillofacial Landmark Detection via 3D Faster R-CNN. IEEE Trans Med Imaging 2021; 40:3867-3878. PMID: 34310293; PMCID: PMC8686670; DOI: 10.1109/tmi.2021.3099509.
Abstract
Automatic craniomaxillofacial (CMF) landmark localization from cone-beam computed tomography (CBCT) images is challenging, considering that 1) the number of landmarks in the images may change due to varying deformities and traumatic defects, and 2) the CBCT images used in clinical practice are typically large. In this paper, we propose a two-stage, coarse-to-fine deep learning method to tackle these challenges with both speed and accuracy in mind. Specifically, we first use a 3D faster R-CNN to roughly locate landmarks in down-sampled CBCT images that have varying numbers of landmarks. By converting the landmark point detection problem to a generic object detection problem, our 3D faster R-CNN is formulated to detect virtual, fixed-size objects in small boxes with centers indicating the approximate locations of the landmarks. Based on the rough landmark locations, we then crop 3D patches from the high-resolution images and send them to a multi-scale UNet for the regression of heatmaps, from which the refined landmark locations are finally derived. We evaluated the proposed approach by detecting up to 18 landmarks on a real clinical dataset of CMF CBCT images with various conditions. Experiments show that our approach achieves state-of-the-art accuracy of 0.89 ± 0.64 mm in an average time of 26.2 seconds per volume.
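The point-to-object reformulation in this entry amounts to wrapping each landmark in a fixed-size "virtual object" box for a generic detector, then reading detected box centres back as landmark estimates. A minimal sketch, where the 6-voxel box size is an arbitrary assumption:

```python
import numpy as np

def landmarks_to_boxes(points, box_size=6.0):
    """Wrap each 3D landmark point in a fixed-size axis-aligned box
    (z0, y0, x0, z1, y1, x1) so a generic object detector can localise it."""
    pts = np.asarray(points, dtype=float)
    half = box_size / 2.0
    return np.hstack([pts - half, pts + half])   # shape (N, 6)

def boxes_to_landmarks(boxes):
    """Recover approximate landmark positions as the centres of detected boxes."""
    boxes = np.asarray(boxes, dtype=float)
    return (boxes[:, :3] + boxes[:, 3:]) / 2.0

pts = np.array([[37.0, 21.0, 44.0], [10.0, 55.0, 30.0]])
boxes = landmarks_to_boxes(pts)       # detector targets
recovered = boxes_to_landmarks(boxes)  # box centres map back to the points
```

The round trip is lossless by construction, which is why the detector's box-centre error translates directly into landmark error.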
16
He T, Yao J, Tian W, Yi Z, Tang W, Guo J. Cephalometric landmark detection by considering translational invariance in the two-stage framework. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.08.042.
17
Tian S, Wang M, Dai N, Ma H, Li L, Fiorenza L, Sun Y, Li Y. DCPR-GAN: Dental Crown Prosthesis Restoration Using Two-stage Generative Adversarial Networks. IEEE J Biomed Health Inform 2021; 26:151-160. PMID: 34637385; DOI: 10.1109/jbhi.2021.3119394.
Abstract
Restoring the correct masticatory function of broken teeth is the basis of dental crown prosthesis rehabilitation. However, it is a challenging task, primarily due to the complex and personalized morphology of the occlusal surface. In this article, we address this problem by designing a new two-stage generative adversarial network (GAN) to reconstruct a dental crown surface from a data-driven perspective. Specifically, in the first stage, a conditional GAN (CGAN) is designed to learn the inherent relationship between the defective tooth and the target crown, which solves the problem of occlusal relationship restoration. In the second stage, an improved CGAN is further devised by considering an occlusal groove parsing network (GroNet) and an occlusal fingerprint constraint to enforce the generator to enrich the functional characteristics of the occlusal surface. Experimental results demonstrate that the proposed framework significantly outperforms state-of-the-art deep learning methods in functional occlusal surface reconstruction using a real-world patient database. Moreover, the standard deviation (SD) and root mean square (RMS) between the generated occlusal surface and the target crown calculated by our method are both less than 0.161 mm. Importantly, the designed dental crown exhibits adequate anatomical morphology and high clinical applicability.
18
Qiu Q, Yang Z, Wu S, Qian D, Wei J, Gong G, Wang L, Yin Y. Automatic segmentation of hippocampus in hippocampal sparing whole brain radiotherapy: A multitask edge-aware learning. Med Phys 2021; 48:1771-1780. PMID: 33555048; DOI: 10.1002/mp.14760.
Abstract
PURPOSE This study aimed to improve the accuracy of hippocampus segmentation through multitask edge-aware learning. METHODS We developed a multitask framework for computerized hippocampus segmentation. We used three-dimensional (3D) U-net as our backbone model with two training objectives: (a) to minimize the difference between the targeted binary mask and the model prediction; and (b) to optimize an auxiliary edge-prediction task, designed to guide the model in detecting the weak boundary of the hippocampus during optimization. To balance the multiple task objectives, we proposed an improved gradient normalization that adaptively adjusts the weight of the losses from different tasks. A total of 247 T1-weighted MRIs, including 131 without contrast and 116 with contrast, were collected from 247 patients to train and validate the proposed method. Segmentation was quantitatively evaluated with the Dice coefficient (Dice), Hausdorff distance (HD), and average Hausdorff distance (AVD). The 3D U-net was used for baseline comparison. We used a Wilcoxon signed-rank test to compare repeated measurements (Dice, HD, and AVD) by the different segmentations. RESULTS Through fivefold cross-validation, our multitask edge-aware learning achieved a Dice of 0.8483 ± 0.0036, an HD of 7.5706 ± 1.2330 mm, and an AVD of 0.1522 ± 0.0165 mm. By comparison, the baseline results were 0.8340 ± 0.0072, 10.4631 ± 2.3736 mm, and 0.1884 ± 0.0286 mm, respectively. With a Wilcoxon signed-rank test, we found that the differences between our method and the baseline were statistically significant (P < 0.05). CONCLUSION Our results demonstrated the efficiency of multitask edge-aware learning in hippocampus segmentation for hippocampal sparing whole-brain radiotherapy. The proposed framework may also be useful for other low-contrast small-organ segmentations on medical imaging modalities.
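The adaptive task weighting used above can be illustrated with a toy inverse-training-rate scheme in the spirit of gradient-normalization methods. This is a sketch, not the authors' exact formulation (their method also balances gradient magnitudes), and the alpha parameter is an assumption.

```python
import numpy as np

def inverse_rate_weights(loss_history, alpha=1.0):
    """Toy multitask loss balancing: tasks whose loss has decayed less
    (higher current/initial loss ratio) receive larger weights, so slower
    tasks get more of the optimization budget. Weights are renormalized
    to sum to the number of tasks."""
    hist = np.asarray(loss_history, dtype=float)  # shape (n_tasks, n_steps)
    ratio = hist[:, -1] / hist[:, 0]              # inverse training rate per task
    w = ratio ** alpha                            # alpha controls rebalancing strength
    return w * len(w) / w.sum()

# Two tasks: the segmentation loss fell quickly, the edge loss is lagging
weights = inverse_rate_weights([[1.0, 0.2],   # mask-prediction task
                                [1.0, 0.8]])  # edge-prediction task
```

Here the lagging edge task ends up with the larger weight, which is the qualitative behaviour such schemes are designed to produce.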
Affiliation(s)
- Qingtao Qiu
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Ji'nan, P.R. China
- Ziduo Yang
- Perception Vision Medical Technologies Co. Ltd., Guangzhou, Guangdong, P.R. China
- Shuyu Wu
- Perception Vision Medical Technologies Co. Ltd., Guangzhou, Guangdong, P.R. China
- Dongdong Qian
- Perception Vision Medical Technologies Co. Ltd., Guangzhou, Guangdong, P.R. China
- Jun Wei
- Perception Vision Medical Technologies Co. Ltd., Guangzhou, Guangdong, P.R. China
- Guanzhong Gong
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Ji'nan, P.R. China
- Lizhen Wang
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Ji'nan, P.R. China
- Yong Yin
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Ji'nan, P.R. China
19
Xiao D, Lian C, Wang L, Deng H, Lin HY, Thung KH, Zhu J, Yuan P, Perez L, Gateno J, Shen SG, Yap PT, Xia JJ, Shen D. Estimating Reference Shape Model for Personalized Surgical Reconstruction of Craniomaxillofacial Defects. IEEE Trans Biomed Eng 2021; 68:362-373. PMID: 32340932; PMCID: PMC8163108; DOI: 10.1109/tbme.2020.2990586.
Abstract
OBJECTIVE To estimate a patient-specific reference bone shape model for a patient with craniomaxillofacial (CMF) defects due to facial trauma. METHODS We proposed an automatic facial bone shape estimation framework using pre-traumatic conventional portrait photos and post-traumatic head computed tomography (CT) scans via 3D face reconstruction and a deformable shape model. Specifically, a three-dimensional (3D) face was first reconstructed from the patient's pre-traumatic portrait photos. Second, a correlation model between the skin and bone surfaces was constructed using a sparse representation based on the CT images of normal training subjects. Third, by feeding the reconstructed 3D face into the correlation model, an initial reference shape model was generated. We then refined the initial estimate by applying non-rigid surface matching between the initially estimated shape and the patient's post-traumatic bone based on the adaptive-focus deformable shape model (AFDSM). Furthermore, a statistical shape model, built from the normal training subjects, was utilized to constrain the deformation process and avoid overfitting. RESULTS AND CONCLUSION The proposed method was evaluated using both synthetic and real patient data. Experimental results show that the patient's abnormal facial bony structure can be recovered using our method, and the estimated reference shape model was considered clinically acceptable by an experienced CMF surgeon. SIGNIFICANCE The proposed method is well suited to complex CMF defects in CMF reconstructive surgical planning.
Affiliation(s)
- Deqiang Xiao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Hannah Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Hung-Ying Lin
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Oral and Maxillofacial Surgery, National Taiwan University Hospital, Taipei, ROC
- Kim-Han Thung
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jihua Zhu
- School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
- Peng Yuan
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Leonel Perez
- Oral and Maxillofacial Surgery at Walter Reed National Military Medical Center, Bethesda, MD, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY, USA
- Steve Guofang Shen
- Oral and Craniomaxillofacial Surgery at Shanghai Ninth Hospital, Shanghai Jiaotong University College of Medicine, Shanghai, China
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- James J. Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist, Houston, TX 77030
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27514, USA
20
Dot G, Rafflenbeul F, Arbotto M, Gajny L, Rouch P, Schouman T. Accuracy and reliability of automatic three-dimensional cephalometric landmarking. Int J Oral Maxillofac Surg 2020; 49:1367-1378. DOI: 10.1016/j.ijom.2020.02.015.
21
Lian C, Wang F, Deng HH, Wang L, Xiao D, Kuang T, Lin HY, Gateno J, Shen SGF, Yap PT, Xia JJ, Shen D. Multi-task Dynamic Transformer Network for Concurrent Bone Segmentation and Large-Scale Landmark Localization with Dental CBCT. Med Image Comput Comput Assist Interv 2020; 12264:807-816. PMID: 34935006; PMCID: PMC8687703; DOI: 10.1007/978-3-030-59719-1_78.
Abstract
Accurate bone segmentation and anatomical landmark localization are essential tasks in computer-aided surgical simulation for patients with craniomaxillofacial (CMF) deformities. To leverage the complementarity between the two tasks, we propose an efficient end-to-end deep network, i.e., a multi-task dynamic transformer network (DTNet), to concurrently segment CMF bones and localize large-scale landmarks in one pass from large volumes of cone-beam computed tomography (CBCT) data. Our DTNet was evaluated quantitatively using CBCTs of patients with CMF deformities. The results demonstrated that our method outperforms other state-of-the-art methods in both bone segmentation and landmark digitization. Our DTNet features three main technical contributions. First, a collaborative two-branch architecture is designed to efficiently capture both fine-grained image details and complete global context for high-resolution volume-to-volume prediction. Second, leveraging anatomical dependencies between landmarks, regionalized dynamic learners (RDLs) are designed in the spirit of "learning to learn" to jointly regress large-scale 3D heatmaps of all landmarks at limited computational cost. Third, adaptive transformer modules (ATMs) are designed for the flexible learning of task-specific feature embeddings from common feature bases.
Affiliation(s)
- Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Fan Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Deqiang Xiao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Hung-Ying Lin
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, USA
- Steve G F Shen
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University, Shanghai, China
- Shanghai University of Medicine and Health Science, Shanghai, China
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
22
Ma Q, Kobayashi E, Fan B, Nakagawa K, Sakuma I, Masamune K, Suenaga H. Automatic 3D landmarking model using patch-based deep neural networks for CT image of oral and maxillofacial surgery. Int J Med Robot 2020; 16:e2093. PMID: 32065718; DOI: 10.1002/rcs.2093.
Abstract
BACKGROUND Manual landmarking is time-consuming work that requires a high level of expertise. Although some algorithm-based landmarking methods have been proposed, they lack flexibility and may be susceptible to data diversity. METHODS CT images from 66 patients who underwent oral and maxillofacial surgery (OMS) were landmarked manually in MIMICS. The CT slices were then exported as images to recreate the 3D volume. The coordinate data of the landmarks were further processed in Matlab using a principal component analysis (PCA) method. A patch-based deep neural network model with a three-layer convolutional neural network (CNN) was trained to obtain landmarks from the CT images. RESULTS The evaluation experiment showed that this CNN model could automatically complete landmarking in an average processing time of 37.871 seconds with an average accuracy of 5.785 mm. CONCLUSION This study shows promising potential to relieve the workload of the surgeon and to reduce the dependence on human experience in OMS landmarking.
Affiliation(s)
- Qingchuan Ma
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Etsuko Kobayashi
- Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Bowen Fan
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Keiichi Nakagawa
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ken Masamune
- Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Hideyuki Suenaga
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
| |
23
Chen S, Wang L, Li G, Wu TH, Diachina S, Tejera B, Kwon JJ, Lin FC, Lee YT, Xu T, Shen D, Ko CC. Machine learning in orthodontics: Introducing a 3D auto-segmentation and auto-landmark finder of CBCT images to assess maxillary constriction in unilateral impacted canine patients. Angle Orthod 2020; 90:77-84. [PMID: 31403836 PMCID: PMC8087054 DOI: 10.2319/012919-59.1] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2019] [Accepted: 05/01/2019] [Indexed: 12/21/2022] Open
Abstract
OBJECTIVES To (1) introduce a novel machine learning method and (2) assess maxillary structure variation in unilateral canine impaction to advance clinically useful information. MATERIALS AND METHODS A machine learning algorithm utilizing the Learning-based multi-source IntegratioN frameworK for Segmentation (LINKS) was used with cone-beam computed tomography (CBCT) images to quantify volumetric skeletal maxilla discrepancies of 30 study group (SG) patients with unilaterally impacted maxillary canines and 30 healthy control group (CG) subjects. Fully automatic segmentation was implemented for maxilla isolation, and maxillary volumetric and linear measurements were performed. Analysis of variance was used for statistical evaluation. RESULTS Maxillary structure was successfully auto-segmented, with an average dice ratio of 0.80 for three-dimensional image segmentations and a minimal mean difference of two voxels on the midsagittal plane for digitized landmarks between the manually identified and the machine learning-based (LINKS) methods. No significant difference in bone volume was found between the impaction ((2.37 ± 0.34) × 10⁴ mm³) and nonimpaction ((2.36 ± 0.35) × 10⁴ mm³) sides of the SG. The SG maxillae had significantly smaller volumes, widths, heights, and depths (P < .05) than the CG. CONCLUSIONS The data suggest that palatal expansion could be beneficial for those with unilateral canine impaction, as underdevelopment of the maxilla often accompanies that condition in the early teen years. Fast and efficient CBCT image segmentation will allow large clinical data sets to be analyzed effectively.
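The "average dice ratio of 0.80" quoted for the auto-segmentation is the standard Dice similarity coefficient. For reference, it can be computed from two binary masks as follows (a generic textbook implementation, not the LINKS code):

```python
def dice_ratio(mask_a, mask_b):
    """Dice similarity coefficient between two segmentations, each given as
    a set of voxel indices: 2|A∩B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A Dice ratio of 1.0 means perfect overlap with the manual segmentation; 0.0 means no overlap at all.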
24
Abstract
In this paper, we introduce a method for estimating patient-specific reference bony shape models for planning of reconstructive surgery for patients with acquired craniomaxillofacial (CMF) trauma. We propose an automatic bony shape estimation framework using pre-traumatic portrait photographs and post-traumatic head computed tomography (CT) scans. A 3D facial surface is first reconstructed from the patient's pre-traumatic photographs. An initial estimation of the patient's normal bony shape is then obtained with the reconstructed facial surface via sparse representation using a dictionary of paired facial and bony surfaces of normal subjects. We further refine the bony shape model by deforming the initial bony shape model to the post-traumatic 3D CT bony model, regularized by a statistical shape model built from a database of normal subjects. Experimental results show that our method is capable of effectively recovering the patient's normal facial bony shape in regions with defects, allowing CMF surgical planning to be performed precisely for a wider range of defects caused by trauma.
25
Zhang J, Liu M, Wang L, Chen S, Yuan P, Li J, Shen SGF, Tang Z, Chen KC, Xia JJ, Shen D. Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization. Med Image Anal 2019; 60:101621. [PMID: 31816592 DOI: 10.1016/j.media.2019.101621] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2018] [Revised: 07/01/2019] [Accepted: 11/19/2019] [Indexed: 12/24/2022]
Abstract
Cone-beam computed tomography (CBCT) scans are commonly used in diagnosing and planning surgical or orthodontic treatment to correct craniomaxillofacial (CMF) deformities. Based on CBCT images, it is clinically essential to generate an accurate 3D model of CMF structures (e.g., midface and mandible) and digitize anatomical landmarks. This process often involves two tasks, i.e., bone segmentation and anatomical landmark digitization. Because landmarks usually lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly associated. Also, the spatial context information (e.g., displacements from voxels to landmarks) in CBCT images is intuitively important for accurately indicating the spatial association between voxels and landmarks. However, most existing studies simply treat bone segmentation and landmark digitization as two standalone tasks without considering their inherent relationship, and rarely take advantage of the spatial context information contained in CBCT images. To address these issues, we propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in the displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. We validate the proposed JSD method on 107 subjects, and the experimental results demonstrate that our method is superior to the state-of-the-art approaches in both tasks of bone segmentation and landmark digitization.
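The displacement maps at the heart of the JSD framework store, for every voxel, the vector pointing from that voxel to a given landmark. A toy dense version on a small grid (pure Python, illustrative only; at training time such ground-truth maps are built from annotated landmarks, and the FCN learns to predict them from image intensities alone):

```python
def displacement_map(shape, landmark):
    """For each voxel (x, y, z) of a 3D grid of the given shape, the
    displacement vector (lx - x, ly - y, lz - z) to the landmark (lx, ly, lz)."""
    lx, ly, lz = landmark
    return [[[(lx - x, ly - y, lz - z)
              for z in range(shape[2])]
             for y in range(shape[1])]
            for x in range(shape[0])]
```

The map is zero exactly at the landmark voxel, which is why a predicted displacement field can be read back out as a landmark location.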
Affiliation(s)
- Jun Zhang: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Mingxia Liu: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Li Wang: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Si Chen: Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100191, China
- Peng Yuan: Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Jianfu Li: Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Steve Guo-Fang Shen: Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Zhen Tang: Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Ken-Chung Chen: Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- James J Xia: Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
26

27
Torosdagli N, Liberton DK, Verma P, Sincan M, Lee JS, Bagci U. Deep Geodesic Learning for Segmentation and Anatomical Landmarking. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:919-931. [PMID: 30334750 PMCID: PMC6475529 DOI: 10.1109/tmi.2018.2875814] [Citation(s) in RCA: 72] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
In this paper, we propose a novel deep learning framework for anatomy segmentation and automatic landmarking. Specifically, we focus on the challenging problem of mandible segmentation from cone-beam computed tomography (CBCT) scans and identification of 9 anatomical landmarks of the mandible on the geodesic space. The overall approach employs three inter-related steps. In the first step, we propose a deep neural network architecture with carefully designed regularization and network hyper-parameters to perform image segmentation without the need for data augmentation and complex post-processing refinement. In the second step, we formulate the landmark localization problem directly on the geodesic space for sparsely spaced anatomical landmarks. In the third step, we utilize a long short-term memory network to identify the closely spaced landmarks, which are rather difficult to obtain using other standard networks. The proposed fully automated method showed superior efficacy compared with state-of-the-art mandible segmentation and landmarking approaches in craniofacial anomalies and diseased states. We used a very challenging CBCT data set of 50 patients with a high degree of craniomaxillofacial variability that is realistic in clinical practice. Qualitative visual inspection was conducted for distinct CBCT scans from 250 patients with high anatomical variability. We also show state-of-the-art performance on an independent data set from the MICCAI Head-Neck Challenge (2015).
28
Pei Y, Yi Y, Ma G, Kim TK, Guo Y, Xu T, Zha H. Spatially Consistent Supervoxel Correspondences of Cone-Beam Computed Tomography Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2310-2321. [PMID: 29993683 DOI: 10.1109/tmi.2018.2829629] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Establishing dense correspondences of cone-beam computed tomography (CBCT) images is a crucial step for the attribute transfer and morphological variation assessment in clinical orthodontics. In this paper, a novel method, unsupervised spatially consistent clustering forest, is proposed to tackle the challenges of automatic supervoxel-wise correspondences of CBCT images. A complexity analysis of the proposed method with respect to the clustering hypotheses is provided with a data-dependent learning guarantee. The learning bound considers both the sequential tree traversals determined by questions stored in branch nodes and the clustering compactness of leaf nodes. A novel tree-pruning algorithm, guided by the learning bound, is also proposed to remove locally inconsistent leaf nodes. The resulting forest yields spatially consistent affinity estimations, thanks to the pruning, which penalizes trees with inconsistent leaf assignments, and to the combinational contextual feature channels used to learn the forest. A forest-based metric is utilized to derive the pairwise affinities and dense correspondences of CBCT images. The proposed method has been applied to the label propagation of clinically captured CBCT images. In the experiments, the method outperforms variants of both supervised and unsupervised forest-based methods and state-of-the-art label-propagation methods, achieving mean dice similarity coefficients of 0.92, 0.89, 0.94, and 0.93 for the mandible, the maxilla, the zygoma arch, and the teeth data, respectively.
29
Wang Z, Wei L, Wang L, Gao Y, Chen W, Shen D. Hierarchical Vertex Regression-Based Segmentation of Head and Neck CT Images for Radiotherapy Planning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:923-937. [PMID: 29757737 PMCID: PMC5954838 DOI: 10.1109/tip.2017.2768621] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Segmenting organs at risk from head and neck CT images is a prerequisite for the treatment of head and neck cancer using intensity modulated radiotherapy. However, accurate and automatic segmentation of organs at risk is a challenging task due to the low contrast of soft tissue and image artifacts in CT images. Shape priors have been proven effective in addressing this challenging task. However, conventional methods incorporating shape priors often suffer from sensitivity to shape initialization and to shape variations across individuals. In this paper, we propose a novel approach to incorporate shape priors into a hierarchical learning-based model. The contributions of our proposed approach are as follows: 1) a novel mechanism for critical vertices identification is proposed to identify vertices with distinctive appearances and strong consistency across different subjects; 2) a new strategy of hierarchical vertex regression is also used to gradually locate more vertices with the guidance of previously located vertices; and 3) an innovative framework of joint shape and appearance learning is further developed to capture salient shape and appearance features simultaneously. Using these innovative strategies, our proposed approach can essentially overcome drawbacks of the conventional shape-based segmentation methods. Experimental results show that our approach can achieve much better results than state-of-the-art methods.
30
Zhang J, Liu M, Shen D. Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:4753-4764. [PMID: 28678706 PMCID: PMC5729285 DOI: 10.1109/tip.2017.2721106] [Citation(s) in RCA: 73] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network learning. To address this problem, we present a two-stage task-oriented deep learning method to detect large-scale anatomical landmarks simultaneously in real time, using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. To alleviate the problem of limited training data, in the first stage, we propose a CNN-based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage, we develop another CNN model, which includes (a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage and (b) several extra layers to jointly predict the coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale (e.g., thousands of) landmarks in real time. We have conducted various experiments for detecting 1200 brain landmarks from the 3D T1-weighted magnetic resonance images of 700 subjects, and also 7 prostate landmarks from the 3D computed tomography images of 73 subjects. The experimental results show the effectiveness of our method regarding both accuracy and efficiency in anatomical landmark detection.
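Detectors like this one are typically evaluated with the successful detection rate (SDR): the fraction of landmarks localized within a given distance of the reference annotation. A minimal sketch (illustrative; the threshold and spacing arguments are assumptions, not values from this paper):

```python
import math

def detection_rate(preds, truths, threshold_mm, spacing=(1.0, 1.0, 1.0)):
    """Fraction of predicted landmarks within `threshold_mm` (Euclidean
    distance, voxel coordinates scaled by `spacing`) of their references."""
    hits = 0
    for p, t in zip(preds, truths):
        d = math.sqrt(sum(((a - b) * s) ** 2
                          for a, b, s in zip(p, t, spacing)))
        if d <= threshold_mm:
            hits += 1
    return hits / len(preds)
```

Reporting the SDR at several thresholds (e.g., 2, 3, and 4 mm) characterizes the whole error distribution rather than just its mean.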
31
Joint Craniomaxillofacial Bone Segmentation and Landmark Digitization by Context-Guided Fully Convolutional Networks. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2017; 10434:720-728. [PMID: 29376150 DOI: 10.1007/978-3-319-66185-8_81] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
Generating accurate 3D models from cone-beam computed tomography (CBCT) images is an important step in developing treatment plans for patients with craniomaxillofacial (CMF) deformities. This process often involves bone segmentation and landmark digitization. Since anatomical landmarks generally lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly correlated. However, most existing methods simply treat them as two standalone tasks, without considering their inherent association. In addition, these methods usually ignore the spatial context information (i.e., displacements from voxels to landmarks) in CBCT images. To this end, we propose a context-guided fully convolutional network (FCN) for joint bone segmentation and landmark digitization. Specifically, we first train an FCN to learn the displacement maps to capture the spatial context information in CBCT images. Using the learned displacement maps as guidance information, we further develop a multi-task FCN to jointly perform bone segmentation and landmark digitization. Our method has been evaluated on 107 subjects from two centers, and the experimental results show that our method is superior to the state-of-the-art methods in both bone segmentation and landmark digitization.
32
Zhang J, Liu M, Gao Y, Shen D. Alzheimer's Disease Diagnosis Using Landmark-Based Features From Longitudinal Structural MR Images. IEEE J Biomed Health Inform 2017; 21:1607-1616. [PMID: 28534798 DOI: 10.1109/jbhi.2017.2704614] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Structural magnetic resonance imaging (MRI) has been proven to be an effective tool for Alzheimer's disease (AD) diagnosis. While conventional MRI-based AD diagnosis typically uses images acquired at a single time point, a longitudinal study is more sensitive in detecting early pathological changes of AD, making it more favorable for accurate diagnosis. In general, two challenges are faced in MRI-based diagnosis. First, extracting features from structural MR images requires time-consuming nonlinear registration and tissue segmentation, and a longitudinal study, which involves more scans, further increases the computational cost. Second, inconsistency among longitudinal scans (i.e., different scanning time points and different total numbers of scans) hinders the extraction of unified feature representations. In this paper, we propose a landmark-based feature extraction method for AD diagnosis using longitudinal structural MR images, which does not require nonlinear registration or tissue segmentation in the application stage and is also robust to inconsistencies among longitudinal scans. Specifically, first, discriminative landmarks are automatically discovered from the whole brain using training images and then efficiently localized using a fast landmark detection method for testing images, without any nonlinear registration or tissue segmentation; second, high-level statistical spatial features and contextual longitudinal features are extracted based on those detected landmarks, characterizing spatial structural abnormalities and longitudinal landmark variations. Using these spatial and longitudinal features, a linear support vector machine is finally adopted to distinguish AD subjects or mild cognitive impairment (MCI) subjects from healthy controls (HCs). Experimental results on the Alzheimer's Disease Neuroimaging Initiative database demonstrate the superior performance and efficiency of the proposed method, with classification accuracies of 88.30% for AD versus HC and 79.02% for MCI versus HC.
33
Lian C, Ruan S, Denoux T, Li H, Vera P. Spatial Evidential Clustering With Adaptive Distance Metric for Tumor Segmentation in FDG-PET Images. IEEE Trans Biomed Eng 2017; 65:21-30. [PMID: 28371772 DOI: 10.1109/tbme.2017.2688453] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
While the accurate delineation of tumor volumes in FDG-positron emission tomography (PET) is a vital task for diverse objectives in clinical oncology, noise and blur due to the imaging system make it a challenging one. In this paper, we propose to address the imprecision and noise inherent in PET using Dempster-Shafer theory, a powerful tool for modeling and reasoning with uncertain and/or imprecise information. Based on Dempster-Shafer theory, a novel evidential clustering algorithm is proposed and tailored to the tumor segmentation task in three dimensions. For accurate clustering of PET voxels, each voxel is described not only by its intensity value but also, complementarily, by textural features extracted from a patch surrounding the voxel. Considering that there is a large number of texture features with no consensus regarding the most informative ones, and that some of the extracted features are unreliable due to the low quality of PET images, a specific procedure is included in the proposed clustering algorithm to adapt the distance metric so as to properly represent the clustering distortions and the similarities between neighboring voxels. This integrated metric adaptation procedure realizes a low-dimensional transformation from the original space and limits the influence of unreliable inputs via feature selection. A Dempster-Shafer-theory-based spatial regularization is also proposed and included in the clustering algorithm to effectively quantify local homogeneity. The proposed method has been compared with other methods on real-patient FDG-PET images, showing good performance.
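The evidence-fusion step underlying any Dempster-Shafer method is Dempster's rule of combination, which merges two mass functions and renormalizes away their conflict. A generic textbook implementation over frozenset focal elements (not the paper's clustering code, where masses describe voxel cluster membership):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: m(A) ∝ Σ m1(B)·m2(C) over all B∩C = A (A nonempty),
    normalized by 1 - K, where K is the total conflicting mass."""
    combined, conflict = {}, 0.0
    for b, mass_b in m1.items():
        for c, mass_c in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_b * mass_c
            else:
                conflict += mass_b * mass_c  # B and C are incompatible
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

Mass assigned to non-singleton sets (e.g., {tumor, background}) is what lets the formalism represent imprecision explicitly, rather than forcing a probability split.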
34
Zhang J, Liu M, An L, Gao Y, Shen D. Landmark-Based Alzheimer's Disease Diagnosis Using Longitudinal Structural MR Images. MEDICAL COMPUTER VISION AND BAYESIAN AND GRAPHICAL MODELS FOR BIOMEDICAL IMAGING : MICCAI 2016 INTERNATIONAL WORKSHOP, MCV AND BAMBI, ATHENS, GREECE, OCTOBER 21, 2016 : REVISED SELECTED PAPERS 2016; 10081:35-45. [PMID: 28936489 PMCID: PMC5603322 DOI: 10.1007/978-3-319-61188-4_4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this paper, we propose a landmark-based feature extraction method for AD diagnosis using longitudinal structural MR images, which requires no nonlinear registration or tissue segmentation in the application stage and is robust to inconsistency among longitudinal scans. Specifically, (1) discriminative landmarks are first automatically discovered from the whole brain and can be efficiently localized in testing images using a fast landmark detection method; (2) high-level statistical spatial features and contextual longitudinal features are then extracted based on those detected landmarks. Using the spatial and longitudinal features, a linear support vector machine (SVM) is adopted to distinguish AD subjects from healthy controls (HCs) and mild cognitive impairment (MCI) subjects from HCs. Experimental results demonstrate competitive classification accuracies, as well as promising computational efficiency.
Affiliation(s)
- Jun Zhang: Department of Radiology and BRIC, UNC at Chapel Hill, Chapel Hill, NC, USA
- Mingxia Liu: Department of Radiology and BRIC, UNC at Chapel Hill, Chapel Hill, NC, USA
- Le An: Department of Radiology and BRIC, UNC at Chapel Hill, Chapel Hill, NC, USA
- Yaozong Gao: Department of Radiology and BRIC, UNC at Chapel Hill, Chapel Hill, NC, USA; Department of Computer Science, UNC at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen: Department of Radiology and BRIC, UNC at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
35
Gao Y, Shao Y, Lian J, Wang AZ, Chen RC, Shen D. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1532-43. [PMID: 26800531 PMCID: PMC4918760 DOI: 10.1109/tmi.2016.2519264] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to the low tissue contrast of CT images, as well as the large variations in shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape priors can be easily incorporated to regularize the segmentation. Nonetheless, sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts the 3D displacement from any image voxel to the target organ boundary based on local patch appearance. This regressor provides a non-local external force for each vertex of the deformable model, thus overcoming the initialization problem suffered by traditional deformable models. To learn a reliable displacement regressor, two strategies are proposed: (1) a multi-task random forest is used to learn the displacement regressor jointly with the organ classifier; (2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification- or regression-based methods, as well as several other existing methods for CT pelvic organ segmentation.
Affiliation(s)
- Yaozong Gao: Department of Computer Science and Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Yeqin Shao: Nantong University, Jiangsu 226019, China; Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Jun Lian: Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Andrew Z. Wang: Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Ronald C. Chen: Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Dinggang Shen: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea