1
Wu X, Sánchez CA, Lloyd JE, Borgard H, Fels S, Paydarfar JA, Halter RJ. Estimating tongue deformation during laryngoscopy using a hybrid FEM-multibody model and intraoperative tracking - a cadaver study. Comput Methods Biomech Biomed Engin 2025; 28:739-749. [PMID: 38193213] [PMCID: PMC11231054] [DOI: 10.1080/10255842.2023.2301672]
Abstract
Throat tumour margin control remains difficult due to the tight, enclosed space of the oral and throat regions and the tissue deformation resulting from placement of retractors and scopes during surgery. Intraoperative imaging can help with better localization but is hindered by non-image-compatible surgical instruments, cost, and unavailability. We propose a novel method of using instrument tracking and FEM-multibody modelling to simulate soft tissue deformation in the intraoperative setting, without requiring intraoperative imaging, to improve surgical guidance accuracy. Our first empirical study, based on four trials of a cadaveric head specimen with full neck anatomy, yields a mean TLE of 10.8 ± 5.5 mm, demonstrating methodological feasibility.
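The TLE reported above is, in essence, the Euclidean distance between where the deformed model predicts a target and where the tracker measures it, averaged over targets. A minimal sketch with hypothetical coordinates (illustrative only, not the study's registration pipeline):

```python
import numpy as np

def mean_tle(predicted: np.ndarray, measured: np.ndarray):
    """Mean and standard deviation of the Euclidean target localization
    error between predicted and tracked positions (same units, e.g. mm)."""
    errors = np.linalg.norm(predicted - measured, axis=1)
    return errors.mean(), errors.std()

# Toy 3D positions (hypothetical, mm): two targets, one off by a 3-4-5 triangle.
pred = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
true = np.array([[3.0, 4.0, 0.0], [10.0, 5.0, 2.0]])
print(mean_tle(pred, true))  # errors are 5.0 and 0.0 -> mean 2.5, std 2.5
```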
Affiliation(s)
- Xiaotian Wu
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- C. Antonio Sánchez
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- John E. Lloyd
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Heather Borgard
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Sidney Fels
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Joseph A. Paydarfar
- Section of Otolaryngology, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
- Ryan J. Halter
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
2
Alahmari M, Alahmari M, Almuaddi A, Abdelmagyd H, Rao K, Hamdoon Z, Alsaegh M, Chaitanya NCSK, Shetty S. Accuracy of artificial intelligence-based segmentation in maxillofacial structures: a systematic review. BMC Oral Health 2025; 25:350. [PMID: 40055718] [PMCID: PMC11887095] [DOI: 10.1186/s12903-025-05730-y]
Abstract
OBJECTIVE The aim of this review was to evaluate the accuracy of artificial intelligence (AI) in the segmentation of teeth, jawbone (maxilla, mandible with temporomandibular joint), and mandibular (inferior alveolar) canal in CBCT and CT scans. MATERIALS AND METHODS Articles were retrieved from MEDLINE, Cochrane CENTRAL, IEEE Xplore, and Google Scholar. Eligible studies were analyzed thematically, and their quality was appraised using the JBI checklist for diagnostic test accuracy studies. Meta-analysis was conducted for key performance metrics, including Dice Similarity Coefficient (DSC) and Average Surface Distance (ASD). RESULTS A total of 767 non-duplicate articles were identified, and 30 studies were included in the review. Of these, 27 employed deep-learning models, while 3 utilized classical machine-learning approaches. The pooled DSC for mandible segmentation was 0.94 (95% CI: 0.91-0.98), mandibular canal segmentation was 0.694 (95% CI: 0.551-0.838), maxilla segmentation was 0.907 (95% CI: 0.867-0.948), and teeth segmentation was 0.925 (95% CI: 0.891-0.959). Pooled ASD values were 0.534 mm (95% CI: 0.366-0.703) for the mandibular canal, 0.468 mm (95% CI: 0.295-0.641) for the maxilla, and 0.189 mm (95% CI: 0.043-0.335) for teeth. Other metrics, such as sensitivity and precision, were variably reported, with sensitivity exceeding 90% across studies. CONCLUSION AI-based segmentation, particularly using deep-learning models, demonstrates high accuracy in the segmentation of dental and maxillofacial structures, comparable to expert manual segmentation. The integration of AI into clinical workflows offers not only accuracy but also substantial time savings, positioning it as a promising tool for automated dental imaging.
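The Dice Similarity Coefficient pooled throughout these results measures voxel overlap between an AI segmentation and a reference mask: twice the intersection over the summed mask sizes. A minimal NumPy sketch on toy 2D masks (illustrative only, not the review's evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient for two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D "segmentations": two 4x4 squares, shifted by one column.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[2:6, 3:7] = True  # 16 voxels, 12 overlap
print(dice_coefficient(a, b))  # 2*12 / (16+16) -> 0.75
```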
Affiliation(s)
- Manea Alahmari
- College of Dentistry, King Khalid University, Abha, Saudi Arabia
- Maram Alahmari
- Armed Forces Hospital Southern Region, Khamis Mushait, Saudi Arabia
- Hossam Abdelmagyd
- College of Dentistry, Suez Canal University, Ajman, United Arab Emirates
- Kumuda Rao
- AB Shetty Memorial Institute of Dental Sciences, Nitte (Deemed to be University), Mangalore, India
- Zaid Hamdoon
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Mohammed Alsaegh
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Nallan C S K Chaitanya
- College of Dental Sciences, RAK Medical and Health Sciences University, Ras-Al-Khaimah, United Arab Emirates
- Shishir Shetty
- College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
3
Chung EJ, Yang BE, Kang SH, Kim YH, Na JY, Park SY, On SW, Byun SH. Validation of 2D lateral cephalometric analysis using artificial intelligence-processed low-dose cone beam computed tomography. Heliyon 2024; 10:e39445. [PMID: 39583802] [PMCID: PMC11584577] [DOI: 10.1016/j.heliyon.2024.e39445]
Abstract
Objectives Traditional cephalometric radiographs depict a three-dimensional structure in a two-dimensional plane; therefore, errors may occur during a quantitative assessment. Cone beam computed tomography, on the other hand, minimizes image distortion, allowing essential areas to be observed without overlap. Artificial intelligence can be used to enhance low-dose cone beam computed tomography images. This study aimed to clinically validate the use of artificial intelligence-processed low-dose cone beam computed tomography for generating two-dimensional lateral cephalometric radiographs by comparing these artificial intelligence-enhanced radiographs with traditional two-dimensional lateral cephalograms and those derived from standard cone beam computed tomography. Methods Sixteen participants who had previously undergone both cone beam computed tomography and plain radiography were selected. Group I included standard lateral cephalometric radiographs. Group II included cone beam computed tomography-produced lateral cephalometric radiographs, and Group III included artificial intelligence-processed low-dose cone beam computed tomography-produced lateral cephalometric radiographs. Lateral cephalometric radiographs of the three groups were analyzed using an artificial intelligence-based cephalometric analysis platform. Results A total of six angles and five lengths were measured for dentofacial diagnosis. There were no significant differences in measurements among the three groups except for nasion-menton. Conclusions Low-dose cone beam computed tomography could be an efficient method for cephalometric analyses in dentofacial treatment. Artificial intelligence-processed low-dose cone beam computed tomography imaging has potential for a wide range of dental applications. Further research is required to develop artificial intelligence technologies capable of producing acceptable and effective outcomes in various clinical situations.
Clinical significance Replacing standard cephalograms with cone beam computed tomography (CBCT) to evaluate the craniofacial relationship has the potential to significantly enhance the diagnosis and treatment of selected patients. The effectiveness of low-dose (LD)-CBCT was assessed in this study. The results indicated that lateral cephalograms reconstructed using LD-CBCT were comparable to standard lateral cephalograms.
Affiliation(s)
- Eun-Ji Chung
- Department of Conservative Dentistry, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Byoung-Eun Yang
- Department of Oral and Maxillofacial Surgery, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Graduate School of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Institute of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Dental AI-Robotics Center, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Sam-Hee Kang
- Department of Conservative Dentistry, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Young-Hee Kim
- Institute of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Department of Oral and Maxillofacial Radiology, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Ji-Yeon Na
- Institute of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Dental AI-Robotics Center, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Department of Oral and Maxillofacial Radiology, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Sang-Yoon Park
- Department of Oral and Maxillofacial Surgery, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Graduate School of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Institute of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Dental AI-Robotics Center, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Sung-Woon On
- Graduate School of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Institute of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Department of Oral and Maxillofacial Surgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, 18450, Republic of Korea
- Soo-Hwan Byun
- Department of Oral and Maxillofacial Surgery, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
- Graduate School of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Institute of Clinical Dentistry, Hallym University, Chuncheon, 24252, Republic of Korea
- Dental AI-Robotics Center, Hallym University Sacred Heart Hospital, Anyang, 14066, Republic of Korea
4
Melerowitz L, Sreenivasa S, Nachbar M, Stsefanenka A, Beck M, Senger C, Predescu N, Ullah Akram S, Budach V, Zips D, Heiland M, Nahles S, Stromberger C. Design and evaluation of a deep learning-based automatic segmentation of maxillary and mandibular substructures using a 3D U-Net. Clin Transl Radiat Oncol 2024; 47:100780. [PMID: 38712013] [PMCID: PMC11070663] [DOI: 10.1016/j.ctro.2024.100780]
Abstract
Background Current segmentation approaches for radiation treatment planning in head and neck cancer patients (HNCP) typically consider the entire mandible as an organ at risk, whereas segmentation of the maxilla remains uncommon. Accurate risk assessment for osteoradionecrosis (ORN) or implant-based dental rehabilitation after radiation therapy may require a nuanced analysis of dose distribution in specific mandibular and maxillary segments. Manual segmentation is time-consuming and inconsistent, and there is no definition of jaw subsections. Materials and methods The mandible and maxilla were divided into 12 substructures. The model was developed from 82 computed tomography (CT) scans of HNCP and adopts an encoder-decoder three-dimensional (3D) U-Net structure. The efficiency and accuracy of the automated method were compared against manual segmentation on an additional set of 20 independent CT scans. The evaluation metrics used were the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and surface DSC (sDSC). Results Automated segmentations were performed in a median of 86 s, compared to manual segmentations, which took a median of 53.5 min. The median DSC per substructure ranged from 0.81 to 0.91, and the median HD95 ranged from 1.61 to 4.22. The number of artifacts did not affect these scores. The maxillary substructures showed lower metrics than the mandibular substructures. Conclusions The jaw substructure segmentation demonstrated high accuracy, time efficiency, and promising results in CT scans with and without metal artifacts. This novel model could provide further investigation into dose relationships with ORN or dental implant failure in normal tissue complication prediction models.
Affiliation(s)
- L. Melerowitz
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- S. Sreenivasa
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- M. Nachbar
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- A. Stsefanenka
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- M. Beck
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- C. Senger
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- N. Predescu
- MVision AI, Paciuksenkatu 29, 00270 Helsinki, Finland
- V. Budach
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- D. Zips
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
- M. Heiland
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353, Berlin, Germany
- S. Nahles
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353, Berlin, Germany
- C. Stromberger
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, Augustenburger Platz 1, 13353, Berlin, Germany
5
Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. [PMID: 37091160] [PMCID: PMC10115982] [DOI: 10.3389/fonc.2023.1137803]
Abstract
Introduction Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures) and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL) based autosegmentation model to reduce contouring time without compromising contour accuracy is assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data. Methods Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, with an additional 20 CTs held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised contours from the MDAs and the DL model, were compared to the GS for that patient. Results Mean time for initial MDA contouring was 2.3 hours (range 1.6-3.8 hours) and RO revision took 1.1 hours (range 0.4-4.4 hours), compared to 0.7 hours (range 0.1-2.0 hours) for the RO revisions to DL contours. Total time was reduced by 76% (95% confidence interval: 65%-88%) and RO-revision time was reduced by 35% (95% CI: -39%-91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours compared to the RO-revised MDA contours, including volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance.
32 OARs (76%) had a mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs. Conclusion DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
Affiliation(s)
- J. John Lucido
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Todd A. DeWees
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Todd R. Leavitt
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Aman Anand
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
- Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
- Justine R. Buroker
- Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
- Robert L. Foote
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Olivia R. Foss
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Angela M. Gleason
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Teresa L. Hodge
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Ashley E. Hunzeker
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Nadia N. Laack
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Tamra K. Lenz
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Douglas J. Moseley
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Lisa M. Undahl
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Yojan Patel
- Google Health, Mountain View, CA, United States
- Erik J. Tryggestad
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Samir H. Patel
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
6
Pankert T, Lee H, Peters F, Hölzle F, Modabber A, Raith S. Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network. Int J Comput Assist Radiol Surg 2023. [PMID: 36637748] [PMCID: PMC10363055] [DOI: 10.1007/s11548-022-02830-w]
Abstract
PURPOSE For computer-aided planning of facial bony surgery, the creation of high-resolution 3D models of the bones by segmenting volume imaging data is a labor-intensive step, especially as metal dental inlays or implants cause severe artifacts that reduce the quality of the computed tomographic imaging data. This study provides a method to segment accurate, artifact-free 3D surface models of mandibles from CT data using convolutional neural networks. METHODS The presented approach cascades two independently trained 3D U-Nets to perform accurate segmentations of the mandible bone from full-resolution CT images. The networks are trained in different settings using three different loss functions and a data augmentation pipeline. Training and evaluation datasets consist of manually segmented CT images from 307 dentate and edentulous individuals, partly with heavy imaging artifacts. The accuracy of the models is measured using overlap-based, surface-based and anatomical-curvature-based metrics. RESULTS Our approach produces high-resolution segmentations of the mandibles, coping with severe imaging artifacts in the CT imaging data. The use of the two-stepped approach yields highly significant improvements to the prediction accuracies. The best models achieve a Dice coefficient of 94.824% and an average surface distance of 0.31 mm on our test dataset. CONCLUSION The use of two cascaded U-Nets allows high-resolution predictions for small regions of interest in the imaging data. The proposed method is fast and allows user-independent image segmentation, producing objective and repeatable results that can be used in automated surgical planning procedures.
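The two-step cascade described above segments a downsampled volume first, then re-segments at full resolution inside the detected region of interest; the hand-off between the two networks amounts to a bounding-box crop around the coarse mask. A minimal sketch of that crop step (NumPy; the function name and margin are illustrative assumptions, and both networks are omitted):

```python
import numpy as np

def crop_to_roi(volume: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Crop the full-resolution volume to the bounding box of the coarse
    (stage-1) mask, padded by a safety margin in voxels, as stage-2 input."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    roi = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[roi], roi  # roi is kept so the stage-2 result can be pasted back

# Toy volume with a small "mandible" blob near the centre.
vol = np.random.rand(64, 64, 64)
mask = np.zeros(vol.shape, dtype=bool)
mask[28:36, 30:40, 25:35] = True
crop, roi = crop_to_roi(vol, mask, margin=4)
print(crop.shape)  # (16, 18, 18)
```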
Affiliation(s)
- Tobias Pankert
- Department of Oral and Maxillofacial Surgery, RWTH Aachen University Hospital, Aachen, Germany
- Hyun Lee
- Department of Oral and Maxillofacial Surgery, RWTH Aachen University Hospital, Aachen, Germany
- Florian Peters
- Department of Oral and Maxillofacial Surgery, RWTH Aachen University Hospital, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, RWTH Aachen University Hospital, Aachen, Germany
- Ali Modabber
- Department of Oral and Maxillofacial Surgery, RWTH Aachen University Hospital, Aachen, Germany
- Stefan Raith
- Department of Oral and Maxillofacial Surgery, RWTH Aachen University Hospital, Aachen, Germany
7
Morris MX, Rajesh A, Asaad M, Hassan A, Saadoun R, Butler CE. Deep Learning Applications in Surgery: Current Uses and Future Directions. Am Surg 2023; 89:36-42. [PMID: 35567312] [DOI: 10.1177/00031348221101490]
Abstract
Deep learning (DL) is a subset of machine learning that is rapidly gaining traction in surgical fields. Its tremendous capacity for powerful data-driven problem-solving has generated computational breakthroughs in many realms, with the fields of medicine and surgery becoming increasingly prominent avenues. Through its multi-layer architecture of interconnected neural networks, DL enables feature extraction and pattern recognition of highly complex and large-volume data. Across various surgical specialties, DL is being applied to optimize both preoperative planning and intraoperative performance in new and innovative ways. Surgeons are now able to integrate deep learning tools into their practice to improve patient safety and outcomes. Through this review, we explore the applications of deep learning in surgery and related subspecialties with an aim to shed light on the practical utilization of this technology in the present and near future.
Affiliation(s)
- Miranda X Morris
- Duke University School of Medicine, Durham, NC, USA
- Duke Pratt School of Engineering, Durham, NC, USA
- Aashish Rajesh
- Department of Surgery, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Malke Asaad
- Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Abbas Hassan
- Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rakan Saadoun
- Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Charles E Butler
- Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
8
Steybe D, Poxleitner P, Metzger MC, Brandenburg LS, Schmelzeisen R, Bamberg F, Tran PH, Kellner E, Reisert M, Russe MF. Automated segmentation of head CT scans for computer-assisted craniomaxillofacial surgery applying a hierarchical patch-based stack of convolutional neural networks. Int J Comput Assist Radiol Surg 2022; 17:2093-2101. [PMID: 35665881] [PMCID: PMC9515026] [DOI: 10.1007/s11548-022-02673-5]
Abstract
PURPOSE Computer-assisted techniques play an important role in craniomaxillofacial surgery. As segmentation of three-dimensional medical imaging represents a cornerstone for these procedures, the present study was aiming at investigating a deep learning approach for automated segmentation of head CT scans. METHODS The deep learning approach of this study was based on the patchwork toolbox, using a multiscale stack of 3D convolutional neural networks. The images were split into nested patches using a fixed 3D matrix size with decreasing physical size in a pyramid format of four scale depths. Manual segmentation of 18 craniomaxillofacial structures was performed in 20 CT scans, of which 15 were used for the training of the deep learning network and five were used for validation of the results of automated segmentation. Segmentation accuracy was evaluated by Dice similarity coefficient (DSC), surface DSC, 95% Hausdorff distance (95HD) and average symmetric surface distance (ASSD). RESULTS Mean DSC was 0.81 ± 0.13 (range: 0.61 [mental foramen] - 0.98 [mandible]). Mean surface DSC was 0.94 ± 0.06 (range: 0.87 [mental foramen] - 0.99 [mandible]), with values > 0.9 for all structures but the mental foramen. Mean 95HD was 1.93 ± 2.05 mm (range: 1.00 [mandible] - 4.12 mm [maxillary sinus]) and for ASSD, a mean of 0.42 ± 0.44 mm (range: 0.09 [mandible] - 1.19 mm [mental foramen]) was found, with values < 1 mm for all structures but the mental foramen. CONCLUSION In this study, high accuracy of automated segmentation of a variety of craniomaxillofacial structures could be demonstrated, suggesting this approach to be suitable for incorporation into a computer-assisted craniomaxillofacial surgery workflow. The small amount of training data required and the flexibility of an open source-based network architecture enable a broad variety of clinical and research applications.
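The 95% Hausdorff distance (95HD) used above replaces the maximum surface-to-surface distance with its 95th percentile, so a single stray voxel does not dominate the score. A minimal sketch over small surface point sets (plain NumPy brute force, toy coordinates; real pipelines compute this over voxel surfaces with distance transforms):

```python
import numpy as np

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric 95% Hausdorff distance between (N,3) and (M,3) point sets."""
    # Brute-force pairwise Euclidean distances (fine for small point sets).
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # each point of A to its nearest neighbour in B
    b_to_a = d.min(axis=0)  # each point of B to its nearest neighbour in A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

# Two toy "surfaces": identical except for one outlier point 10 mm away.
a = np.array([[float(i), 0.0, 0.0] for i in range(20)])
b = np.vstack([a[:-1], [[19.0, 10.0, 0.0]]])
print(hd95(a, b))  # 0.5, while the plain (100%) Hausdorff distance would be 10.0
```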
Affiliation(s)
- David Steybe
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Philipp Poxleitner
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Berta-Ottenstein-Programme for Clinician Scientists, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Leonard Simon Brandenburg
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Rainer Schmelzeisen
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marco Reisert
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
9
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac840f]
Abstract
Head and neck surgery is a fine surgical procedure with a complex anatomical space, difficult operation and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, and MICCAI. Among them, 65 references are on automatic segmentation, 15 on automatic landmark detection, and eight on automatic registration. In the review, first, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to clinical needs and organized into segmentation, landmark detection and registration of head and neck medical images. In segmentation, the focus is mainly on the automatic segmentation of organs at risk, head and neck tumors, skull structures and teeth, including analysis of their advantages, differences and shortcomings. In landmark detection, the focus is mainly on landmark detection in cephalometric and craniomaxillofacial images, with analysis of their advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, their shortcomings and future development directions are systematically discussed.
The study aims to serve as a reference and guidance for researchers, engineers or doctors engaged in medical image analysis of head and neck surgery.
Collapse
|
10
|
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. [PMID: 35831451 PMCID: PMC9279304 DOI: 10.1038/s41598-022-15920-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 07/01/2022] [Indexed: 11/21/2022] Open
Abstract
This study aims to generate and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat, DC), providing a measurement method. A second aim is to validate the newly developed artificial intelligence system against commercially available software for 3D CBCT evaluation. A convolutional neural network-based machine learning algorithm was used for segmentation of the pharyngeal airways in OSA and non-OSA patients. Radiologists used semi-automatic software to manually determine the airway, and their measurements were compared with those of the AI. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest point of the airway (mm), the cross-sectional area of the airway (mm2), and the volume of the airway (cc) were also compared between OSA and non-OSA patients. There was no statistically significant difference between the manual technique and Diagnocat measurements in any group (p > 0.05). Inter-class correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume between the manual, automatic, and DC measurements in non-OSA and OSA patients, the output images were examined to understand why the mean total airway value was higher in the DC measurement. The DC algorithm also measures the epiglottis volume and the posterior nasal aperture volume because of the low soft-tissue contrast in CBCT images, which leads to higher airway volume measurements.
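The agreement statistics reported above can be illustrated with a minimal sketch. The abstract does not state which ICC formulation was used; the two-way mixed, consistency form ICC(3,1) below is one common choice, and the ratings array is a made-up example, not the study data.

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed, single-rater consistency.

    ratings has shape (n subjects, k raters).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)
    # Two-way ANOVA decomposition of the total sum of squares.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_rater = n * ((rater_means - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Hypothetical ratings: 3 airway measurements by 2 readers.
ratings = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(icc_3_1(ratings))  # → 0.8
```

Raters in perfect agreement give an ICC of 1.0; the toy readers above are consistently ordered but disagree in scale, so the consistency ICC drops to 0.8.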
Collapse
Affiliation(s)
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland.
| | | | | | | | - Aida Kurbanova
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
| | - Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus; Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
| | | | | | - Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
| | - Melis Mısırlı
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
| | - Finn Rasmussen
- Internal Medicine Department, Lung Section, SVS Esbjerg, Esbjerg, Denmark; Life Lung Health Center, Nicosia, Cyprus
| | | | | |
Collapse
|
11
|
|
12
|
Moghaddasi H, Zade AAT, Aziz MJ, Parhiz A, Farnia P, Ahmadian A, Alirezaie J. A Hybrid Capsule Network for Automatic 3D Mandible Segmentation applied in Virtual Surgical Planning. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3768-3771. [PMID: 36085869 DOI: 10.1109/embc48229.2022.9871107] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Automatic mandible segmentation of CT images is an essential step toward accurate preoperative prediction of an intended target in three-dimensional (3D) virtual surgical planning. Segmentation of the mandible is a challenging task due to the complexity of the mandible structure, imaging artifacts, and metal implants or dental filling materials. In recent years, convolutional neural networks (CNNs) have enabled significant improvements in mandible segmentation. However, the loss of spatial information at pooling layers, together with the cost of collecting and labeling the large volumes of data needed to train CNNs, remains a significant issue in medical practice. We optimized the data-efficient 3D-UCaps architecture, which combines the advantages of capsule networks and CNNs, for accurate mandible segmentation on volumetric CT images. A novel hybrid loss function based on a weighted combination of the focal and margin loss functions is also proposed to handle the problem of voxel class imbalance. To evaluate the performance of the proposed method, a similar experiment was conducted with the 3D-UNet. All experiments were performed on the public domain database for computational anatomy (PDDCA). The proposed method and 3D-UNet achieved average dice coefficients of 90% and 88% on the PDDCA, respectively. The results indicate that the proposed method leads to accurate mandible segmentation and outperforms the popular 3D-UNet model. The proposed approach is also efficient, requiring more than 50% fewer parameters than the 3D-UNet.
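The Dice coefficient used to compare the two models above can be illustrated with a short, self-contained sketch over toy voxel masks (not the PDDCA data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volumetric Dice: 2*|A ∩ B| / (|A| + |B|) over boolean voxel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy masks: each has 5 foreground voxels, 4 of which overlap,
# so Dice = 2*4 / (5+5) = 0.8.
truth = np.zeros((1, 1, 8), dtype=bool)
truth[0, 0, 0:5] = True
pred = np.zeros((1, 1, 8), dtype=bool)
pred[0, 0, 1:6] = True
print(dice_coefficient(pred, truth))  # → 0.8
```

A Dice of 90% vs. 88% on the same test set therefore means the capsule-based model's predicted mandible masks overlap the ground truth slightly more, relative to the combined mask sizes.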
Collapse
|
13
|
Dot G, Schouman T, Dubois G, Rouch P, Gajny L. Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework. Eur Radiol 2022; 32:3639-3648. [PMID: 35037088 DOI: 10.1007/s00330-021-08455-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 09/27/2021] [Accepted: 11/01/2021] [Indexed: 01/06/2023]
Abstract
OBJECTIVES To evaluate the performance of the nnU-Net open-source deep learning framework for automatic multi-task segmentation of craniomaxillofacial (CMF) structures in CT scans obtained for computer-assisted orthognathic surgery. METHODS Four hundred and fifty-three consecutive patients having undergone high-resolution CT scans before orthognathic surgery were randomly distributed among a training/validation cohort (n = 300) and a testing cohort (n = 153). The ground truth segmentations were generated by 2 operators following an industry-certified procedure for use in computer-assisted surgical planning and personalized implant manufacturing. Model performance was assessed by comparing model predictions with ground truth segmentations. Examination of 45 CT scans by an industry expert provided additional evaluation. The model's generalizability was tested on a publicly available dataset of 10 CT scans with ground truth segmentation of the mandible. RESULTS In the test cohort, mean volumetric Dice similarity coefficient (vDSC) and surface Dice similarity coefficient at 1 mm (sDSC) were 0.96 and 0.97 for the upper skull, 0.94 and 0.98 for the mandible, 0.95 and 0.99 for the upper teeth, 0.94 and 0.99 for the lower teeth, and 0.82 and 0.98 for the mandibular canal. Industry expert segmentation approval rates were 93% for the mandible, 89% for the mandibular canal, 82% for the upper skull, 69% for the upper teeth, and 58% for the lower teeth. CONCLUSION While additional efforts are required for the segmentation of dental apices, our results demonstrated the model's reliability in terms of fully automatic segmentation of preoperative orthognathic CT scans. KEY POINTS
• The nnU-Net deep learning framework can be trained out-of-the-box to provide robust fully automatic multi-task segmentation of CT scans performed for computer-assisted orthognathic surgery planning.
• The clinical viability of the trained nnU-Net model is shown on a challenging test dataset of 153 CT scans randomly selected from clinical practice, showing metallic artifacts and diverse anatomical deformities.
• Commonly used biomedical segmentation evaluation metrics (volumetric and surface Dice similarity coefficient) do not always match industry expert evaluation in the case of more demanding clinical applications.
Collapse
Affiliation(s)
- Gauthier Dot
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Universite de Paris, AP-HP, Hopital Pitie-Salpetriere, Service d'Odontologie, Paris, France.
| | - Thomas Schouman
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
| | - Guillaume Dubois
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Materialise, Malakoff, France
| | - Philippe Rouch
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; EPF-Graduate School of Engineering, Sceaux, France
| | - Laurent Gajny
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital 75013, Paris, France
| |
Collapse
|