1
Hamad KQA, Said KN, Engelschalk M, Matoug-Elwerfelli M, Gupta N, Eric J, Ali SA, Ali K, Daas H, Alhaija ESA. Taxonomic discordance of immersive realities in dentistry: A systematic scoping review. J Dent 2024:105058. PMID: 38729286; DOI: 10.1016/j.jdent.2024.105058.
Abstract
OBJECTIVES This review aimed to map taxonomy frameworks, descriptions, and applications of immersive technologies in the dental literature. DATA The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines were followed, and the protocol was registered on the Open Science Framework platform (https://doi.org/10.17605/OSF.IO/H6N8M). SOURCES A systematic search was conducted in the MEDLINE (via PubMed), Scopus, and Cochrane Library databases, complemented by a manual search. STUDY SELECTION A total of 84 articles were included, 81% of them published between 2019 and 2023. Most studies were experimental (62%), comprising education (25%), protocol feasibility (20%), in vitro (11%), and cadaver (6%) studies. Other study types included clinical reports/technique articles (24%), clinical studies (9%), technical notes/tips to the reader (4%), and randomized controlled trials (1%). Three-quarters of the included studies were published in the oral and maxillofacial surgery (38%), dental education (26%), and implant (12%) disciplines. Methods of display included head-mounted display (HMD) (55%), see-through screen (32%), 2D screen display (11%), and projector display (2%). Descriptions of immersive realities were fragmented and inconsistent, lacking a clear taxonomy framework for the umbrella term and the subset terms, including virtual reality (VR), augmented reality (AR), mixed reality (MR), augmented virtuality (AV), extended reality, and X reality. CONCLUSIONS Immersive reality applications in dentistry are gaining popularity, with a notable surge in the number of publications in the last 5 years. Ambiguities are apparent in the descriptions of immersive realities. A taxonomy framework based on method of display (full or partial) and reality class (VR, AR, or MR) is proposed. CLINICAL SIGNIFICANCE Understanding different reality classes can be perplexing due to their blurred boundaries and conceptual overlap. Immersive technologies offer novel educational and clinical applications in a fast-developing domain. Given the current fragmented and inconsistent terminology, a comprehensive taxonomy framework is necessary.
Affiliation(s)
- Khaled Q Al Hamad
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar
- Khalid N Said
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar; Hamad Medical Corporation
- Marcus Engelschalk
- Department of Oral and Maxillofacial Surgery, University Medical Center Hamburg-Eppendorf, Germany
- Nidhi Gupta
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar
- Jelena Eric
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar
- Shaymaa A Ali
- Hamad Medical Corporation; College of Dental Medicine, QU Health, Qatar University, Doha, Qatar
- Kamran Ali
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar
- Hanin Daas
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar
2
Li F, Gao Q, Wang N, Greene N, Song T, Dianat O, Azimi E. Mixed reality guided root canal therapy. Healthc Technol Lett 2024;11:167-178. PMID: 38638496; PMCID: PMC11022218; DOI: 10.1049/htl2.12077.
Abstract
Root canal therapy (RCT) is a widely performed procedure in dentistry, with over 25 million individuals undergoing it annually. The procedure addresses inflammation or infection within the root canal system of affected teeth. However, accurately aligning CT scan information with the patient's tooth has posed challenges, leading to errors in tool positioning and potential negative outcomes. To overcome these challenges, a mixed reality application was developed using an optical see-through head-mounted display (OST-HMD). The application incorporates visual cues, an augmented mirror, and dynamically updated multi-view CT slices to address depth perception issues and achieve accurate tooth localization, comprehensive canal exploration, and prevention of perforation during RCT. A preliminary experimental assessment showed significant improvements in procedural accuracy: with the system, positional accuracy improved from 1.4 to 0.4 mm (more than a 70% gain) using an optical tracker (NDI) and from 2.8 to 2.4 mm using the HMD, thereby achieving submillimeter accuracy with the NDI. Six participants were enrolled in the user study, which found an average displacement on the crown plane of 1.27 ± 0.83 cm, an average depth error of 0.90 ± 0.72 cm, and an average angular deviation of 1.83 ± 0.83°. Our error analysis further highlights the impact of HMD spatial localization and head motion on the registration and calibration process. Through seamless integration of CT image information with the patient's tooth, our mixed reality application assists dentists in achieving precise tool placement. This advancement has the potential to elevate the quality of root canal procedures, ensuring better accuracy and enhancing overall treatment outcomes.
Affiliation(s)
- Fangjie Li
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Qingying Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Nengyu Wang
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Nicholas Greene
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Tianyu Song
- Chair for Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Omid Dianat
- School of Dentistry, University of Maryland, Baltimore, Maryland, USA
- Ehsan Azimi
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
3
Peng MJ, Chen HY, Chen P, Tan Z, Hu Y, To MKT, He E. Virtual reality-based surgical planning simulator for tumorous resection in FreeForm Modeling: an illustrative case of clinical teaching. Quant Imaging Med Surg 2024;14:2060-2068. PMID: 38415160; PMCID: PMC10895132; DOI: 10.21037/qims-23-1151.
Abstract
The importance of virtual reality (VR) has been emphasized in many medical studies, yet it has been relatively under-applied to surgical operations. This study characterized how VR has been applied in clinical education and evaluated its tutorial utility by designing a surgical model of tumor resection as a simulator for preoperative planning and medical tutoring. A 36-year-old male patient with a femoral tumor, admitted to the Affiliated Jiangmen Traditional Chinese Medicine Hospital, was randomly selected and scanned by computed tomography (CT). The data in Digital Imaging and Communications in Medicine (*.DICOM) format were imported into Mimics to reconstruct a femoral model, which was exported in *.stl format to the computer-aided design (CAD) software SenSable FreeForm Modeling (SFM). A bony tumor was simulated by adding clay to the femur, the tumor resection procedure was virtually performed with a toolkit called Phantom, and the resulting bony defect was filled with virtual cement. A 3D workspace was created to enable individual multimodality manipulation, and a virtual operation of tumor excision was successfully carried out and could be repeated indefinitely. Precise delineation of surgical margins was achieved by both expert and inexperienced hands in 43 of 50 participants. The simulator provided a high-definition imitation: participants trained on VR models achieved a success rate of 86%, higher than the 74% achieved by those trained with conventional methods. The tumor resection, including establishment of the surgical strategy, could be handled repeatedly in SFM, and participants felt that the force feedback was beneficial to surgical teaching programs, enabling engagement with learning experiences through immersive events that mimic real-world circumstances to reinforce didactic and clinical concepts.
Affiliation(s)
- Matthew Jianqiao Peng
- Department of Spinal Surgery, Affiliated Jiangmen Traditional Chinese Medicine Hospital of Jinan University, Jiangmen, China
- Hai-Yan Chen
- Department of Orthopedics, Huidong People’s Hospital, Huizhou, China
- Peikai Chen
- Department of Orthopedics and Traumatology, The University of Hong Kong-Shenzhen Hospital, Hong Kong, China
- Zhijia Tan
- Department of Orthopedics and Traumatology, The University of Hong Kong-Shenzhen Hospital, Hong Kong, China
- Yong Hu
- Department of Orthopedics and Traumatology, The University of Hong Kong-Shenzhen Hospital, Hong Kong, China
- Michael Kai-Tsun To
- Department of Orthopedics and Traumatology, The University of Hong Kong-Shenzhen Hospital, Hong Kong, China
- Erxing He
- Department of Spinal Surgery, Affiliated 4th Hospital of Guangzhou Medical University, Guangzhou, China
4
Gernandt S, Tomasella O, Scolozzi P, Fenelon M. Contribution of 3D printing for the surgical management of jaws cysts and benign tumors: A systematic review of the literature. J Stomatol Oral Maxillofac Surg 2023;124:101433. PMID: 36914002; DOI: 10.1016/j.jormas.2023.101433.
Abstract
BACKGROUND Three-dimensional (3D) printing is now a widely recognized surgical tool in oral and maxillofacial surgery. However, little is known about its benefits for the surgical management of benign maxillary and mandibular tumors and cysts. PURPOSE The objective of this systematic review was to assess the contribution of 3D printing to the management of benign jaw lesions. METHODS A systematic review, registered in PROSPERO, was conducted using the PubMed and Scopus databases up to December 2022, following PRISMA guidelines. Studies reporting 3D printing applications for the surgical management of benign jaw lesions were considered. RESULTS This review included thirteen studies involving 74 patients. The principal use of 3D printing was to produce anatomical models, intraoperative surgical guides, or both, allowing successful removal of maxillary and mandibular lesions. The greatest reported benefits of printed models were visualization of the lesion and its anatomical relationships to anticipate intraoperative risks. Surgical guides were designed as drill-locating guides or osteotomy cutting guides and contributed to decreased operating time and improved surgical accuracy. CONCLUSION Using 3D printing technologies to manage benign jaw lesions results in less invasive procedures by facilitating precise osteotomies and reducing operating times and complications. More studies with higher levels of evidence are needed to confirm these results.
Affiliation(s)
- Steven Gernandt
- Division of Oral and Maxillofacial Surgery, Department of Surgery, Geneva University Hospitals, Geneva, Switzerland
- Olivia Tomasella
- UFR des Sciences Odontologiques, Univ. Bordeaux, 33000 Bordeaux, France
- Paolo Scolozzi
- Division of Oral and Maxillofacial Surgery, Department of Surgery, Geneva University Hospitals, Geneva, Switzerland
- Mathilde Fenelon
- Division of Oral and Maxillofacial Surgery, Department of Surgery, Geneva University Hospitals, Geneva, Switzerland; UFR des Sciences Odontologiques, Univ. Bordeaux, 33000 Bordeaux, France; Service de chirurgie orale, CHU de Bordeaux, France
5
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023;85:102757. PMID: 36706637; DOI: 10.1016/j.media.2023.102757.
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability, and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into routine medical practice.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
6
Farronato M, Torres A, Pedano MS, Jacobs R. Novel method for augmented reality guided endodontics: an in vitro study. J Dent 2023:104476. PMID: 36905949; DOI: 10.1016/j.jdent.2023.104476.
Abstract
OBJECTIVE The aim of this study was to evaluate the accuracy of a novel augmented reality (AR) method for guided endodontic access cavity preparation in 3D-printed jaws. METHODS Two operators with different levels of experience in endodontics performed pre-planned, virtually guided access cavities using a novel markerless AR system, developed by a team among the authors, on three sets of jaw models produced with a 3D printer (Objet Connex 350, Stratasys) and mounted on a phantom. After treatment, a post-operative high-resolution CBCT scan (NewTom VGI Evo, Cefla) was taken of each model and registered to the pre-operative model. All access cavities were then digitally reconstructed by filling the cavity area using 3D medical software (3-Matic 15.0, Materialise). For anterior teeth and premolars, the deviations at the coronal and apical entry points and the angular deviation of the access cavity were compared to the virtual plan; for molars, the deviation at the coronal entry point was compared to the virtual plan. Additionally, the surface area of each access cavity at the entry point was measured and compared to the virtual plan. Descriptive statistics were calculated for each parameter, with 95% confidence intervals. RESULTS A total of 90 access cavities were drilled to a depth of 4 mm inside the tooth. For anterior teeth and premolars, the mean deviation was 0.51 mm at the entry point and 0.77 mm at the apical point, with a mean angular deviation of 8.5° and a mean surface overlap of 57%. For molars, the mean deviation at the entry point was 0.63 mm, with a mean surface overlap of 82%. CONCLUSION The use of AR as a digital guide for endodontic access cavity drilling on different tooth types showed promising results and may have potential for clinical use. However, further development and research are needed before in vivo validation to overcome the limitations of the study.
7
Sugahara K, Koyachi M, Tachizawa K, Iwasaki A, Matsunaga S, Odaka K, Sugimoto M, Abe S, Nishii Y, Katakura A. Using mixed reality and CAD/CAM technology for treatment of maxillary non-union after Le Fort I osteotomy: a case description. Quant Imaging Med Surg 2023;13:1190-1199. PMID: 36819286; PMCID: PMC9929389; DOI: 10.21037/qims-22-414.
Affiliation(s)
- Keisuke Sugahara
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, Tokyo, Japan; Oral Health Science Center, Tokyo Dental College, Tokyo, Japan
- Masahide Koyachi
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, Tokyo, Japan
- Kotaro Tachizawa
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, Tokyo, Japan
- Akira Iwasaki
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, Tokyo, Japan
- Satoru Matsunaga
- Oral Health Science Center, Tokyo Dental College, Tokyo, Japan; Department of Anatomy, Tokyo Dental College, Tokyo, Japan
- Kento Odaka
- Department of Oral and Maxillofacial Radiology, Tokyo Dental College, Tokyo, Japan
- Maki Sugimoto
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, Tokyo, Japan; Innovation Lab, Teikyo University Okinaga Research Institute, Tokyo, Japan
- Shinichi Abe
- Department of Anatomy, Tokyo Dental College, Tokyo, Japan
- Yasushi Nishii
- Department of Orthodontics, Tokyo Dental College, Tokyo, Japan
- Akira Katakura
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, Tokyo, Japan; Oral Health Science Center, Tokyo Dental College, Tokyo, Japan
8
Shahbaz M, Miao H, Farhaj Z, Gong X, Weikai S, Dong W, Jun N, Shuwei L, Yu D. Mixed reality navigation training system for liver surgery based on a high-definition human cross-sectional anatomy data set. Cancer Med 2023;12:7992-8004. PMID: 36607128; PMCID: PMC10134360; DOI: 10.1002/cam4.5583.
Abstract
OBJECTIVES This study aimed to use a three-dimensional (3D) mixed-reality model of the liver, encompassing its complex intrahepatic systems, to study the anatomical structures in depth and to support the training, diagnosis, and treatment of liver diseases. METHODS Vascular-perfused human specimens were used for thin-layer frozen milling to obtain liver cross-sections. A 104-megapixel high-definition cross-sectional data set was established and registered to achieve structure identification and manual segmentation. The digital model was reconstructed, and the data were used to print a 3D hepatic model. The model was combined with HoloLens mixed reality technology to convey the complex relationships of the intrahepatic systems. We simulated 3D patient-specific anatomy for identification and preoperative planning, conducted a questionnaire survey, and evaluated the results. RESULTS The 3D digital model and the 1:1 transparent, colored liver model faithfully reflected the intrahepatic vessels and their complex relationships. The reconstructed model imported into the HoloLens could be accurately matched with the 3D model. Only 7.7% of participants could identify accessory hepatic veins, while 92% reported a better understanding of the depth and spatial relationships of intrahepatic structures. Respectively, 100%, 84.6%, 69%, and 84% believed the 3D models were useful for planning, safer surgical paths, reducing intraoperative complications, and training young surgeons. CONCLUSIONS A detailed 3D model can be reconstructed from a high-quality cross-sectional anatomical data set. Combined with 3D printing and HoloLens technology, it yields a novel mixed-reality navigation and training system for liver surgery. Based on the questionnaire and evaluation, mixed reality training is a worthy alternative for providing 3D information to clinicians, with possible application in surgery. Surgeons with extensive operative experience perceived in the questionnaire that this technology might be useful in liver surgery, helping with precise preoperative planning, accurate intraoperative identification, and reduction of hepatic injury.
Affiliation(s)
- Muhammad Shahbaz
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Research Center for Sectional and Imaging Anatomy, Digital Human Institute, School of Basic Medical Science, Shandong University, Jinan, Shandong, China
- Department of General Surgery, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Huachun Miao
- Department of Anatomy, Wannan Medical College, Wuhu, Anhui, China
- Zeeshan Farhaj
- Department of Cardiovascular Surgery, Shandong Qianfoshan Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Xin Gong
- Department of Anatomy, Wannan Medical College, Wuhu, Anhui, China
- Sun Weikai
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Wenqing Dong
- Department of Anatomy, Wannan Medical College, Wuhu, Anhui, China
- Niu Jun
- Department of General Surgery, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Liu Shuwei
- Research Center for Sectional and Imaging Anatomy, Digital Human Institute, School of Basic Medical Science, Shandong University, Jinan, Shandong, China
- Dexin Yu
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, Shandong, China
9
Koyama Y, Sugahara K, Koyachi M, Tachizawa K, Iwasaki A, Wakita I, Nishiyama A, Matsunaga S, Katakura A. Mixed reality for extraction of maxillary mesiodens. Maxillofac Plast Reconstr Surg 2023;45:1. PMID: 36602618; PMCID: PMC9816364; DOI: 10.1186/s40902-022-00370-6.
Abstract
BACKGROUND Mesiodentes are the most common supernumerary teeth. Their cause is not fully understood, although genetic factors and proliferation of the dental lamina have been implicated. Mesiodentes can cause delayed or ectopic eruption of permanent incisors, which can further alter occlusion and appearance. Careful attention should be paid to the position and direction of mesiodentes because of possible damage to adjacent roots in the permanent dentition period, errant extraction in the deciduous and mixed dentition periods, and damage to the permanent tooth germ. To avoid these complications, we applied mixed reality (MR) technology using the HoloLens® (Microsoft Corp., Redmond, WA). In this study, we report three cases of mesiodens extraction under general anesthesia using MR technology. RESULTS The patients, all boys, ranged in age from 6 to 11 years, and the direction of eruption was inverted in all cases. The extraction approach was palatal in two cases and labial in one. The average operative time was 32 min, and bleeding was minimal in all cases. No intraoperative or postoperative complications occurred. Preoperatively, an image was shared among all the surgeons using a physical model of the actual situation. Three surgeons used the Microsoft HoloLens® during surgery, shared the MR view, and operated while superimposing the application image on the surgical field. CONCLUSIONS The procedure was performed safely; further development of MR surgical support systems is warranted.
Affiliation(s)
- Yu Koyama
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Keisuke Sugahara
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan; Oral Health Science Center, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Masahide Koyachi
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Kotaro Tachizawa
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Akira Iwasaki
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Ichiro Wakita
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Akihiro Nishiyama
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Satoru Matsunaga
- Department of Anatomy, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Akira Katakura
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan; Oral Health Science Center, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
10
Ceccariglia F, Cercenelli L, Badiali G, Marcelli E, Tarsitano A. Application of Augmented Reality to Maxillary Resections: A Three-Dimensional Approach to Maxillofacial Oncologic Surgery. J Pers Med 2022;12:2047. PMID: 36556268; PMCID: PMC9785494; DOI: 10.3390/jpm12122047.
Abstract
In the relevant global context, although virtual reality, augmented reality, and mixed reality have been emerging methodologies for several years, only now have technological and scientific advances made them suitable for revolutionizing clinical care and medical settings through the provision of advanced features and improved healthcare services. Over the past fifteen years, tools and applications using augmented reality (AR) have been designed and tested in the context of various surgical and medical disciplines, including maxillofacial surgery. The purpose of this paper is to show how a marker-less AR guidance system using the Microsoft® HoloLens 2 can be applied in mandibular and maxillary demolition surgery to guide maxillary osteotomies. We describe three mandibular and maxillary oncologic resections performed during 2021 with AR support. In these three patients, we applied a marker-less tracking method based on recognition of the patient's facial profile. The surgeon, using HoloLens 2 smart glasses, could see the virtual surgical plan superimposed on the patient's anatomy. We showed that performing osteotomies under AR guidance is feasible, as demonstrated by comparison with osteotomies performed using CAD-CAM cutting guides. The technology has both advantages and disadvantages; further research is needed to improve the stability and robustness of the marker-less tracking method applied to patient face recognition.
Affiliation(s)
- Francesco Ceccariglia
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Correspondence: ; Tel.: +39-051-2144197
- Laura Cercenelli
- eDimes Lab-Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Emanuela Marcelli
- eDimes Lab-Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Achille Tarsitano
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
11
Palumbo A. Microsoft HoloLens 2 in Medical and Healthcare Context: State of the Art and Future Prospects. Sensors (Basel) 2022; 22:s22207709. [PMID: 36298059 PMCID: PMC9611914 DOI: 10.3390/s22207709] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 09/29/2022] [Accepted: 10/07/2022] [Indexed: 05/08/2023]
Abstract
Although virtual reality, augmented reality, and mixed reality have been emerging methodologies for several years, only now have technological and scientific advances made them suitable to revolutionize clinical care and medical contexts through enhanced functionalities and improved health services. This systematic review provides the state of the art of Microsoft® HoloLens 2 applications in medical and healthcare contexts. Focusing on the potential of this technology for digitally supported clinical care, including but not limited to the context of the COVID-19 pandemic, studies that demonstrated the applicability and feasibility of HoloLens 2 in medical and healthcare scenarios were considered. The review presents a thorough examination of the studies conducted since 2019, focusing on HoloLens 2 medical sub-field applications, the device functionalities provided to users, the software/platform/framework used, and the study validation. The results highlight the potential and limitations of HoloLens 2-based solutions and bring focus to emerging research topics such as telemedicine, remote control, and motor rehabilitation.
Affiliation(s)
- Arrigo Palumbo
- Department of Medical and Surgical Sciences, Magna Græcia University, 88100 Catanzaro, Italy
12
Aoyama R, Anazawa U, Hotta H, Watanabe I, Takahashi Y, Matsumoto S, Ishibashi T. Augmented Reality Device for Preoperative Marking of Spine Surgery Can Improve the Accuracy of Level Identification. Spine Surg Relat Res 2022; 6:303-309. [PMID: 35800633 PMCID: PMC9200419 DOI: 10.22603/ssrr.2021-0168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Accepted: 09/16/2021] [Indexed: 11/23/2022] Open
Abstract
Introduction Wrong-site spine surgery is an incident that can result in severe complications. In current spinal surgery, the correct spinal level is confirmed via preoperative or intraoperative radiographic marking. However, the location of radiographic marking is determined by manual palpation of body-surface landmarks; as a result, severe spine deformity can make it difficult to identify the spinal level by palpation, leading to misidentification. Recently, the use of mixed reality in spine surgery has been gradually increasing. In this study, we demonstrate a head-mounted display (HMD) device that projects a hologram (3D image) of the patient's bone onto the patient's body to improve the accuracy of level identification for spine surgery. Technical Note 3D CT images are created preoperatively, and the bone's STL data (3D data) are generated with a workstation. The STL data are loaded into the augmented reality software Holoeyes, installed on the HMD. Through this device, surgeons can view the hologram (3D image) of the patient's bone overlaid on the actual patient's body. We first estimated the spinous process level by manual palpation alone, without an HMD, and then estimated it again after matching the hologram to the real bone with the HMD. The accuracy of level identification with and without the HMD was examined by radiographic marking to evaluate the misidentification rate. Without an HMD, the misidentification rate was 26.5%; with the HMD, the rate was reduced to 14.3%. Conclusions For preoperative marking, an HMD projecting a bone image onto the patient's body allows the spinal level to be estimated more accurately. Identification of the spinal level using mixed reality is effective in preventing wrong-site spine surgery.
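The level-identification comparison reported in this abstract is a simple proportion calculation. As an illustrative sketch only: the per-case counts below are hypothetical (the abstract reports rates, not raw counts) and are chosen merely because they reproduce percentages of the same magnitude:

```python
def misidentification_rate(wrong: int, total: int) -> float:
    """Fraction of cases in which the estimated spinous process level was wrong."""
    return wrong / total

# Hypothetical counts for illustration only (not the study's raw data).
rate_without_hmd = misidentification_rate(13, 49)  # ~26.5%, palpation alone
rate_with_hmd = misidentification_rate(7, 49)      # ~14.3%, with hologram overlay

print(f"without HMD: {rate_without_hmd:.1%}, with HMD: {rate_with_hmd:.1%}")
```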
Affiliation(s)
- Ryoma Aoyama
- Department of Orthopedics, Tokyo Dental College Ichikawa General Hospital
- Ukei Anazawa
- Department of Orthopedics, Tokyo Dental College Ichikawa General Hospital
- Hiraku Hotta
- Department of Orthopedics, Tokyo Dental College Ichikawa General Hospital
- Itsuo Watanabe
- Department of Orthopedics, Tokyo Dental College Ichikawa General Hospital
- Yuichiro Takahashi
- Department of Orthopedics, Tokyo Dental College Ichikawa General Hospital
- Shogo Matsumoto
- Department of Orthopedics, Tokyo Dental College Ichikawa General Hospital
- Toshiki Ishibashi
- Department of Orthopedics, Tokyo Dental College Ichikawa General Hospital
13
Aoyama R, Anazawa U, Hotta H, Watanabe I, Takahashi Y, Matsumoto S. A Novel Technique of Mixed Reality Systems in the Treatment of Spinal Cord Tumors. Cureus 2022; 14:e23096. [PMID: 35296052 PMCID: PMC8917809 DOI: 10.7759/cureus.23096] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/12/2022] [Indexed: 11/29/2022] Open
Abstract
Several reports have compared techniques for spinal cord tumor removal, but none have clearly described the appropriate site and level of indication for laminectomy or laminoplasty. The approach for tumor removal depends on the type and localization of the tumor and on the surgeon's skill; therefore, a system that can suggest various surgical techniques is useful for spinal cord tumor surgery. The mixed reality system introduced in this paper can suggest various surgical procedures, and using it for spinal cord tumor removal made the surgery less invasive; we therefore introduce the system and demonstrate its usefulness. Stereoscopic data of patients with spinal cord tumors were obtained from preoperative myelogram-CT data. Stereoscopic laminectomy models, including the tumors, were created using Blender, a free three-dimensional (3D) image-editing software. These data were viewed as 3D object images using a commercially available and relatively inexpensive head-mounted display (HMD). The surgical procedure was determined by considering the 3D images, the radiological diagnosis, and the skill of the surgeons; intraoperative confirmation of the laminectomy site could also be performed using the HMD. The 3D visualization of pathological conditions resulted in correct preoperative surgical planning and less invasive surgery in all five cases. Stereoscopic images viewed through HMDs allow a more intuitive understanding of the positional relationship between the tumor and spinal structures, enabling more accurate preoperative planning and proper selection of surgical methods.
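Pipelines like the one in this abstract (and the STL-based workflow in entry 12) exchange bone and tumor models between segmentation tools, Blender, and the HMD as triangle-mesh files, typically STL. As a minimal, hypothetical sketch of the ASCII STL format (a real export contains thousands of facets and is produced by the modeling software, not hand-written):

```python
def write_ascii_stl(path, name, triangles):
    """Write a list of (normal, (v0, v1, v2)) facets as an ASCII STL solid."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for (nx, ny, nz), verts in triangles:
            f.write(f"  facet normal {nx} {ny} {nz}\n    outer loop\n")
            for x, y, z in verts:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single facet in the z = 0 plane with an upward-pointing normal.
write_ascii_stl("demo.stl", "bone_model",
                [((0.0, 0.0, 1.0),
                  ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))])
```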
14
Morimoto T, Kobayashi T, Hirata H, Otani K, Sugimoto M, Tsukamoto M, Yoshihara T, Ueno M, Mawatari M. XR (Extended Reality: Virtual Reality, Augmented Reality, Mixed Reality) Technology in Spine Medicine: Status Quo and Quo Vadis. J Clin Med 2022; 11:470. [PMID: 35054164 PMCID: PMC8779726 DOI: 10.3390/jcm11020470] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Revised: 01/01/2022] [Accepted: 01/11/2022] [Indexed: 02/06/2023] Open
Abstract
In recent years, with the rapid advancement and consumerization of virtual reality, augmented reality, mixed reality, and extended reality (XR) technology, the use of XR technology in spine medicine has become increasingly popular. This rising use has been accelerated by the recent wave of digital transformation (e.g., case-specific three-dimensional medical images and holograms, wearable sensors, video cameras, fifth-generation communications, artificial intelligence, and head-mounted displays), and further accelerated by the COVID-19 pandemic and the increase in minimally invasive spine surgery. The COVID-19 pandemic has had a negative impact on society, but positive impacts can also be expected, including the continued spread and adoption of telemedicine services (e.g., tele-education, tele-surgery, tele-rehabilitation) that promote digital transformation. The purpose of this narrative review is to describe the accelerators of XR (VR, AR, MR) technology in spine medicine and then to provide a comprehensive review of the use of XR technology in spine medicine, including surgery, consultation, education, and rehabilitation, as well as to identify its limitations and future perspectives (status quo and quo vadis).