1. Solis-Oviedo CJ, Pérez Jiménez FJ, Campos JA, Nájera Ríos CI, Bañuelos Saucedo MÁ, Pérez-Escamirosa F. A 3D-printed hybrid portable simulator for skills training in arthroscopic knee surgery. Proc Inst Mech Eng H 2025;239:398-410. [PMID: 40165487] [DOI: 10.1177/09544119251328414]
Abstract
Arthroscopic surgery has become the first option for the treatment of joint injuries. However, training outside the operating room is limited by the lack of portability and the high cost of high-fidelity simulators. The aim of this study is to present the ArthSim hybrid simulator, a low-cost portable device for training the psychomotor skills of orthopaedic surgeons in arthroscopic knee surgery. The ArthSim simulator consists of a physical model of the knee with an integrated motion-tracking system and a virtual reality application that captures and replicates the movements of the knee joint and the two arthroscopic instruments inside the virtual model, in a mixed reality approach to arthroscopy training. The functionality of ArthSim's technology was evaluated in two experiments: static and dynamic. The interaction of the physical knee joint and the arthroscopic instruments within the virtual model was evaluated by eight orthopaedic surgeons, who recreated the common positions of the knee, arthroscope, and instruments during exploration of the internal structures. The results indicated a total surgical workspace of 80 mm³ with a range of motion of 115° for flexion, 23° for abduction, and 33° for rotation in the knee joint. The motion-capture measurements showed linearity and repeatability with low errors. Feedback provided by the orthopaedic surgeons was used to identify points of improvement for the device. The ArthSim simulator provides an effective alternative for arthroscopic training in a hybrid simulation approach, offering natural haptics to enhance the surgical experience of orthopaedic surgeons.
Affiliation(s)
- Carlos Javier Solis-Oviedo
- Instituto de Ciencias Aplicadas y Tecnología (ICAT), Universidad Nacional Autónoma de México (UNAM), Coyoacán, Ciudad de México, México
- Jonathan Acuña Campos
- Servicio de Ortopedia del Deporte y Artroscopia, Instituto Nacional de Rehabilitación, Ciudad de México, México
- César Iván Nájera Ríos
- Servicio de Ortopedia, Corporativo Hospital Satélite, Naucalpan de Juarez, Estado de México, México
- Miguel Ángel Bañuelos Saucedo
- Instituto de Ciencias Aplicadas y Tecnología (ICAT), Universidad Nacional Autónoma de México (UNAM), Coyoacán, Ciudad de México, México
- Fernando Pérez-Escamirosa
- Instituto de Ciencias Aplicadas y Tecnología (ICAT), Universidad Nacional Autónoma de México (UNAM), Coyoacán, Ciudad de México, México
2. Zhang Z, Meng B, Li W, Cao J. The role of navigation technology in anterior cruciate ligament reconstruction bone tunnel positioning. J Robot Surg 2025;19:90. [PMID: 40019692] [DOI: 10.1007/s11701-025-02254-z]
Abstract
In the past decade, navigation technology-assisted bone tunnel positioning for anterior cruciate ligament reconstruction (ACLR) has received great attention. The purpose of this review is to summarize the navigation technologies applied in ACLR, describe their tunnel positioning accuracy, and summarize their advantages and disadvantages, providing a basis for navigation-assisted ACLR. This review discusses the limitations of traditional bone tunnel positioning methods in ACLR and then introduces various navigation techniques, focusing on their positioning accuracy and postoperative outcomes for patients. Additionally, it presents commercial systems utilizing reality-based technologies and examines their impact on the arthroscopic learning curve of less experienced surgeons. Osseous landmarks are currently the most widely used positioning method, but they still have shortcomings. Navigation technologies primarily centre on computer-assisted navigation, which, however, requires additional incisions. Virtual reality and augmented reality are mainly utilized in preoperative planning, with the best reported positioning accuracy for augmented reality being 0.32 mm, while most other accuracies are within 3 mm. Mixed reality offers a novel approach for precise positioning, resulting in more optimal and consistent postoperative tunnel placement. Navigation technology improves the positioning accuracy of the bone tunnels and achieves good short-term results; long-term follow-up to assess the clinical outcomes of navigation techniques will be key in the future.
Affiliation(s)
- Zi Zhang
- Department of Sports Injury and Arthroscopy, Tianjin Hospital, Tianjin University, Tianjin, 300222, People's Republic of China
- Medical School of Tianjin University, Tianjin, China
- Binyang Meng
- Department of Sports Injury and Arthroscopy, Tianjin Hospital, Tianjin University, Tianjin, 300222, People's Republic of China
- Medical School of Tianjin University, Tianjin, China
- Wenhe Li
- Department of Sports Injury and Arthroscopy, Tianjin Hospital, Tianjin University, Tianjin, 300222, People's Republic of China
- Medical School of Tianjin University, Tianjin, China
- Jiangang Cao
- Department of Sports Injury and Arthroscopy, Tianjin Hospital, Tianjin University, Tianjin, 300222, People's Republic of China
3. Madani S, Sayadi A, Turcotte R, Cecere R, Aoude A, Hooshiar A. A universal calibration framework for mixed-reality assisted surgery. Comput Methods Programs Biomed 2025;259:108470. [PMID: 39602987] [DOI: 10.1016/j.cmpb.2024.108470]
Abstract
BACKGROUND: Mixed-reality-assisted surgery has become increasingly prominent, offering real-time 3D visualization of target anatomy such as tumors. These systems facilitate translating preoperative 3D surgical plans to the patient's body intraoperatively and allow interactive modifications based on the patient's real-time condition. However, achieving sub-millimetre accuracy in mixed-reality (MR) visualization and interaction is crucial to mitigate device-related risks and enhance surgical precision. OBJECTIVE: Given the critical role of camera calibration in hologram-to-patient anatomy registration, this study aims to develop a device-agnostic and robust calibration method capable of achieving sub-millimetre accuracy, addressing the prevalent uncertainties associated with MR camera-to-world calibration. METHODS: We utilized the precision of surgical navigation systems (NAV) to address the hand-eye calibration problem, thereby localizing the MR camera within a navigated surgical scene. The proposed calibration method was integrated into a representative surgery system and subjected to rigorous testing across various 2D and 3D camera trajectories that simulate surgeon head movements. RESULTS: The calibration method demonstrated positional errors as low as 0.2 mm in spatial trajectories, with a standard error also of 0.2 mm, underscoring its robustness against camera motion. This performance meets the accuracy and stability requirements essential for surgical applications. CONCLUSION: The proposed fiducial-based hand-eye calibration method effectively incorporates the accuracy and reliability of surgical navigation systems into MR camera systems used in intraoperative applications. This integration facilitates high precision in surgical navigation, proving critical for enhancing surgical outcomes in mixed-reality-assisted procedures.
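The fiducial-based registration underlying such a calibration reduces, at its core, to estimating the rigid transform that best aligns corresponding point sets and reporting the residual fiducial registration error. A minimal sketch of that step using the Kabsch algorithm (the function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def kabsch_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding fiducial coordinates.
    Returns the rotation R, translation t, and the fiducial
    registration error (RMSE, in the same units as the inputs).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, fre

# Recover a known pose from six noiseless fiducials (coordinates in mm).
rng = np.random.default_rng(0)
pts = rng.uniform(-50.0, 50.0, size=(6, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 10.0])
R, t, fre = kabsch_rigid_transform(pts, pts @ R_true.T + t_true)
print(round(fre, 6))  # → 0.0 (sub-millimetre by construction on clean data)
```

In practice the residual error is dominated by fiducial localization noise, which is why the paper's sub-millimetre figures are the quantity of interest.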
Affiliation(s)
- Sepehr Madani
- Surgical Performance Enhancement and Robotics (SuPER) Centre, Department of Surgery, McGill University, 1650 Cedar Avenue, Montreal, QC H3G 1A4, Canada
- Amir Sayadi
- Surgical Performance Enhancement and Robotics (SuPER) Centre, Department of Surgery, McGill University, 1650 Cedar Avenue, Montreal, QC H3G 1A4, Canada
- Robert Turcotte
- Division of Orthopedic Surgery, Department of Surgery, McGill University, 1650 Cedar Avenue, Montreal, QC H3G 1A4, Canada
- Renzo Cecere
- Division of Cardiac Surgery, Department of Surgery, McGill University, 1001 Decarie Blvd., Montreal, QC H4A 3J1, Canada
- Ahmed Aoude
- Division of Orthopedic Surgery, Department of Surgery, McGill University, 1650 Cedar Avenue, Montreal, QC H3G 1A4, Canada
- Amir Hooshiar
- Surgical Performance Enhancement and Robotics (SuPER) Centre, Department of Surgery, McGill University, 1650 Cedar Avenue, Montreal, QC H3G 1A4, Canada
4. Shu H, Liu M, Seenivasan L, Gu S, Ku P, Knopf J, Taylor R, Unberath M. Seamless augmented reality integration in arthroscopy: a pipeline for articular reconstruction and guidance. Healthc Technol Lett 2025;12:e12119. [PMID: 39816701] [PMCID: PMC11730702] [DOI: 10.1049/htl2.12119]
Abstract
Arthroscopy is a minimally invasive surgical procedure used to diagnose and treat joint problems. The clinical workflow of arthroscopy typically involves inserting an arthroscope into the joint through a small incision, during which surgeons navigate and operate largely by relying on their visual assessment through the arthroscope. However, the arthroscope's restricted field of view and lack of depth perception pose challenges in navigating complex articular structures and achieving surgical precision. Aiming to enhance intraoperative awareness, a robust pipeline that incorporates simultaneous localization and mapping, depth estimation, and 3D Gaussian splatting (3D GS) is presented to realistically reconstruct intra-articular structures solely from monocular arthroscope video. Extending the 3D reconstruction to augmented reality (AR) applications, the solution offers AR assistance for articular notch measurement and annotation anchoring in a human-in-the-loop manner. Compared with traditional structure-from-motion and neural radiance field-based methods, the pipeline achieves dense 3D reconstruction and competitive rendering fidelity with an explicit 3D representation in 7 min on average. When evaluated on four phantom datasets, our method achieves on average a root-mean-square error (RMSE) of 2.21 mm for reconstruction, a peak signal-to-noise ratio (PSNR) of 32.86, and a structural similarity index measure (SSIM) of 0.89. The AR measurement tool achieves accuracy within 1.59 ± 1.81 mm, and the AR annotation tool achieves a mIoU of 0.721. Because the pipeline enables AR reconstruction and guidance directly from monocular arthroscopy without any additional data or hardware, the solution may hold potential for enhancing intraoperative awareness and facilitating surgical precision in arthroscopy.
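The RMSE and PSNR figures quoted above follow standard definitions; a minimal sketch on synthetic data (array names and values are illustrative, and SSIM, which requires windowed local statistics, is omitted here):

```python
import numpy as np

def rmse(pred, gt):
    """Root-mean-square error between two arrays (e.g. depth maps in mm)."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def psnr(pred, gt, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((pred - gt) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

gt = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # synthetic ground-truth image
pred = np.clip(gt + 0.01, 0.0, 1.0)           # rendering with ~0.01 uniform error
print(rmse(pred, gt) <= 0.01)  # → True
print(psnr(pred, gt) >= 40.0)  # → True (0.01 error on a unit range is ≈ 40 dB)
```

For SSIM, a tested implementation such as scikit-image's `structural_similarity` is the usual choice rather than hand-rolling the windowed statistics.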
Affiliation(s)
- Hongchao Shu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Mingxu Liu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Suxi Gu
- Department of Orthopedics, Tsinghua Changgung Hospital, School of Medicine, Tsinghua University, Beijing, China
- Ping-Cheng Ku
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
5. Han Z, Dou Q. A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality. Comput Assist Surg (Abingdon) 2024;29:2357164. [PMID: 39253945] [DOI: 10.1080/24699322.2024.2357164]
Abstract
Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body, achieved by superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, which make preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Although various methods have been proposed to model intraoperative organ deformation, few literature reviews systematically categorize and summarize these approaches. This review aims to fill that gap by providing a comprehensive, technically oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery and to discuss potential topics for future advancement.
Affiliation(s)
- Zheng Han
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
6. Shen Y, Wang S, Shen Y, Hu J. The Application of Augmented Reality Technology in Perioperative Visual Guidance: Technological Advances and Innovation Challenges. Sensors (Basel) 2024;24:7363. [PMID: 39599139] [PMCID: PMC11598101] [DOI: 10.3390/s24227363]
Abstract
In contemporary medical practice, perioperative visual guidance technology has become a critical element in enhancing the precision and safety of surgical procedures. This study provides a comprehensive review of advances in the application of Augmented Reality (AR) technology for perioperative visual guidance. The review begins with a retrospective look at the evolution of AR technology, including its initial applications in neurosurgery. It then delves into the technical challenges that AR faces in areas such as image processing, 3D reconstruction, spatial localization, and registration, underscoring the importance of improving the accuracy of AR systems and ensuring their stability and consistency in clinical use. Finally, the review looks forward to how AR technology could be further facilitated in medical applications through the integration of cutting-edge technologies such as skin electronic devices, and how incorporating machine learning could significantly enhance the accuracy of AR visual systems. As technology continues to advance, there is ample reason to believe that AR will be seamlessly integrated into medical practice, ushering the healthcare field into a new "Golden Age".
Affiliation(s)
- Shuyi Wang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
7. Puleio F, Tosco V, Pirri R, Simeone M, Monterubbianesi R, Lo Giudice G, Lo Giudice R. Augmented Reality in Dentistry: Enhancing Precision in Clinical Procedures - A Systematic Review. Clin Pract 2024;14:2267-2283. [PMID: 39585006] [PMCID: PMC11587009] [DOI: 10.3390/clinpract14060178]
Abstract
Background: Augmented reality (AR) enhances sensory perception by adding extra information, improving anatomical localization and simplifying treatment views. In dentistry, digital planning on two-dimensional screens lacks real-time feedback, leading to potential errors. However, it is not clear whether AR can improve clinical treatment precision. The aim of this research is to evaluate whether the use of AR-based instruments can improve the precision of dental procedures. Methods: This review covered studies from January 2018 to June 2023, focusing on AR in dentistry. The PICO question was "Does AR increase the precision of dental interventions compared to non-AR techniques?". The systematic review was carried out on electronic databases, including Ovid MEDLINE, PubMed, and the Web of Science, with the following inclusion criterion: studies comparing the variation in the precision of interventions carried out with AR instruments and non-AR techniques. Results: Thirteen studies were included. Conclusions: The results of this systematic review demonstrate that AR enhances the precision of various dental procedures. The authors advise clinicians to use AR-based tools in order to improve the precision of their therapies.
Affiliation(s)
- Francesco Puleio
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, Messina University, 98100 Messina, Italy
- Vincenzo Tosco
- Department of Clinical Sciences and Stomatology (DISCO), Università Politecnica delle Marche, 60126 Ancona, Italy
- Michele Simeone
- Department of Neuroscience, Reproductive Science and Dentistry, University of Naples Federico II, 80138 Naples, Italy
- Riccardo Monterubbianesi
- Department of Clinical Sciences and Stomatology (DISCO), Università Politecnica delle Marche, 60126 Ancona, Italy
- Giorgio Lo Giudice
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, Messina University, 98100 Messina, Italy
- Roberto Lo Giudice
- Department of Human Pathology of Adults and Developmental Age, University of Messina, 98100 Messina, Italy
8. Li C, Zhang G, Zhao B, Xie D, Du H, Duan X, Hu Y, Zhang L. Advances of surgical robotics: image-guided classification and application. Natl Sci Rev 2024;11:nwae186. [PMID: 39144738] [PMCID: PMC11321255] [DOI: 10.1093/nsr/nwae186]
Abstract
The application of surgical robotics in minimally invasive surgery has developed rapidly and has attracted increasing research attention in recent years. A common consensus has been reached that surgical procedures should become less traumatic and incorporate more intelligence and higher autonomy, which poses a serious challenge to the environmental sensing capabilities of robotic systems. One of the main sources of environmental information for robots is images, which are the basis of robot vision. In this review article, we divide clinical images into direct and indirect based on the object of information acquisition, and into continuous, intermittently continuous, and discontinuous according to the target-tracking frequency. The characteristics and applications of existing surgical robots in each category are introduced along these two dimensions. Our purpose in conducting this review was to analyze, summarize, and discuss the current evidence on the general rules governing the application of image technologies for medical purposes. Our analysis provides insight and guidance conducive to the development of more advanced surgical robotic systems in the future.
Affiliation(s)
- Changsheng Li
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Gongzi Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Baoliang Zhao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dongsheng Xie
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hailong Du
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Xingguang Duan
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lihai Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
9. Mangulabnan JE, Soberanis-Mukul RD, Teufel T, Sahu M, Porras JL, Vedula SS, Ishii M, Hager G, Taylor RH, Unberath M. An endoscopic chisel: intraoperative imaging carves 3D anatomical models. Int J Comput Assist Radiol Surg 2024;19:1359-1366. [PMID: 38753135] [DOI: 10.1007/s11548-024-03151-w]
Abstract
PURPOSE: Preoperative imaging plays a pivotal role in sinus surgery, where CT offers patient-specific insight into complex anatomy, enabling real-time intraoperative navigation to complement endoscopic imaging. However, surgery elicits anatomical changes not represented in the preoperative model, generating an inaccurate basis for navigation as surgery progresses. METHODS: We propose a first vision-based approach to updating the preoperative 3D anatomical model using intraoperative endoscopic video for navigated sinus surgery, where relative camera poses are known. We rely on comparisons of intraoperative monocular depth estimates and preoperative depth renders to identify modified regions. The new depths are integrated in these regions through volumetric fusion in a truncated signed distance function representation to generate an intraoperative 3D model that reflects tissue manipulation. RESULTS: We quantitatively evaluate our approach by sequentially updating models for a five-step surgical progression in an ex vivo specimen. We compute the error between correspondences from the updated model and ground-truth intraoperative CT in the region of anatomical modification. The resulting models show a decrease in error during surgical progression, as opposed to an increase when no update is employed. CONCLUSION: Our findings suggest that preoperative 3D anatomical models can be updated using intraoperative endoscopic video in navigated sinus surgery. Future work will investigate improvements to monocular depth estimation as well as removing the need for external navigation systems. The resulting ability to continuously update the patient model may provide surgeons with a more precise understanding of the current anatomical state and paves the way toward a digital twin paradigm for sinus surgery.
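Truncated signed distance function (TSDF) fusion, the representation used here to integrate new depth into the model, keeps a weighted running average of truncated point-to-surface distances per voxel. A toy 1D sketch of the standard Curless-Levoy update (not the authors' implementation; all names and values are illustrative):

```python
import numpy as np

# 1D voxel grid along one viewing ray: centres every 1 mm from 0 to 99 mm.
centres = np.arange(100, dtype=float)
TRUNC = 5.0  # truncation distance in mm

def integrate(tsdf, weights, depth_mm, obs_weight=1.0):
    """Fuse one depth observation along the ray into the running TSDF."""
    # Signed distance: positive in front of the surface, negative behind,
    # clipped to the truncation band.
    sdf = np.clip(depth_mm - centres, -TRUNC, TRUNC)
    # Weighted running average per voxel (the Curless-Levoy update rule).
    new_w = weights + obs_weight
    return (tsdf * weights + sdf * obs_weight) / new_w, new_w

tsdf, w = np.zeros(100), np.zeros(100)
for depth in (49.0, 51.0):  # two noisy observations of a surface near 50 mm
    tsdf, w = integrate(tsdf, w, depth)
# The zero crossing of the averaged TSDF marks the fused surface estimate.
surface = centres[np.argmin(np.abs(tsdf))]
print(surface)  # → 50.0
```

The 3D case is the same update applied per voxel along camera rays, typically restricted (as in the paper) to the regions flagged as anatomically modified.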
Affiliation(s)
- Timo Teufel
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Manish Sahu
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Jose L Porras
- Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
- Masaru Ishii
- Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
- Russell H Taylor
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
- Mathias Unberath
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
10. Canton SP, Austin CN, Steuer F, Dadi S, Sharma N, Kass NM, Fogg D, Clayton E, Cunningham O, Scott D, LaBaze D, Andrews EG, Biehl JT, Hogan MV. Feasibility and Usability of Augmented Reality Technology in the Orthopaedic Operating Room. Curr Rev Musculoskelet Med 2024;17:117-128. [PMID: 38607522] [PMCID: PMC11068703] [DOI: 10.1007/s12178-024-09888-w]
Abstract
PURPOSE OF REVIEW: Augmented reality (AR) has gained popularity in various sectors, including gaming, entertainment, and healthcare. The desire for improved surgical navigation within orthopaedic surgery has led to evaluation of the feasibility and usability of AR in the operating room (OR). However, the safe and effective use of AR technology in the OR necessitates a proper understanding of its capabilities and limitations. This review aims to describe the fundamental elements of AR, highlight limitations for use within the field of orthopaedic surgery, and discuss potential areas for development. RECENT FINDINGS: To date, studies have demonstrated that AR technology can be used to enhance navigation and performance in orthopaedic procedures. General hardware and software limitations of the technology include the registration process, ergonomics, and battery life. Other limitations relate to human response factors, such as inattentional blindness, which may lead to an inability to see complications within the surgical field. Furthermore, prolonged use of AR can cause eye strain and headache due to phenomena such as the vergence-accommodation conflict. AR technology may prove to be a better alternative to current orthopaedic surgery navigation systems. However, the current limitations should be mitigated to further improve the feasibility and usability of AR in the OR setting. It is important for non-clinicians and clinicians alike to work in conjunction to guide the development of future iterations of AR technology and its implementation into the OR workflow.
Affiliation(s)
- Stephen P Canton
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
- Fritz Steuer
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Srujan Dadi
- Rowan-Virtua School of Osteopathic Medicine, Stratford, NJ, USA
- Nikhil Sharma
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Nicolás M Kass
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- David Fogg
- Texas Tech University Health Sciences Center El Paso, El Paso, TX, USA
- Elizabeth Clayton
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
- Onaje Cunningham
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Devon Scott
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
- Dukens LaBaze
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
- Edward G Andrews
- Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Jacob T Biehl
- School of Computing and Information, University of Pittsburgh, Pittsburgh, PA, USA
- MaCalus V Hogan
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
11. Bian D, Lin Z, Lu H, Zhong Q, Wang K, Tang X, Zang J. The application of extended reality technology-assisted intraoperative navigation in orthopedic surgery. Front Surg 2024;11:1336703. [PMID: 38375409] [PMCID: PMC10875025] [DOI: 10.3389/fsurg.2024.1336703]
Abstract
Extended reality (XR) technology refers to any situation in which real-world objects are enhanced with computer technology, encompassing virtual reality, augmented reality, and mixed reality. Augmented reality and mixed reality technologies have been widely applied in orthopedic clinical practice, including in teaching, preoperative planning, intraoperative navigation, and surgical outcome evaluation. The primary goal of this narrative review is to summarize the effectiveness and advantages of XR-technology-assisted intraoperative navigation in the fields of trauma, joint, spine, and bone tumor surgery, and to discuss the current shortcomings of intraoperative navigation applications. We reviewed the titles of more than 200 studies obtained from PubMed with the following search terms: extended reality, mixed reality, augmented reality, virtual reality, intraoperative navigation, and orthopedic surgery; of those, 69 related papers were selected for abstract review. Finally, the full text of 55 studies was analyzed and reviewed, and these were classified into four groups according to their content: trauma, joint, spine, and bone tumor surgery. Most of the studies we reviewed showed that XR-technology-assisted intraoperative navigation can effectively improve the accuracy of implant placement, such as that of screws and prostheses, and reduce postoperative complications caused by inaccurate implantation. It can also facilitate the achievement of tumor-free surgical margins, shorten surgical duration, reduce radiation exposure for patients and surgeons, and minimize further damage caused by the need for visual exposure during surgery. In addition, it provides richer and more efficient intraoperative communication, thereby facilitating academic exchange, medical assistance, and the implementation of remote healthcare.
Affiliation(s)
- Dongxiao Bian
- Department of Musculoskeletal Tumor, Peking University People's Hospital, Beijing, China
- Zhipeng Lin
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Hao Lu
- Traumatic Orthopedic Department, Peking University People's Hospital, Beijing, China
- Qunjie Zhong
- Arthritis Clinic and Research Center, Peking University People's Hospital, Beijing, China
- Kaifeng Wang
- Spinal Surgery Department, Peking University People's Hospital, Beijing, China
- Xiaodong Tang
- Department of Musculoskeletal Tumor, Peking University People's Hospital, Beijing, China
- Jie Zang
- Department of Musculoskeletal Tumor, Peking University People's Hospital, Beijing, China
12. Chen Z, Cruciani L, Lievore E, Fontana M, De Cobelli O, Musi G, Ferrigno G, De Momi E. Spatio-temporal layers based intra-operative stereo depth estimation network via hierarchical prediction and progressive training. Comput Methods Programs Biomed 2024;244:107937. [PMID: 38006707] [DOI: 10.1016/j.cmpb.2023.107937]
Abstract
BACKGROUND AND OBJECTIVE The safety of robotic surgery can be enhanced through augmented vision or artificial constraints on robot motion, and intra-operative depth estimation is the cornerstone of these applications because it provides precise position information about surgical scenes in 3D space. High-quality depth estimation of endoscopic scenes has long been a challenging problem, and the development of deep learning offers new possibilities for addressing it. METHODS In this paper, a deep learning-based approach is proposed to recover 3D information of intra-operative scenes. To this aim, a fully 3D encoder-decoder network integrating spatio-temporal layers is designed, and it adopts hierarchical prediction and progressive learning to enhance prediction accuracy and shorten training time. RESULTS Our network achieves a depth estimation accuracy of MAE 2.55±1.51 mm and RMSE 5.23±1.40 mm on 8 surgical videos with a resolution of 1280×1024, performing better than six other state-of-the-art methods trained on the same data. CONCLUSIONS Our network delivers promising depth estimation performance in intra-operative scenes using stereo images, allowing integration into robot-assisted surgery to enhance safety.
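The MAE and RMSE reported above are simple per-pixel error statistics over the predicted depth map. A minimal sketch of how they are computed (the function name and toy values are illustrative, not from the paper):

```python
import math

def depth_errors(pred, gt):
    """Mean absolute error and root-mean-square error between a predicted
    and a ground-truth depth map, both given as flat sequences of
    per-pixel depths (result is in the same units as the inputs, e.g. mm)."""
    diffs = [p - g for p, g in zip(pred, gt)]
    mae = sum(abs(d) for d in diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mae, rmse

# Toy example: four pixels of predicted vs. ground-truth depth (mm).
mae, rmse = depth_errors([52.0, 48.0, 50.0, 51.0], [50.0, 50.0, 50.0, 50.0])
```

Because RMSE squares each residual before averaging, it penalizes large outliers more heavily than MAE, which is why both are usually reported together.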
Affiliation(s)
- Ziyang Chen, Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Laura Cruciani, Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Elena Lievore, European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
- Matteo Fontana, European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
- Ottavio De Cobelli, European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy; University of Milan, Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, Milan, Italy
- Gennaro Musi, European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy; University of Milan, Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, Milan, Italy
- Giancarlo Ferrigno, Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Elena De Momi, Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy; European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
13
Ying M, Wang Y, Yang K, Wang H, Liu X. A deep learning knowledge distillation framework using knee MRI and arthroscopy data for meniscus tear detection. Front Bioeng Biotechnol 2024; 11:1326706. [PMID: 38292305 PMCID: PMC10825958 DOI: 10.3389/fbioe.2023.1326706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2023] [Accepted: 12/22/2023] [Indexed: 02/01/2024] Open
Abstract
Purpose: To construct a deep learning knowledge distillation framework exploring the utilization of MRI alone or combined with distilled arthroscopy information for meniscus tear detection. Methods: A database of 199 paired knee arthroscopy-MRI exams was used to develop a multimodal teacher network and an MRI-based student network, both of which used residual neural network architectures. A knowledge distillation framework comprising the multimodal teacher network T and the monomodal student network S was proposed. We optimized the loss functions of mean squared error (MSE) and cross-entropy (CE) to enable the student network S to learn arthroscopic information from the teacher network T through our deep learning knowledge distillation framework, ultimately resulting in a distilled student network S_T. A coronal proton density (PD)-weighted fat-suppressed MRI sequence was used in this study. Fivefold cross-validation was employed, and the accuracy, sensitivity, specificity, F1-score, receiver operating characteristic (ROC) curves, and area under the ROC curve (AUC) were used to evaluate the medial and lateral meniscal tear detection performance of the models, including the undistilled student model S, the distilled student model S_T, and the teacher model T. Results: The AUCs of the undistilled student model S, the distilled student model S_T, and the teacher model T for medial meniscus (MM) tear detection and lateral meniscus (LM) tear detection were 0.773/0.672, 0.792/0.751, and 0.834/0.746, respectively. The distilled student model S_T had higher AUCs than the undistilled model S. After knowledge distillation, the distilled student model demonstrated promising results, with accuracy (0.764/0.734), sensitivity (0.838/0.661), and F1-score (0.680/0.754) for medial and lateral tear detection better than those of the undistilled model (accuracy 0.734/0.648, sensitivity 0.733/0.607, F1-score 0.620/0.673).
Conclusion: Through the knowledge distillation framework, the student model S based on MRI benefited from the multimodal teacher model T and achieved improved meniscus tear detection performance.
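The distillation objective described above combines a cross-entropy term on the ground-truth label with an MSE term that pulls the student S toward the teacher T. A minimal sketch, under the assumptions that the MSE is applied to raw logits and the two terms are blended with a fixed weight alpha (both assumptions; the abstract only names the two losses):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5):
    """Cross-entropy (CE) on the ground-truth label plus mean squared
    error (MSE) pulling the student's logits toward the teacher's.
    The logit-space MSE and the fixed blend weight alpha are
    illustrative assumptions, not details from the paper."""
    ce = -math.log(softmax(student_logits)[label])
    mse = sum((s - t) ** 2 for s, t in zip(student_logits, teacher_logits)) / len(student_logits)
    return alpha * ce + (1 - alpha) * mse

# Toy two-class example: identical student/teacher logits, true label 0.
loss = distillation_loss([0.0, 0.0], [0.0, 0.0], label=0)
```

During training, only the CE term would remain at inference time: the distilled student S_T runs on MRI alone, the MSE term having transferred arthroscopic knowledge from T during optimization.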
Affiliation(s)
- Mengjie Ying, Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yufan Wang, Engineering Research Center for Digital Medicine of the Ministry of Education, Shanghai, China; School of Biomedical Engineering and Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Kai Yang, Department of Radiology, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haoyuan Wang, Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xudong Liu, Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
14
He F, Qi X, Feng Q, Zhang Q, Pan N, Yang C, Liu S. Research on augmented reality navigation of in vitro fenestration of stent-graft based on deep learning and virtual-real registration. Comput Assist Surg (Abingdon) 2023; 28:2289339. [PMID: 38059572 DOI: 10.1080/24699322.2023.2289339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/08/2023] Open
Abstract
OBJECTIVES In vitro fenestration of stent-graft (IVFS) demands high-precision navigation methods to achieve optimal surgical outcomes. This study aims to propose an augmented reality (AR) navigation method for IVFS that can provide an in situ overlay display to locate fenestration positions. METHODS We propose an AR navigation method to assist doctors in performing IVFS. A deep learning-based aorta segmentation algorithm is used to achieve automatic and rapid aorta segmentation. Vuforia-based virtual-real registration and a marker recognition algorithm are integrated to ensure an accurate in situ AR image. RESULTS The proposed method can provide a three-dimensional in situ AR image, and the fiducial registration error after virtual-real registration is 2.070 mm. The aorta segmentation experiment obtained a Dice similarity coefficient of 91.12% and a Hausdorff distance of 2.59, better than the conventional algorithms before improvement. CONCLUSIONS The proposed method can intuitively and accurately locate fenestration positions and can therefore assist doctors in performing IVFS.
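The Dice similarity coefficient used above to score the aorta segmentation is the standard overlap measure 2|A∩B| / (|A| + |B|). A minimal sketch on flat binary masks (function name and toy masks are illustrative):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary segmentation masks given as flat sequences of 0/1 values.
    Returns 1.0 for two empty masks by convention."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy 4-pixel masks: one overlapping foreground pixel out of two in each.
dsc = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])
```

Dice rewards region overlap, while the Hausdorff distance also reported above penalizes the single worst boundary deviation, which is why the two are complementary.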
Affiliation(s)
- Fengfeng He, Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiaoyu Qi, Department of Vascular Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qingmin Feng, Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qiang Zhang, Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ning Pan, School of Biomedical Engineering, South-Central Minzu University, Wuhan, China
- Chao Yang, Department of Vascular Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shenglin Liu, Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
15
Yavari E, Moosa S, Cohen D, Cantu-Morales D, Nagai K, Hoshino Y, de Sa D. Technology-assisted anterior cruciate ligament reconstruction improves tunnel placement but leads to no change in clinical outcomes: a systematic review and meta-analysis. Knee Surg Sports Traumatol Arthrosc 2023; 31:4299-4311. [PMID: 37329370 DOI: 10.1007/s00167-023-07481-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2023] [Accepted: 06/02/2023] [Indexed: 06/19/2023]
Abstract
PURPOSE To investigate the effect of technology-assisted anterior cruciate ligament reconstruction (ACLR) on post-operative clinical outcomes and tunnel placement compared to conventional arthroscopic ACLR. METHODS CENTRAL, MEDLINE, and Embase were searched from January 2000 to November 17, 2022. Articles were included if there was intraoperative use of computer-assisted navigation, robotics, diagnostic imaging, computer simulations, or 3D printing (3DP). Two reviewers searched, screened, and evaluated the included studies for data quality. Data were abstracted using descriptive statistics and pooled using relative risk ratios (RR) or mean differences (MD), both with 95% confidence intervals (CI), where appropriate. RESULTS Eleven studies were included, with a total of 775 patients, the majority male (70.7%). Ages ranged from 14 to 54 years (391 patients) and follow-up ranged from 12 to 60 months (775 patients). Subjective International Knee Documentation Committee (IKDC) scores increased in the technology-assisted surgery group (473 patients; P = 0.02; MD 1.97, 95% CI 0.27 to 3.66). There was no difference in objective IKDC scores (447 patients; RR 1.02, 95% CI 0.98 to 1.06), Lysholm scores (199 patients; MD 1.14, 95% CI -1.03 to 3.30) or negative pivot-shift tests (278 patients; RR 1.07, 95% CI 0.97 to 1.18) between the two groups. When using technology-assisted surgery, 6 (351 patients) of 8 (451 patients) studies reported more accurate femoral tunnel placement and 6 (321 patients) of 10 (561 patients) studies reported more accurate tibial tunnel placement in at least one measure. One study (209 patients) demonstrated a significant increase in cost associated with use of computer-assisted navigation (mean 1158€) versus conventional surgery (mean 704€). For the two studies using 3DP templates, production costs ranging from $10 to $42 USD were cited. There was no difference in adverse events between the two groups.
CONCLUSION Clinical outcomes do not differ between technology-assisted surgery and conventional surgery. Computer-assisted navigation is more expensive and time-consuming, while 3DP is inexpensive and does not lead to greater operating times. ACLR tunnels can be more accurately located in radiologically ideal positions by using technology, but anatomic placement remains undetermined because of the variability and inaccuracy of the evaluation systems used. LEVEL OF EVIDENCE Level III.
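The relative risk ratios (RR) with 95% CI reported above rest, for a single study, on the standard log-normal approximation for a 2x2 table. A minimal single-study sketch (function name and event counts are illustrative, and the meta-analytic pooling across studies is not shown):

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk ratio (RR) of group A vs. group B with a 95%
    confidence interval from the usual log-normal approximation:
    SE(log RR) = sqrt(1/a - 1/Na + 1/b - 1/Nb)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative counts: 30/100 events in group A vs. 20/100 in group B.
rr, lo, hi = risk_ratio_ci(30, 100, 20, 100)
```

An RR whose interval straddles 1.0 (as with the pivot-shift result above, RR 1.07, CI 0.97 to 1.18) is not statistically significant at the 5% level.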
Affiliation(s)
- Ehsan Yavari, Michael G. DeGroote School of Medicine, McMaster University, Waterloo Regional Campus, Kitchener, ON, N2G 1C5, Canada
- Sabreena Moosa, Michael G. DeGroote School of Medicine, McMaster University, Waterloo Regional Campus, Kitchener, ON, N2G 1C5, Canada
- Dan Cohen, Division of Orthopaedic Surgery, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Kanto Nagai, Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, Japan
- Yuichi Hoshino, Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, Japan
- Darren de Sa, Division of Orthopaedic Surgery, Department of Surgery, McMaster University, 1280 Main Street West, MUMC 4E14, Hamilton, ON, L8S 4L8, Canada
16
Fang C, Mo P, Chan H, Cheung J, Wong JSH, Wong TM, Mak YK, Ching K, Ho G, Leung F. Can a Wireless Full-HD Head Mounted Display System Improve Knee Arthroscopy Performance? - A Randomized Study Using a Knee Simulator. Surg Innov 2023; 30:477-485. [PMID: 36448618 PMCID: PMC10403956 DOI: 10.1177/15533506221142960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/06/2023]
Abstract
INTRODUCTION Our prototype wireless full-HD augmented reality head-mounted display (AR-HMD) aims to eliminate surgeon head turning and reduce theater clutter. Learning and performance versus TV monitors (TVM) were evaluated in simulated knee arthroscopy. METHODS Nineteen surgeons and 19 novices were randomized into either the control group (A) or the intervention group (B) and tasked with performing 5 simulated loose-body retrieval procedures on a bench-top knee arthroscopy simulator. A cross-over study design was adopted whereby subjects alternated between devices during trials 1-3, deemed the "Unfamiliar" phase, and then used the same device consecutively in trials 4-5 to assess performance in a more "Familiarized" state. Measured outcomes were time-to-completion and incidence of bead drops. RESULTS In the unfamiliar phase, HMD had a 67% longer mean time-to-completion than TVM (194.7 ± 152.6 s vs 116.7 ± 78.7 s, P < .001). Once familiarized, HMD remained inferior to TVM, with 48% longer completion times (133.8 ± 123.3 s vs 90.6 ± 55 s, P = .052). Cox regression revealed that device type (OR = 0.526, CI 0.391-0.709, P < .001) and number of procedure repetitions (OR = 1.186, CI 1.072-1.311, P = .001) were significantly and independently related to faster time-to-completion. However, experience was not a significant factor (OR = 1.301, CI 0.971-1.741, P = .078). Bead drops were similar between the groups in both the unfamiliar (HMD: 27 vs TVM: 22, P = .65) and familiarized phases (HMD: 11 vs TVM: 17, P = .97). CONCLUSION Arthroscopic procedures continue to be better performed under conventional TVM. However, similar quality levels can be reached with HMD when given more time. Given the theoretical advantages, further research into improving HMD designs is advocated.
Affiliation(s)
- Christian Fang, Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Pinky Mo, The University of Hong Kong, Hong Kong
- Holy Chan, The University of Hong Kong, Hong Kong
- Jake Cheung, Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Janus Siu Him Wong, Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Tak-Man Wong, Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Yan-Kit Mak, Department of Orthopaedics and Traumatology, Pamela Youde Nethersole Eastern Hospital, Hong Kong
- Kathine Ching, Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Grace Ho, Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Frankie Leung, Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
17
Pierzchajlo N, Stevenson TC, Huynh H, Nguyen J, Boatright S, Arya P, Chakravarti S, Mehrki Y, Brown NJ, Gendreau J, Lee SJ, Chen SG. Augmented Reality in Minimally Invasive Spinal Surgery: A Narrative Review of Available Technology. World Neurosurg 2023; 176:35-42. [PMID: 37059357 DOI: 10.1016/j.wneu.2023.04.030] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 04/08/2023] [Indexed: 04/16/2023]
Abstract
INTRODUCTION Spine surgery has undergone significant changes in approach and technique. With the adoption of intraoperative navigation, minimally invasive spinal surgery (MISS) has arguably become the gold standard. Augmented reality (AR) has now emerged as a front-runner in anatomical visualization and narrower operative corridors. In effect, AR is poised to revolutionize surgical training and operative outcomes. Our study examines the current literature on AR-assisted MISS, synthesizes the findings, and creates a narrative highlighting the history and future of AR in spine surgery. MATERIALS AND METHODS Relevant literature was gathered using the PubMed (Medline) database from 1975 to 2023. Pedicle screw placement models were the primary intervention in AR. These were compared to the outcomes of traditional MISS. RESULTS We found that AR devices on the market show promising clinical outcomes in preoperative training and intraoperative use. Three prominent systems were XVision, HoloLens, and ImmersiveTouch. In the studies, surgeons, residents, and medical students had opportunities to operate AR systems, showcasing their educational potential across each phase of learning. Specifically, one facet described training with cadaver models to gauge accuracy in pedicle screw placement. AR-MISS exceeded free-hand methods without unique complications or contraindications. CONCLUSIONS While still in its infancy, AR has already proven beneficial for educational training and intraoperative MISS applications. We believe that with continued research and advancement of this technology, AR is poised to become a dominant player within the fundamentals of surgical education and MISS operative technique.
Affiliation(s)
- Huey Huynh, Mercer University, School of Medicine, Savannah, GA, USA
- Jimmy Nguyen, Mercer University, School of Medicine, Savannah, GA, USA
- Priya Arya, Mercer University, School of Medicine, Savannah, GA, USA
- Yusuf Mehrki, Department of Neurosurgery, University of Florida, Jacksonville, FL, USA
- Nolan J Brown, Department of Neurosurgery, University of California Irvine, Orange, CA, USA
- Julian Gendreau, Department of Biomedical Engineering, Johns Hopkins Whiting School of Engineering, Baltimore, MD, USA
- Seung Jin Lee, Department of Neurosurgery, Mayo Clinic, Jacksonville, FL, USA
- Selby G Chen, Department of Neurosurgery, Mayo Clinic, Jacksonville, FL, USA
18
León-Muñoz VJ, Moya-Angeler J, López-López M, Lisón-Almagro AJ, Martínez-Martínez F, Santonja-Medina F. Integration of Square Fiducial Markers in Patient-Specific Instrumentation and Their Applicability in Knee Surgery. J Pers Med 2023; 13:jpm13050727. [PMID: 37240897 DOI: 10.3390/jpm13050727] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 04/23/2023] [Accepted: 04/23/2023] [Indexed: 05/28/2023] Open
Abstract
Computer technologies play a crucial role in orthopaedic surgery and are essential in personalising different treatments. Recent advances allow the use of augmented reality (AR) for many orthopaedic procedures, including different types of knee surgery. AR mediates the interaction between virtual environments and the physical world, allowing the two to intermingle (AR superimposes information on real objects in real time) through an optical device and making it possible to personalise different processes for each patient. This article aims to describe the integration of fiducial markers in the planning of knee surgeries and to provide a narrative description of the latest publications on AR applications in knee surgery. Augmented reality-assisted knee surgery is an emerging set of techniques that can increase accuracy, efficiency, and safety and decrease radiation exposure (in some surgical procedures, such as osteotomies) compared with conventional methods. Initial clinical experience with AR projection based on ArUco-type artificial marker sensors has shown promising results and received positive operator feedback. Once initial clinical safety and efficacy have been demonstrated, continued experience should be studied to validate this technology and generate further innovation in this rapidly evolving field.
Affiliation(s)
- Vicente J León-Muñoz, Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain; Instituto de Cirugía Avanzada de la Rodilla (ICAR), 30005 Murcia, Spain
- Joaquín Moya-Angeler, Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain; Instituto de Cirugía Avanzada de la Rodilla (ICAR), 30005 Murcia, Spain
- Mirian López-López, Subdirección General de Tecnologías de la Información, Servicio Murciano de Salud, 30100 Murcia, Spain
- Alonso J Lisón-Almagro, Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain
- Francisco Martínez-Martínez, Department of Orthopaedic Surgery and Traumatology, Hospital Clínico Universitario Virgen de la Arrixaca, 30120 Murcia, Spain
- Fernando Santonja-Medina, Department of Orthopaedic Surgery and Traumatology, Hospital Clínico Universitario Virgen de la Arrixaca, 30120 Murcia, Spain; Department of Surgery, Pediatrics and Obstetrics & Gynecology, Faculty of Medicine, University of Murcia, 30120 Murcia, Spain
19
Brockmeyer P, Wiechens B, Schliephake H. The Role of Augmented Reality in the Advancement of Minimally Invasive Surgery Procedures: A Scoping Review. Bioengineering (Basel) 2023; 10:501. [PMID: 37106688 PMCID: PMC10136262 DOI: 10.3390/bioengineering10040501] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Revised: 04/13/2023] [Accepted: 04/20/2023] [Indexed: 04/29/2023] Open
Abstract
The purpose of this review was to analyze the evidence on the role of augmented reality (AR) in the improvement of minimally invasive surgical (MIS) procedures. A scoping literature search of the PubMed and ScienceDirect databases was performed to identify articles published in the last five years that addressed the direct impact of AR technology on MIS procedures or that addressed an area of education or clinical care that could potentially be used for MIS development. A total of 359 studies were screened and 31 articles were reviewed in depth and categorized into three main groups: Navigation, education and training, and user-environment interfaces. A comparison of studies within the different application groups showed that AR technology can be useful in various disciplines to advance the development of MIS. Although AR-guided navigation systems do not yet offer a precision advantage, benefits include improved ergonomics and visualization, as well as reduced surgical time and blood loss. Benefits can also be seen in improved education and training conditions and improved user-environment interfaces that can indirectly influence MIS procedures. However, there are still technical challenges that need to be addressed to demonstrate added value to patient care and should be evaluated in clinical trials with sufficient patient numbers or even in systematic reviews or meta-analyses.
Affiliation(s)
- Phillipp Brockmeyer, Department of Oral and Maxillofacial Surgery, University Medical Center Goettingen, D-37075 Goettingen, Germany
- Bernhard Wiechens, Department of Orthodontics, University Medical Center Goettingen, D-37075 Goettingen, Germany
- Henning Schliephake, Department of Oral and Maxillofacial Surgery, University Medical Center Goettingen, D-37075 Goettingen, Germany
20
Jeung D, Jung K, Lee HJ, Hong J. Augmented reality-based surgical guidance for wrist arthroscopy with bone-shift compensation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107323. [PMID: 36608430 DOI: 10.1016/j.cmpb.2022.107323] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 08/17/2022] [Accepted: 12/22/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVES The intraoperative joint condition differs from preoperative CT/MR because of the motion applied during surgery, leading to inaccurate approaches to surgical targets. This study aims to provide real-time augmented reality (AR)-based surgical guidance for wrist arthroscopy based on a bone-shift model, validated through an in vivo computed tomography (CT) study. METHODS To accurately visualize concealed wrist bones on the intra-articular arthroscopic image, we propose a surgical guidance system with a novel bone-shift compensation method using noninvasive fiducial markers. First, to measure the effect of traction during surgery, two noninvasive fiducial markers were attached before surgery. In addition, two virtual link models connecting the wrist bones were implemented. When wrist traction occurs during the operation, the movement of the fiducial marker is measured, and bone-shift compensation is applied to move the virtual links in the direction of the traction. The proposed bone-shift compensation method was verified with the in vivo CT data of 10 participants. Finally, to introduce AR, camera calibration for the arthroscope parameters was performed, and a patient-specific template was used for registration between the patient and the wrist bone model. As a result, a virtual bone model with three-dimensional information could be accurately projected onto a two-dimensional arthroscopic image plane. RESULTS The proposed method could estimate the position of the wrist bones in the traction state within a 1.4 mm margin of accuracy. After bone-shift compensation was applied, the target point error was reduced by 33.6% in the lunate, 63.3% in the capitate, 55.0% in the scaphoid, and 74.8% in the trapezoid compared with preoperative wrist CT. In addition, a phantom experiment simulating the real surgical environment was conducted. The AR display expanded the field of view (FOV) of the arthroscope and helped visualize the anatomical structures around the bones.
CONCLUSIONS This study demonstrated the successful handling of AR error caused by wrist traction using the proposed method. In addition, the method allowed accurate AR visualization of the concealed bones and expansion of the limited FOV of the arthroscope. The proposed bone-shift compensation can also be applied to other joints, such as the knees or shoulders, by representing their bone movements using corresponding virtual links. In addition, the movement of the joint skin during surgery can be measured using noninvasive fiducial markers in the same manner as that used for the wrist joint.
Affiliation(s)
- Deokgi Jeung, Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
- Kyunghwa Jung, Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea; Korea Research Institute of Standards and Science, Daejeon, South Korea
- Hyun-Joo Lee, Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, South Korea
- Jaesung Hong, Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
21
Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023; 68. [PMID: 36580681 DOI: 10.1088/1361-6560/acaf23] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Accepted: 12/29/2022] [Indexed: 12/31/2022]
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. AR visualization is divided into two categories, in situ visualization and non-in situ visualization, and the rendered content varies widely. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in surgical fields. However, most AR applications were evaluated through model experiments and animal experiments, and there are relatively few clinical experiments, indicating that the current AR navigation methods are still in an early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as the future development trend. Although AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
Affiliation(s)
- Longfei Ma, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
22
Abstract
Augmented reality (AR) is an innovative system that enhances the real world by superimposing virtual objects on reality. The aim of this study was to analyze the application of AR in medicine and determine which of its technical solutions are the most used. We carried out a scoping review of articles published between 2019 and February 2022. The initial search yielded a total of 2649 articles. After applying filters, removing duplicates, and screening, we included 34 articles in our analysis. The analysis highlighted that AR has been traditionally and mainly used in orthopedics, in addition to maxillofacial surgery and oncology. Regarding the display used in AR, the Microsoft HoloLens optical viewer is the most common method. Moreover, for the tracking and registration phases, the marker-based method with rigid registration remains the most used system. Overall, the results of this study suggested that AR is an innovative technology with numerous advantages, finding applications in several new surgical domains. Considering the available data, it is not possible to clearly identify all the fields of application and the best technologies regarding AR.
23
Augmented Reality: Mapping Methods and Tools for Enhancing the Human Role in Healthcare HMI. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12094295] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Background: Augmented Reality (AR) represents an innovative technology to improve data visualization and strengthen human perception. Among Human–Machine Interaction (HMI) domains, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on AR-based orthopedic surgery techniques was evaluated, focusing on the limitations and challenges of AR-based healthcare applications, to support further research and development. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used to improve the review reporting. Results: The authors selected sixty-two articles meeting the inclusion criteria, categorized according to the purpose of the study (intraoperative, training, rehabilitation) and the surgical procedure used. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. This review directs further research toward problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.
24
Pan J, Yu D, Li R, Huang X, Wang X, Zheng W, Zhu B, Liu X. Multi-Modality guidance based surgical navigation for percutaneous endoscopic transforaminal discectomy. Comput Methods Programs Biomed 2021; 212:106460. [PMID: 34736173 DOI: 10.1016/j.cmpb.2021.106460] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 10/06/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE Fluoroscopic guidance is a critical step for the puncture procedure in percutaneous endoscopic transforaminal discectomy (PETD). However, two-dimensional observations of the three-dimensional anatomic structure suffer from the effects of projective simplification. To accurately assess the spatial relations between the patient's vertebral tissues and the puncture needle, surgeons need to acquire a considerable number of fluoroscopic images from different orientations. This process significantly increases the radiation risk for both the patient and the surgeons. METHODS In this paper, we propose an augmented reality (AR) surgical navigation system for PETD based on multi-modality information, combining fluoroscopy, optical tracking, and a depth camera. To register the fluoroscopic image with the intraoperative video, we design a lightweight non-invasive fiducial with markers and detect the markers with a deep learning method. The system displays the intraoperative video fused with the registered fluoroscopic images. We also present a self-adaptive calibration and transformation method between a 6-DOF optical tracking device and a depth camera, which lie in different coordinate systems. RESULTS With a substantially reduced frequency of fluoroscopy imaging, the system can accurately track and superimpose the virtual puncture needle on fluoroscopy images in real time. In vivo animal experiments in the operating theatre showed that the system's average positioning accuracy reaches 1.98 mm and its orientation accuracy reaches 1.19°. The clinical validation results show that the system significantly lowers the frequency of fluoroscopy imaging (by 42.7%) and reduces the radiation risk for both the patient and the surgeons. CONCLUSION Together with the user study, both the quantitative and qualitative results indicate that our navigation system has the potential to be highly useful in clinical practice. Compared with existing navigation systems, which are usually equipped with large and high-cost medical equipment such as O-arms, cone-beam CT, and robots, our system needs no special equipment and can be implemented with common operating-room equipment such as a C-arm and a desktop computer, even in small hospitals.
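The calibration step described in this abstract chains coordinate frames (optical tracker, depth camera, needle). The paper's actual method is not reproduced here; the following is a minimal illustrative sketch of how a tracked needle-tip point can be mapped between frames by composing homogeneous transforms. All frame names and numeric values are hypothetical.

```python
import numpy as np

def rigid_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_point(T, p):
    """Apply a homogeneous transform T to a 3D point p."""
    ph = np.append(p, 1.0)          # promote to homogeneous coordinates
    return (T @ ph)[:3]

# Hypothetical calibration result: the depth-camera frame is the tracker frame
# rotated 90 degrees about z and shifted 1 unit along x.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Rz = np.array([[c, -s, 0.0],
               [s,  c, 0.0],
               [0.0, 0.0, 1.0]])
T_cam_trk = rigid_transform(Rz, np.array([1.0, 0.0, 0.0]))

# Needle-tip position as reported by the optical tracker, mapped into the camera frame:
p_trk = np.array([0.0, 1.0, 0.0])
p_cam = transform_point(T_cam_trk, p_trk)   # -> approximately [0, 0, 0]
```

In a full navigation pipeline, further transforms (e.g. camera to fluoroscopy image) would be composed the same way by matrix multiplication before projecting the needle onto the image.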
Affiliation(s)
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; PENG CHENG Laboratory, Shenzhen 518000, China
- Dongfang Yu
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Ranyang Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; PENG CHENG Laboratory, Shenzhen 518000, China
- Xin Huang
- The Pain Medicine Center, Peking University Third Hospital, Beijing, China
- Xinliang Wang
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Wenhao Zheng
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Bin Zhu
- The Pain Medicine Center, Peking University Third Hospital, Beijing, China
- Xiaoguang Liu
- The Pain Medicine Center, Peking University Third Hospital, Beijing, China
25
Huang T, Li R, Li Y, Zhang X, Liao H. Augmented reality-based autostereoscopic surgical visualization system for telesurgery. Int J Comput Assist Radiol Surg 2021; 16:1985-1997. [PMID: 34363583 DOI: 10.1007/s11548-021-02463-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 07/15/2021] [Indexed: 10/20/2022]
Abstract
PURPOSE The visualization of remote surgical scenes is the key to realizing remote operation of surgical robots. However, current non-endoscopic surgical robot systems lack an effective visualization tool offering sufficient surgical scene information and depth perception. METHODS We propose a novel autostereoscopic surgical visualization system integrating 3D intraoperative scene reconstruction, autostereoscopic 3D display, and augmented reality-based image fusion. The preoperative organ structure and the intraoperative surface point cloud are obtained from medical imaging and an RGB-D camera, respectively, and aligned by an automatic marker-free intraoperative registration algorithm. After registration, preoperative meshes with precalculated illumination and the intraoperative textured point cloud are blended in real time. Finally, the fused image is shown on a 3D autostereoscopic display device to provide depth perception. RESULTS A prototype of the autostereoscopic surgical visualization system was built. It achieved a horizontal image resolution of 1.31 mm, a vertical image resolution of 0.82 mm, an average rendering rate of 33.1 FPS, an average registration rate of 20.5 FPS, and average registration errors of approximately 3 mm. A telesurgical robot prototype based on the 3D autostereoscopic display was also built. Quantitative evaluation experiments showed that our system achieved operational accuracy (1.79 ± 0.87 mm) similar to the conventional system (1.95 ± 0.71 mm), while reducing completion time by 34.11% and path length by 35.87%. Post-experimental questionnaires indicated that the system was user-friendly for both novices and experts. CONCLUSION We propose a 3D surgical visualization system with augmented instruction and depth perception for telesurgery. The qualitative and quantitative evaluation results illustrate the accuracy and efficiency of the proposed system, showing great prospects in robotic surgery and telesurgery.
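The marker-free registration step in this abstract aligns a preoperative model with an intraoperative point cloud. The paper's algorithm is not detailed here; the sketch below shows only the standard SVD-based (Kabsch) rigid-alignment building block, under the simplifying assumption of known point correspondences, on synthetic data.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    assuming row i of P corresponds to row i of Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic example: a "preoperative" point set and an "intraoperative"
# observation of it, rotated about z and translated.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
Q = P @ Rz.T + t_true

R_est, t_est = kabsch(P, Q)
rmse = np.sqrt(np.mean(np.sum((P @ R_est.T + t_est - Q) ** 2, axis=1)))
```

Real marker-free registration must also establish the correspondences themselves (e.g. ICP-style nearest-neighbor iterations or feature matching); the closed-form step above is then applied repeatedly inside that loop.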
Affiliation(s)
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ruiyang Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yangxi Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xinran Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China