1
Asadi Z, Asadi M, Kazemipour N, Léger É, Kersten-Oertel M. A decade of progress: bringing mixed reality image-guided surgery systems in the operating room. Comput Assist Surg (Abingdon) 2024; 29:2355897. PMID: 38794834. DOI: 10.1080/24699322.2024.2355897.
Abstract
Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review paper on MR IGS systems. In addition to examining the current surgical domains using MR systems, we explore trends in types of MR hardware used, type of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.
Affiliation(s)
- Zahra Asadi: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Mehrdad Asadi: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Negar Kazemipour: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Étienne Léger: Montréal Neurological Institute & Hospital (MNI/H), Montréal, Canada; McGill University, Montréal, Canada
- Marta Kersten-Oertel: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
2
Abstract
BACKGROUND In recent years, numerous innovative yet challenging surgeries, such as minimally invasive procedures, have introduced an overwhelming number of new technologies, increasing surgeons' cognitive load and potentially diluting their attention. Cognitive support technologies (CSTs) have been developed to reduce surgeons' cognitive load and minimize errors. Despite high demand, CSTs still lack a systematic review. METHODS PubMed, Web of Science, and IEEE Xplore were searched for literature published up to May 21, 2021. Studies that aimed to reduce the cognitive load of surgeons were included. Studies containing an experimental trial with real patients and real surgeons were prioritized, although phantom and animal studies were also included. Major outcomes assessed included surgical error, anatomical localization accuracy, total procedural time, and patient outcome. RESULTS A total of 37 studies were included. Overall, implementations of CSTs achieved better surgical performance than traditional methods. Most studies reported decreased error rates and increased efficiency. In terms of accuracy, most CSTs identified anatomical markers with over 90% accuracy and an error margin below 5 mm. Most studies reported a decrease in surgical time, although some differences were not statistically significant. DISCUSSION CSTs have been shown to reduce the mental workload of surgeons. However, the limited ergonomic design of current CSTs has hindered their widespread use in the clinical setting. Overall, more clinical data on actual patients are needed to provide concrete evidence before CSTs can be ubiquitously implemented.
Affiliation(s)
- Zhong Shi Zhang: Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Yun Wu: Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Bin Zheng: Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
3
Ding AS, Lu A, Li Z, Sahu M, Galaiya D, Siewerdsen JH, Unberath M, Taylor RH, Creighton FX. A Self-Configuring Deep Learning Network for Segmentation of Temporal Bone Anatomy in Cone-Beam CT Imaging. Otolaryngol Head Neck Surg 2023; 169:988-998. PMID: 36883992. PMCID: PMC11060418. DOI: 10.1002/ohn.317.
Abstract
OBJECTIVE Preoperative planning for otologic or neurotologic procedures often requires manual segmentation of relevant structures, which can be tedious and time-consuming. Automated methods for segmenting multiple geometrically complex structures can not only streamline preoperative planning but also augment minimally invasive and/or robot-assisted procedures in this space. This study evaluates a state-of-the-art deep learning pipeline for semantic segmentation of temporal bone anatomy. STUDY DESIGN A descriptive study of a segmentation network. SETTING Academic institution. METHODS A total of 15 high-resolution cone-beam temporal bone computed tomography (CT) data sets were included in this study. All images were co-registered, with relevant anatomical structures (eg, ossicles, inner ear, facial nerve, chorda tympani, bony labyrinth) manually segmented. Predicted segmentations from no new U-Net (nnU-Net), an open-source 3-dimensional semantic segmentation neural network, were compared against ground-truth segmentations using modified Hausdorff distances (mHD) and Dice scores. RESULTS Fivefold cross-validation results with nnU-Net, comparing predicted and ground-truth labels, were as follows: malleus (mHD: 0.044 ± 0.024 mm, Dice: 0.914 ± 0.035), incus (mHD: 0.051 ± 0.027 mm, Dice: 0.916 ± 0.034), stapes (mHD: 0.147 ± 0.113 mm, Dice: 0.560 ± 0.106), bony labyrinth (mHD: 0.038 ± 0.031 mm, Dice: 0.952 ± 0.017), and facial nerve (mHD: 0.139 ± 0.072 mm, Dice: 0.862 ± 0.039). Comparison against atlas-based segmentation propagation showed significantly higher Dice scores for all structures (p < .05). CONCLUSION Using an open-source deep learning pipeline, we demonstrate consistently submillimeter accuracy for semantic CT segmentation of temporal bone anatomy compared to hand-segmented labels. This pipeline has the potential to greatly improve preoperative planning workflows for a variety of otologic and neurotologic procedures and augment existing image guidance and robot-assisted systems for the temporal bone.
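The two metrics used in this abstract (Dice score and modified Hausdorff distance) can be sketched in a few lines. This is a generic illustration of the metric definitions, not the authors' evaluation code, and the small test arrays below are made up for illustration:

```python
import numpy as np

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def modified_hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Modified Hausdorff distance (Dubuisson & Jain): the larger of the two
    mean nearest-neighbour distances between point sets of shape (N,3)/(M,3)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

In practice the point sets would be the surface voxels of the predicted and ground-truth segmentations; identical masks give a Dice of 1.0 and an mHD of 0 mm.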
Affiliation(s)
- Andy S. Ding: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Manish Sahu: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H. Siewerdsen: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H. Taylor: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
4
Enkaoua A, Islam M, Ramalhinho J, Dowrick T, Booker J, Khan DZ, Marcus HJ, Clarkson MJ. Image-guidance in endoscopic pituitary surgery: an in-silico study of errors involved in tracker-based techniques. Front Surg 2023; 10:1222859. PMID: 37780914. PMCID: PMC10540627. DOI: 10.3389/fsurg.2023.1222859.
Abstract
Background Endoscopic endonasal surgery is an established minimally invasive technique for resecting pituitary adenomas. However, understanding orientation and identifying critical neurovascular structures in this anatomically dense region can be challenging. In clinical practice, commercial navigation systems use a tracked pointer for guidance. Augmented Reality (AR) is an emerging technology used for surgical guidance. It can be tracker based or vision based, but neither is widely used in pituitary surgery. Methods This pre-clinical study aims to assess the accuracy of tracker-based navigation systems, including those that allow for AR. Two setups were used to conduct simulations: (1) the standard pointer setup, tracked by an infrared camera; and (2) the endoscope setup that allows for AR, using reflective markers on the end of the endoscope, tracked by infrared cameras. The error sources were estimated by calculating the Euclidean distance between a point's true location and the point's location after passing it through the noisy system. A phantom study was then conducted to verify the in-silico simulation results and show a working example of image-based navigation errors in current methodologies. Results The errors of the tracked pointer and tracked endoscope simulations were 1.7 and 2.5 mm respectively. The phantom study showed errors of 2.14 and 3.21 mm for the tracked pointer and tracked endoscope setups respectively. Discussion In pituitary surgery, precise neighboring structure identification is crucial for success. However, our simulations reveal that the errors of tracked approaches were too large to meet the fine error margins required for pituitary surgery. In order to achieve the required accuracy, we would need much more accurate tracking, better calibration and improved registration techniques.
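The error-propagation idea described above (the Euclidean distance between a point's true location and its location after passing through the noisy tracked system) can be illustrated with a minimal Monte Carlo sketch. This is not the authors' simulation: the fiducial layout, noise level, and use of a Kabsch point-based registration are illustrative assumptions.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(0)
fiducials = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0], [0, 0, 60]], float)  # mm
target = np.array([30.0, 30.0, 30.0])  # hypothetical point of surgical interest
sigma = 0.3  # mm, assumed isotropic marker-localisation noise

errors = []
for _ in range(2000):
    noisy = fiducials + rng.normal(0.0, sigma, fiducials.shape)
    R, t = rigid_fit(fiducials, noisy)           # registration corrupted by noise
    errors.append(np.linalg.norm(R @ target + t - target))
mean_tre = float(np.mean(errors))
print(f"mean target registration error: {mean_tre:.2f} mm")
```

Repeating this with noise models for each component (pointer calibration, endoscope tracking, registration) and summing the contributions is the essence of such an in-silico error budget.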
Affiliation(s)
- Aure Enkaoua: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mobarakol Islam: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- João Ramalhinho: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Thomas Dowrick: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- James Booker: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Danyal Z. Khan: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Hani J. Marcus: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Matthew J. Clarkson: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
5
Suresh D, Aydin A, James S, Ahmed K, Dasgupta P. The Role of Augmented Reality in Surgical Training: A Systematic Review. Surg Innov 2023; 30:366-382. PMID: 36412148. PMCID: PMC10331622. DOI: 10.1177/15533506221140506.
Abstract
This review aims to provide an update on the role of augmented reality (AR) in surgical training and investigate whether the use of AR improves performance measures compared to traditional approaches in surgical trainees. PUBMED, EMBASE, Google Scholar, Cochrane Library, British Library and Science Direct were searched following PRISMA guidelines. All English language original studies pertaining to AR in surgical training were eligible for inclusion. Qualitative analysis was performed and results were categorised according to simulator models, subsequently being evaluated using Messick's framework for validity and McGaghie's translational outcomes for simulation-based learning. Of the 1132 results retrieved, 45 were included in the study. Twenty-nine platforms were identified, with the highest 'level of effectiveness' recorded as 3. In terms of validity parameters, 10 AR models received a strong 'content validity' score of 2, and 15 models had a 'response processes' score ≥ 1. 'Internal structure' and 'consequences' were largely not discussed. 'Relations to other variables' was the best assessed criterion, with 9 platforms achieving a high score of 2. Overall, the Microsoft HoloLens received the highest level of recommendation for both validity and level of effectiveness. Augmented reality in surgical education is feasible and effective as an adjunct to traditional training. The Microsoft HoloLens has shown the most promising results across all parameters and produced improved performance measures in surgical trainees. For the other simulator models, further research with stronger study designs is required to validate the use of AR in surgical training.
Affiliation(s)
- Dhivya Suresh: Guy’s, King’s and St Thomas’ School of Medical Education, King’s College London, London, UK
- Abdullatif Aydin: MRC Centre for Transplantation, Guy’s Hospital, King’s College London, London, UK
- Stuart James: Department of General Surgery, Princess Royal University Hospital, London, UK
- Kamran Ahmed: MRC Centre for Transplantation, Guy’s Hospital, King’s College London, London, UK
- Prokar Dasgupta: MRC Centre for Transplantation, Guy’s Hospital, King’s College London, London, UK
6
Cao Y, Shi Y, Hong W, Dai P, Sun X, Yu H, Xie L. Continuum robots for endoscopic sinus surgery: Recent advances, challenges, and prospects. Int J Med Robot 2023; 19:e2471. PMID: 36251333. DOI: 10.1002/rcs.2471.
Abstract
PURPOSE Endoscopic sinus surgery (ESS) has been recognized as an effective treatment modality for paranasal sinus diseases. Over the past decade, continuum robots (CRs) for ESS have been studied, but some challenges remain. This paper presents a review of scientific studies of CRs for ESS. METHODS Based on an analysis of the anatomical structure of the paranasal sinus, the requirements of CRs for ESS are discussed. Recent studies on rigid robots, handheld flexible robots, and CRs for ESS are presented. Surgical path planning, navigation, and control are also included. RESULTS Concentric tube CRs and cable-driven CRs have great potential for applications in ESS. CRs incorporating multiple replaceable arms with different functions are preferable for ESS. CONCLUSION Further study of navigation and control is required to improve the performance of CRs for ESS.
Affiliation(s)
- Yongfeng Cao: School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yuxuan Shi: Department of Otolaryngology, Eye and ENT Hospital, Fudan University, Shanghai, China; Research Units of New Technologies of Endoscopic Surgery in Skull Base Tumor, Chinese Academy of Medical Sciences, Beijing, China
- Wuzhou Hong: School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Peidong Dai: Department of Otolaryngology, Eye and ENT Hospital, Fudan University, Shanghai, China
- Xicai Sun: Department of Otolaryngology, Eye and ENT Hospital, Fudan University, Shanghai, China; Research Units of New Technologies of Endoscopic Surgery in Skull Base Tumor, Chinese Academy of Medical Sciences, Beijing, China
- Hongmeng Yu: Department of Otolaryngology, Eye and ENT Hospital, Fudan University, Shanghai, China; Research Units of New Technologies of Endoscopic Surgery in Skull Base Tumor, Chinese Academy of Medical Sciences, Beijing, China
- Le Xie: School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
7
The intraoperative use of augmented and mixed reality technology to improve surgical outcomes: A systematic review. Int J Med Robot 2022; 18:e2450. DOI: 10.1002/rcs.2450.
8
Multicenter assessment of augmented reality registration methods for image-guided interventions. Radiol Med 2022; 127:857-865. DOI: 10.1007/s11547-022-01515-3.
9
Mikamo M, Furukawa R, Oka S, Kotachi T, Okamoto Y, Tanaka S, Sagawa R, Kawasaki H. 3D endoscope system with AR display superimposing dense and wide-angle-of-view 3D points obtained by using micro pattern projector. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:881-885. PMID: 36085656. DOI: 10.1109/embc48229.2022.9871060.
Abstract
In recent years, augmented reality (AR) technologies have become widespread for supporting various kinds of tasks by superimposing useful information on the user's view of the real environment. In endoscopic diagnosis, AR systems can be helpful for presenting information to endoscopists who have their hands full. In this paper, we propose a system that reconstructs 3D shapes from endoscope images and superimposes them onto the field of view, allowing the doctor to keep operating the endoscope while observing the patient's internal body with additional information. The proposed system is composed of a reconstruction module and a display module. The reconstruction module acquires 3D shapes using an active stereo method; in particular, we propose a novel projection pattern that can reconstruct wide areas of the endoscopic view. The display module shows the 3D shape obtained by the reconstruction module, superimposed on the field of view. In the experiments, we show that a wide range of dense 3D reconstructions is possible using the new projection patterns. In addition, we confirmed the usefulness of the AR system by interviewing medical doctors.
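The core geometric step behind such active-stereo reconstruction, triangulating a surface point where a camera ray meets the light plane cast by one projector stripe, can be sketched as follows. This is a generic textbook construction under idealized calibration, not the authors' pattern-decoding pipeline, and the numbers are made up:

```python
import numpy as np

def triangulate(cam_ray, plane_point, plane_normal):
    """Ray-plane intersection: the camera ray t*r (from the camera origin at 0)
    meets the projector light plane {p : n.(p - p0) = 0} at t = n.p0 / n.r."""
    t = (plane_normal @ plane_point) / (plane_normal @ cam_ray)
    return t * np.asarray(cam_ray, float)

# A stripe plane at x = 10 (e.g. cast by one decoded projector column) and a
# camera ray through the pixel where that stripe was observed:
point = triangulate(np.array([1.0, 0.0, 1.0]),
                    plane_point=np.array([10.0, 0.0, 0.0]),
                    plane_normal=np.array([1.0, 0.0, 0.0]))
print(point)  # the reconstructed 3D surface point
```

Repeating this for every decoded pixel-to-pattern correspondence yields the dense point cloud that the display module then superimposes on the endoscopic view.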
10
Thavarajasingam SG, Vardanyan R, Arjomandi Rad A, Thavarajasingam A, Khachikyan A, Mendoza N, Nair R, Vajkoczy P. The use of augmented reality in transsphenoidal surgery: A systematic review. Br J Neurosurg 2022; 36:457-471. PMID: 35393900. DOI: 10.1080/02688697.2022.2057435.
Abstract
BACKGROUND Augmented reality (AR) has become a promising tool in neurosurgery. It can minimise the anatomical challenges faced by conventional endoscopic or microscopic transsphenoidal reoperations and can assist in intraoperative guidance, preoperative planning, and surgical training. OBJECTIVES The aims of this systematic review are to describe, compare, and evaluate the use of AR in endoscopic and microscopic transsphenoidal surgery, incorporating the latest primary research. METHODS A systematic review was performed to explore and evaluate existing primary evidence for using AR in transsphenoidal surgery. A comprehensive search of MEDLINE and EMBASE was conducted from database inception to 11th August 2021 for primary data on the use of AR in microscopic and endoscopic endonasal skull base surgery. Additional articles were identified through searches on PubMed, Google Scholar, JSTOR, SCOPUS, Web of Science, Engineering Village, IEEE transactions, and HDAS. A synthesis without meta-analysis (SWiM) analysis was employed quantitatively and qualitatively on the impact of AR on landmark identification, intraoperative navigation, accuracy, time, surgeon experience, and patient outcomes. RESULTS In this systematic review, 17 studies were included in the final analysis. The main findings were that AR provides a convincing improvement to landmark identification, intraoperative navigation, and surgeon experience in transsphenoidal surgery, with a further positive effect on accuracy and time. It did not demonstrate a convincing positive effect on patient outcomes. No studies reported comparative mortalities, morbidities, or cost-benefit indications. CONCLUSION AR-guided transsphenoidal surgery, both endoscopic and microscopic, is associated with an overall improvement in the areas of intraoperative guidance and surgeon experience as compared with their conventional counterparts. However, literature on this area, particularly comparative data and evidence, is very limited. More studies with similar methodologies and quantitative outcomes are required to perform appropriate meta-analyses and to draw significant conclusions.
Affiliation(s)
- Robert Vardanyan: Faculty of Medicine, Imperial College London, London, United Kingdom
- Artur Khachikyan: Department of Neurology and Neurosurgery, National Institute of Health, Yerevan, Armenia
- Nigel Mendoza: Department of Neurosurgery, Imperial College NHS Healthcare Trust, London, United Kingdom
- Ramesh Nair: Department of Neurosurgery, Imperial College NHS Healthcare Trust, London, United Kingdom
- Peter Vajkoczy: Department of Neurosurgery, Charité - Universitätsmedizin Berlin, Berlin, Germany
11
Patient-specific virtual and mixed reality for immersive, experiential anatomy education and for surgical planning in temporal bone surgery. Auris Nasus Larynx 2021; 48:1081-1091. PMID: 34059399. DOI: 10.1016/j.anl.2021.03.009.
Abstract
OBJECTIVE The recent development of extended reality technology has attracted interest in medicine. We explored the use of patient-specific virtual reality (VR) and mixed reality (MR) temporal bone models in anatomical teaching, pre-operative surgical planning and intra-operative surgical referencing. METHODS VR and MR temporal bone models were created and visualized on a head-mounted display (HMD) and an MR headset, respectively, using a novel webservice that allows users to convert computed tomography images to VR and MR images without specific programming knowledge. Eleven otorhinolaryngology trainees and specialists were asked to manipulate the healthy VR temporal bone model and to assess its validity by filling out a questionnaire. Additionally, VR and MR pathological models of petrous apex cholesteatoma were utilized for surgical planning pre-operatively and for referring to the anatomy during surgery. RESULTS Most participants were favorable about the VR model and considered the HMD superior to a flat computer screen; 91% of the participants agreed or somewhat agreed that VR through an HMD is cost effective. In addition, the VR pathological model was used for planning and sharing the surgical approach during a pre-operative surgical conference. The MR headset was worn intra-operatively to clarify the relationship between the pathological lesion and vital anatomical structures. CONCLUSION Regardless of the participants' training level in otorhinolaryngology or VR experience, all participants agreed that the VR temporal bone model is useful for anatomical education. Furthermore, the creation of patient-specific VR and MR models using the webservice and their pre- and intra-operative use indicated their potential as an innovative adjunctive surgical instrument.
12
Abstract
PURPOSE OF REVIEW Image guided navigation has had significant impact in head and neck surgery, and has been most prolific in endonasal surgeries. Although conventional image guidance involves static computed tomography (CT) images obtained in the preoperative setting, surgical navigation technologies are evolving rapidly to incorporate both real-time data and bioinformation that allow for improved precision in surgical guidance. Given the rapid advances in technology, this article provides a timely review of current and developing techniques in surgical navigation for head and neck surgery. RECENT FINDINGS Current advances in cross-sectional image-guided surgery include fusion of CT with other imaging modalities (e.g., magnetic resonance imaging and positron emission tomography) as well as the uptake of intraoperative real-time 'on the table' imaging (e.g., cone-beam CT). These advances, together with the integration of virtual/augmented reality, enable potential enhancements in surgical navigation. In addition to the advances in radiological imaging, the development of optical modalities such as fluorescence and spectroscopy techniques further allows the assimilation of biological data to improve navigation, particularly for head and neck surgery. SUMMARY The steady development of radiological and optical imaging techniques shows great promise in changing the paradigm of head and neck surgery.
13
Khanwalkar AR, Welch KC. Updates in techniques for improved visualization in sinus surgery. Curr Opin Otolaryngol Head Neck Surg 2021; 29:9-20. PMID: 33315617. DOI: 10.1097/moo.0000000000000693.
Abstract
PURPOSE OF REVIEW Adequate visualization during endoscopic sinus surgery (ESS) is one of the most critical aspects of performing well tolerated and successful surgery. The topic of visualization encompasses a broad spectrum of preoperative and intraoperative manoeuvres the surgeon can perform that aid in the understanding of the patient's anatomy and in the delivery of efficient surgical care. RECENT FINDINGS Preoperative considerations to improve visualization include optimization of haemostasis through management of comorbidities (e.g. hypertension, coagulopathies), medication management (e.g. blood thinners) and systemic versus topical corticosteroids. New technologies allow preoperative visual mapping of surgical plans. Advances in knowledge of intraoperative anaesthesia have encouraged a move toward noninhaled anaesthetics to reduce bleeding. High definition cameras, angled endoscopes, 3D endoscopes and more recently augmented reality, image-guided surgery, and robotic surgery, represent the state of the art for high-quality visualization. Topical interventions, such as epinephrine, tranexamic acid and warm isotonic saline, can help to reduce bleeding and improve the operative field. Surgical manoeuvres, such as polyp debulking, septoplasty, carefully controlled tissue manipulation and a consistent repeatable approach remain fundamental to appropriate intraoperative surgical visualization. SUMMARY This chapter delineates medical, technical and technological means - preoperatively and intraoperatively - to achieve optimized visualization of the surgical field in ESS.
Affiliation(s)
- Ashoke R Khanwalkar: Department of Otolaryngology - Head and Neck Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA