1
Han Z, Dou Q. A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality. Comput Assist Surg (Abingdon) 2024; 29:2357164. [PMID: 39253945] [DOI: 10.1080/24699322.2024.2357164]
Abstract
Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved by superimposing preoperative organ models onto the actual anatomy. Challenges arise from the dynamic deformation of organs during surgery, which makes preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling intraoperative deformation to accurately align the preoperative organ model with the intraoperative anatomy is indispensable. Despite the variety of methods proposed to model intraoperative organ deformation, few literature reviews systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive, technically oriented overview of methods for modeling intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery and to discuss potential topics for future advancements.
Affiliation(s)
- Zheng Han
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
2
Yuan Y, Wu Y, Fan X, Gong M, Ma W, Miao Q. EGST: Enhanced Geometric Structure Transformer for Point Cloud Registration. IEEE Trans Vis Comput Graph 2024; 30:6222-6234. [PMID: 37971922] [DOI: 10.1109/tvcg.2023.3329578]
Abstract
We explore the effect of geometric structure descriptors on extracting reliable correspondences and obtaining accurate registration for point cloud registration. The point cloud registration task involves estimating a rigid transformation between unorganized point clouds, so it is crucial to capture the contextual features of the geometric structure in the point cloud. Recent coordinates-only methods ignore much of the geometric information in the point cloud, which weakens their ability to express the global context. We propose the Enhanced Geometric Structure Transformer to learn enhanced contextual features of the geometric structure in point clouds and to model the structural consistency between point clouds for extracting reliable correspondences; it encodes three explicit enhanced geometric structures and provides significant cues for point cloud registration. More importantly, we report empirical results showing that the Enhanced Geometric Structure Transformer can learn meaningful geometric structure features using none of the following: (i) explicit positional embeddings, (ii) additional feature exchange modules such as cross-attention, which simplifies the network structure compared with a plain Transformer. Extensive experiments on synthetic and real-world datasets illustrate that our method achieves competitive results.
3
Prasad K, Fassler C, Miller A, Aweeda M, Pruthi S, Fusco JC, Daniel B, Miga M, Wu JY, Topf MC. More than meets the eye: Augmented reality in surgical oncology. J Surg Oncol 2024; 130:405-418. [PMID: 39155686] [DOI: 10.1002/jso.27790]
Abstract
BACKGROUND AND OBJECTIVES In the field of surgical oncology, there has been a desire for innovative techniques to improve tumor visualization, resection, and patient outcomes. Augmented reality (AR) technology superimposes digital content onto the real-world environment, enhancing the user's experience by blending digital and physical elements. A thorough examination of AR technology in surgical oncology has yet to be performed. METHODS A scoping review of intraoperative AR in surgical oncology was conducted according to the guidelines and recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) framework. All original articles examining the use of intraoperative AR during the surgical management of cancer were included. Exclusion criteria included virtual reality applications only, preoperative use only, fluorescence, AR not specific to surgical oncology, and study design (reviews, commentaries, abstracts). RESULTS A total of 2735 articles were identified, of which 83 were included. Most studies (52) were performed on animal or phantom models, while the remainder included patients. A total of 1112 intraoperative AR surgical cases were performed across the studies. The most common anatomic site was the brain (20 articles), followed by liver (16), renal (9), and head and neck (8). AR was most often used for intraoperative navigation or anatomic visualization of tumors or critical structures, but it was also used to identify osteotomy or craniotomy planes. CONCLUSIONS AR technology has been applied across the field of surgical oncology to aid in the localization and resection of tumors.
Affiliation(s)
- Kavita Prasad
- Department of Otolaryngology-Head & Neck Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Carly Fassler
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Alexis Miller
- Department of Otolaryngology-Head & Neck Surgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Marina Aweeda
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Sumit Pruthi
- Department of Radiology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Joseph C Fusco
- Department of Pediatric Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Bruce Daniel
- Department of Radiology, Stanford Health Care, Palo Alto, California, USA
- Michael Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Jie Ying Wu
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Michael C Topf
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
4
Al-Naser Y, Halka F, Alshadeedi F, Albahhar M, Athreya S. The applications of augmented reality in image-guided tumor ablations: A scoping review. J Med Imaging Radiat Sci 2024; 55:125-133. [PMID: 38290953] [DOI: 10.1016/j.jmir.2023.12.006]
Abstract
BACKGROUND Interventional radiology employs minimally invasive image-guided procedures for diagnosing and treating various conditions. Among these procedures, alcohol and thermal ablation techniques have shown high efficacy. However, these procedures present challenges such as increased procedure time, radiation dose, and risk of tissue injury. This scoping review aims to explore how augmented reality (AR) can mitigate these challenges and improve the accuracy, precision, and efficiency of image-guided tumor ablation while improving patient outcomes. METHODS A scoping review of the literature was performed based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guideline to identify published literature investigating AR in image-guided tumor ablations. We conducted our electronic searches using PubMed, Scopus, Web of Science, and CINAHL from inception to April 27th, 2023. The following Boolean terms were used for the search: ("augmented reality" OR "AR" OR "navigation system" OR "head mounted device" OR "HMD") AND ("tumor ablation" OR "radiofrequency tumor ablation" OR "microwave tumor ablation" OR "cryoablation"). We considered articles eligible for our scoping review if they met the following conditions: (1) published in English only, (2) focused on image-guided tumor ablations, (3) incorporated AR techniques in their methodology, (4) employed an aspect of AR in image-guided tumor ablations, and (5) exclusively involved human subjects. Publications were excluded if there was no mention of applying AR, or if the study investigated interventions other than image-guided tumor ablations. RESULTS Our initial database search yielded 1,676 articles. Of those, 409 were removed as duplicates, and 1,243 were excluded during title and abstract screening. Twenty-four studies were assessed for eligibility at the full-text stage, of which 19 were excluded, resulting in a final selection of five studies that satisfied our inclusion criteria. The studies aimed to assess AR's efficacy in tumor ablations. Two studies compared an optical-based AR system with CT guidance. Two studies used a head-mounted AR device, while one used a dual-camera setup. Various tumor types were examined, including bone, abdominal soft tissue, breast, hepatic, renal, colorectal, and lung lesions. All studies showed positive results, including reduced radiation exposure, shorter procedures, improved navigation, and targeting assistance. CONCLUSION AR systems enhance image-guided tumor ablations by improving the accuracy of ablation probe placement and increasing efficiency. They offer real-time guidance, enhanced visualization, and improved navigation, resulting in optimal needle placement. AR reduces radiation exposure and shortens procedure times compared to traditional CT-guided techniques. However, limitations such as small sample sizes and technical challenges require further research. Despite this, AR shows potential benefits, and larger, more diverse studies are needed for validation.
Affiliation(s)
- Yousif Al-Naser
- Medical Radiation Science, McMaster University, Hamilton, ON, Canada; Department of Diagnostic Imaging, Trillium Health Partners, Mississauga, ON, Canada.
- Mahmood Albahhar
- Department of Medical Imaging, Niagara Health, St Catharines, ON, Canada; Department of Radiology, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Sriharsha Athreya
- Department of Radiology, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada; Department of Diagnostic Imaging, Hamilton Health Science, Hamilton, ON, Canada
5
Lee KH, Li M, Varble N, Negussie AH, Kassin MT, Arrichiello A, Carrafiello G, Hazen LA, Wakim PG, Li X, Xu S, Wood BJ. Smartphone Augmented Reality Outperforms Conventional CT Guidance for Composite Ablation Margins in Phantom Models. J Vasc Interv Radiol 2024; 35:452-461.e3. [PMID: 37852601] [DOI: 10.1016/j.jvir.2023.10.005]
Abstract
PURPOSE To develop and evaluate a smartphone augmented reality (AR) system for ablation of a large 50-mm liver tumor, with treatment planning for composite overlapping ablation zones. MATERIALS AND METHODS A smartphone AR application was developed to display the tumor, probe, projected probe paths, ablated zones, and real-time percentage of the ablated target tumor volume. Fiducial markers were attached to the phantoms and the ablation probe hub for tracking. The system was evaluated with tissue-mimicking thermochromic phantoms and gel phantoms. Four interventional radiologists each performed 2 trials of 3 probe insertions per trial, using AR guidance versus computed tomography (CT) guidance, in 2 gel phantoms. Insertion points and optimal probe paths were predetermined. On Gel Phantom 2, serial ablated zones were saved and continuously displayed after each probe placement/adjustment, enabling feedback and iterative planning. The percentages of tumor ablated for AR guidance versus CT guidance, and with versus without display of recorded ablated zones, were compared among interventional radiologists with pairwise t-tests. RESULTS The mean percentages of tumor ablated were 36% ± 7 for CT freehand guidance and 47% ± 4 for AR guidance (P = .004). The mean composite percentages of tumor ablated for AR guidance were 43% ± 1 without and 50% ± 2 with display of the recorded ablation zones (P = .033). There was no strong correlation between AR-guided percentage of ablation and years of experience (r < 0.5), whereas there was a strong correlation between CT-guided percentage of ablation and years of experience (r > 0.9). CONCLUSIONS A smartphone AR guidance system for dynamic iterative ablation of a large 50-mm liver tumor was accurate, performed better than conventional CT guidance, especially for less experienced interventional radiologists, and promoted more standardized performance across experience levels.
Affiliation(s)
- Katerina H Lee
- McGovern Medical School at UTHealth, Houston, Texas; Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
- Ming Li
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
- Nicole Varble
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland; Philips Research North America, Cambridge, Massachusetts
- Ayele H Negussie
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
- Michael T Kassin
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
- Antonio Arrichiello
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
- Gianpaolo Carrafiello
- Department of Radiology, Foundation IRCCS Ca' Granda Ospedale Maggiore Policlinico, University of Milan, Milan, Italy
- Lindsey A Hazen
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
- Paul G Wakim
- Biostatistics and Clinical Epidemiology Service, National Institutes of Health, Bethesda, Maryland
- Xiaobai Li
- Biostatistics and Clinical Epidemiology Service, National Institutes of Health, Bethesda, Maryland
- Sheng Xu
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
- Bradford J Wood
- Center for Interventional Oncology, National Institutes of Health, Bethesda, Maryland
6
Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. [PMID: 38139718] [PMCID: PMC10748263] [DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed new methods and achieved breakthroughs in functionality. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could substantially enhance their performance in IGS. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with a new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
7
Chen F, Chen L, Xu T, Ye H, Liao H, Zhang D. Precise angle estimation of capsule robot in ultrasound using heatmap guided two-stage network. Comput Methods Programs Biomed 2023; 240:107605. [PMID: 37390795] [DOI: 10.1016/j.cmpb.2023.107605]
Abstract
PURPOSE A capsule robot can be controlled inside the gastrointestinal (GI) tract by an external permanent magnet outside the human body to perform non-invasive diagnosis and treatment. Locomotion control of the capsule robot relies on precise angle feedback, which can be obtained by ultrasound imaging. However, ultrasound-based angle estimation of the capsule robot is hampered by gastric wall tissue and the mixture of air, water, and digestive matter present in the stomach. METHODS To tackle these issues, we introduce a heatmap-guided two-stage network to detect the position and estimate the angle of the capsule robot in ultrasound images. Specifically, this network incorporates a probability distribution module and skeleton-extraction-based angle calculation to obtain accurate estimates of the capsule robot's position and angle. RESULTS Extensive experiments were conducted on an ultrasound image dataset of the capsule robot within a porcine stomach. Empirical results showed that our method achieved a small position center error of 0.48 mm and a high angle estimation accuracy of 96.32%. CONCLUSION Our method can provide precise angle feedback for locomotion control of the capsule robot.
Affiliation(s)
- Fang Chen
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Lingyu Chen
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Tianze Xu
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Haoran Ye
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, China
- Daoqiang Zhang
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
8
Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023; 68. [PMID: 36580681] [DOI: 10.1088/1361-6560/acaf23]
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. The types of AR visualization are divided into two categories: in situ visualization and non-in situ visualization. The rendered content of AR visualization varies widely. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in surgical fields. However, most AR applications have been evaluated through model experiments and animal experiments, with relatively few clinical experiments, indicating that current AR navigation methods are still at an early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as the future development trend. Despite the fact that AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
9
3D reconstruction for maxillary anterior tooth crown based on shape and pose estimation networks. Int J Comput Assist Radiol Surg 2023. [PMID: 36754949] [DOI: 10.1007/s11548-023-02841-1]
Abstract
PURPOSE The design of a maxillary anterior tooth crown is crucial to post-treatment aesthetic appearance. Currently, the design is performed manually or by semi-automatic methods, both of which are time-consuming. Automatic methods could improve efficiency, but existing automatic methods ignore the relationships among crowns and are primarily used for occlusal surface reconstruction. In this study, the authors propose a novel method for automatically reconstructing a three-dimensional model of the maxillary anterior tooth crown. METHODS A pose estimation network (PEN) and a shape estimation network (SEN) are developed to jointly estimate the crown point cloud. PEN is a regression network used to estimate the crown pose, and SEN is based on an encoder-decoder architecture and used to estimate the initial crown point cloud. First, SEN adopts a transformer encoder to model the shape relationships among crowns so that the shape of the reconstructed point cloud is precise. Second, the initial point cloud is transformed according to the estimated pose. Finally, an iterative method is used to form the crown mesh model from the point cloud. RESULTS The proposed method is evaluated on a dataset of 600 cases. Both SEN and PEN converge within 1000 epochs. The average deviation between the reconstructed point cloud and the ground-truth point cloud is 0.22 mm, and the average deviation between the reconstructed crown mesh model and the ground-truth crown model is 0.13 mm. CONCLUSION The results show that the proposed method can automatically and accurately reconstruct a three-dimensional model of the missing maxillary anterior tooth crown, indicating promising application prospects. Furthermore, reconstruction takes less than 11 s per case, demonstrating improved work efficiency.