1. Dong Q, Xiu W, Tang B, Hiyama E, Austin MT, Wu Y, Yuan X, Zhu C, Liu C, Ishibashi H, Tappa KK, Wang H, Sun C, Ma Y, Xi H, Wang J, Zhan J, Ihn K, Shimada M, Zhang M, Brindle ME, Thomas PB, Fumino S, Liu T, Lobe T, Rolle U, Wang S, Zhai X, Koga Y, Kinoshita Y, Bai Y, Li Z, Wen Z, Pan W, Sutyak KM, Giulianotti PC. International multidisciplinary consensus recommendations on clinical application of three-dimensional visualization in precision surgery for pediatric liver tumors. HPB (Oxford) 2025:S1365-182X(25)00082-6. [PMID: 40133134] [DOI: 10.1016/j.hpb.2025.03.007]
Abstract
BACKGROUND Pediatric liver tumors are predominantly primary malignant tumors, and complete tumor resection with sufficient preservation of liver tissue is crucial for improving prognosis. However, due to the delicate anatomical structure of the pediatric liver and the relatively large size of the tumors, especially in difficult cases, the surgical challenges are substantial. While precision liver surgery is widely applied in clinical practice, pediatric cases require more customized approaches. The application of three-dimensional (3D) visualization technology is crucial for enhancing surgical accuracy, allowing for precise preoperative planning and intraoperative guidance. METHODS This consensus was collaboratively developed by 36 experts from eight countries, using Glaser's state-of-the-art method to review and refine the draft. RESULTS The final document comprises 15 international multidisciplinary consensus recommendations on the clinical application of 3D visualization in precision surgery for pediatric liver tumors. CONCLUSION This consensus will standardize the application of 3D visualization technology in precision surgery for pediatric liver tumors to improve outcomes and reduce risks.
Affiliation(s)
- Qian Dong: Department of Pediatric Surgery, Shandong Provincial Key Laboratory of Digital Medicine and Computer-assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Wenli Xiu: Department of Pediatric Surgery, Shandong Provincial Key Laboratory of Digital Medicine and Computer-assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Benjie Tang: Cuschieri Skills Centre, University of Dundee, Dundee, UK
- Eiso Hiyama: Department of Pediatric Surgery, Hiroshima University Hospital, Natural Science Center for Basic Research and Development (N-BARD), Hiroshima University, Hiroshima, Japan
- Mary T Austin: Department of Surgical Oncology, The University of Texas MD Anderson Cancer Center, TX, USA
- Yeming Wu: Department of Pediatric Surgery, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Yuan: Department of Pediatric Hematology and Oncology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chengzhan Zhu: Department of Hepatobiliary Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Chengli Liu: Department of Hepatobiliary Surgery, Air Force Medical Center of PLA, Beijing, China
- Hiroki Ishibashi: Department of Pediatric Surgery & Pediatric Endoscopic Surgery, Tokushima University Hospital, Tokushima, Japan
- Karthik K Tappa: The University of Texas MD Anderson Cancer Center, TX, USA
- Huanmin Wang: Department of Pediatric Surgery, Beijing Children's Hospital, Beijing, China
- Chuandong Sun: Department of Hepatobiliary Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- YunTao Ma: Department of General Surgery, Gansu Provincial Hospital, Lanzhou, Gansu, China
- Hongwei Xi: Department of Pediatric Surgery, Children's Hospital of Shanxi, Shanxi, China
- Jian Wang: Department of Surgery, Children's Hospital of Soochow University, Jiangsu, China
- Jianghua Zhan: Department of Pediatric Surgery, Tianjin Children's Hospital, Tianjin, China
- Kyong Ihn: Division of Pediatric Surgery, Severance Children's Hospital, Department of Surgery, Yonsei University College of Medicine, Severance Hospital, Seoul, Republic of Korea
- Mitsuo Shimada: Department of Surgery, Tokushima University, Tokushima, Japan
- Mingman Zhang: Department of Pediatric Surgery, Children's Hospital of Chongqing Medical University, Chongqing, China
- Mary E Brindle: Departments of Surgery and Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
- Patrick B Thomas: UNMC College of Medicine, University of Nebraska Medical Center, Nebraska, USA
- Shigehisa Fumino: Department of Pediatric Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Tao Liu: Gene Dysregulation Group, Children's Cancer Institute Australia, University of New South Wales, Sydney, Australia
- Thom Lobe: Department of Surgery, University of Illinois at Chicago (UIC), Chicago, USA
- Udo Rolle: University Hospital Frankfurt/M, Frankfurt, Germany
- Shan Wang: Department of Surgical Oncology, Children's Hospital Affiliated to Chongqing Medical University, Chongqing, China
- Xiaowen Zhai: Department of Pediatric Hematology and Oncology, Children's Hospital of Fudan University, Shanghai, China
- Yoshinori Koga: Department of Pediatric Surgery, Kurume University School of Medicine, Fukuoka, Japan
- Yoshiaki Kinoshita: Department of Pediatric Surgery, Niigata University Graduate School of Medical and Dental Sciences, Niigata City, Japan
- Yuzuo Bai: Department of Pediatric Surgery, Shengjing Hospital Affiliated to China Medical University, Liaoning, China
- Zhaozhu Li: Department of Pediatric Surgery, The Sixth Affiliated Hospital of Harbin Medical University, Heilongjiang, China
- Zhe Wen: Department of Pediatric Surgery, Guangzhou Women and Children's Medical Center, Guangzhou, China
- Weikang Pan: Department of Surgery, Boston Children's Hospital, Boston, USA
- Krysta M Sutyak: Department of Pediatric Surgery, University of Texas Health Science Center at Houston, Center for Surgical Trials and Evidence-Based Practice (CSTEP), Houston, TX, USA
- Pier C Giulianotti: Division of Minimally Invasive, General & Robotic Surgery, University of Illinois at Chicago, Chicago, USA
2. Suchow J, McDowell M, Huang J, Haberman J. A reflection on faces seen under mirror reversal. Perception 2024;53:763-774. [PMID: 39351699] [DOI: 10.1177/03010066241279606]
Abstract
Much of our visual experience of faces, including our own, is mediated by technology, for example when a digital photo depicts a mirror reversal of reality. How does this difference in visual experience affect judgments about appearance? Here, we asked participants to view their likeness in photographs that were reversed (as when viewed in a mirror) or not reversed (as when viewed directly). Observers also perceptually adapted (or not) to the reversed or non-reversed images in a 2 × 2 design. Observers then rated how much each photograph resembled them and how much they liked their appearance in the photograph, later repeating the procedure for images of close friends. We found that non-reversed images are perceived as more "unlike" one's self and less pleasant than reversed images; the pattern disappears when evaluating close friends, for whom the non-reversed image is the more familiar one, with adaptation having asymmetric effects. Experiment 1A was fully replicated seven years later. These results are likely driven by a strong, albeit malleable, visual representation of self, born of technology-mediated experience and activated when an unfamiliar perspective exposes facial asymmetries. We conclude by considering the downstream effects of these preferences on consumer and social behavior.
Affiliation(s)
- Jordan Suchow: Stevens Institute of Technology, New Jersey, United States
- Malerie McDowell: University of Alabama at Birmingham School of Social and Behavioral Sciences, Georgia, United States
3. Preim B, Meuschke M, Weis V. A Survey of Medical Visualization Through the Lens of Metaphors. IEEE Trans Vis Comput Graph 2024;30:6639-6664. [PMID: 37934633] [DOI: 10.1109/tvcg.2023.3330546]
Abstract
We provide an overview of metaphors that have been used in medical visualization and related user interfaces. Metaphors are employed to translate concepts from a source domain to a target domain. The survey is grounded in a discussion of metaphor-based design involving the identification and reflection of candidate metaphors. We consider metaphors that have a source domain in one branch of medicine, e.g., the virtual mirror that solves problems in orthopedics and laparoscopy with a mirror that resembles the dentist's mirror. Other metaphors employ the physical world as the source domain, such as crepuscular rays that inspire a solution for access planning in tumor therapy. Aviation is another source of inspiration, leading to metaphors such as surgical cockpits, surgical control towers, and surgery navigation according to an instrument flight. This paper should raise awareness of metaphors and their potential to focus the design of computer-assisted systems on useful features and a positive user experience. Limitations and potential drawbacks of a metaphor-based user interface design for medical applications are also considered.
4. Lee B, Sedlmair M, Schmalstieg D. Design Patterns for Situated Visualization in Augmented Reality. IEEE Trans Vis Comput Graph 2024;30:1324-1335. [PMID: 37883275] [DOI: 10.1109/tvcg.2023.3327398]
Abstract
Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns which summarize common approaches of visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows.
5. Usevitch DE, Bronheim RS, Reyes MC, Babilonia C, Margalit A, Jain A, Armand M. Review of Enhanced Handheld Surgical Drills. Crit Rev Biomed Eng 2023;51:29-50. [PMID: 37824333] [PMCID: PMC10874117] [DOI: 10.1615/critrevbiomedeng.2023049106]
Abstract
The handheld drill has been used as a conventional surgical tool for centuries. Alongside the recent successes of surgical robots, the development of new and enhanced medical drills has improved surgeon ability without requiring the high cost and time-consuming setup that plague medical robot systems. This work provides an overview of enhanced handheld surgical drill research, focusing on systems that include some form of image guidance and do not require additional hardware that physically supports or guides drilling. Systems are reviewed by main contribution, divided into audio-, visual-, and hardware-enhanced drills. A vision for future work to enhance handheld drilling systems is also discussed.
Affiliation(s)
- David E. Usevitch: Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, United States; Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Rachel S. Bronheim: Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Miguel C. Reyes: Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Carlos Babilonia: Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Adam Margalit: Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Amit Jain: Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Mehran Armand: Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, United States; Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
6. In-situ or side-by-side? A user study on augmented reality maintenance instructions in blind areas. Comput Ind 2023. [DOI: 10.1016/j.compind.2022.103795]
7. Navab N, Martin-Gomez A, Seibold M, Sommersperger M, Song T, Winkler A, Yu K, Eck U. Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process. J Imaging 2022;9:4. [PMID: 36662102] [PMCID: PMC9866223] [DOI: 10.3390/jimaging9010004]
Abstract
Three decades after the first set of work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures, deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance over the last decades, to identify the corresponding basic components. In this regard, the paper does not discuss all past or existing solutions but only aims at defining the principal components, discussing the particular domain modeling for MAR and its design-development-validation process, and providing exemplary cases from past in-house developments of such solutions.
Affiliation(s)
- Nassir Navab: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alejandro Martin-Gomez: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany; Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Matthias Seibold: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany; Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, CH-8008 Zurich, Switzerland
- Michael Sommersperger: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Tianyu Song: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alexander Winkler: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany; Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University Hospital, DE-80336 Munich, Germany
- Kevin Yu: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany; medPhoton GmbH, AT-5020 Salzburg, Austria
- Ulrich Eck: Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
8. Schütz L, Weber E, Niu W, Daniel B, McNab J, Navab N, Leuze C. Audiovisual augmentation for coil positioning in transcranial magnetic stimulation. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2154277]
Affiliation(s)
- Laura Schütz: Wu Tsai Visualization Lab, Stanford University, Stanford, California, USA; Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Munich, Germany
- Emmanuelle Weber: Wu Tsai Visualization Lab, Stanford University, Stanford, California, USA; McNab Lab, Department of Radiology, Stanford University, Stanford, CA, USA
- Wally Niu: Wu Tsai Visualization Lab, Stanford University, Stanford, California, USA; Incubator for Medical Mixed and Extended Reality at Stanford, Department of Radiology, Stanford University, Stanford, CA, USA
- Bruce Daniel: Incubator for Medical Mixed and Extended Reality at Stanford, Department of Radiology, Stanford University, Stanford, CA, USA
- Jennifer McNab: Wu Tsai Visualization Lab, Stanford University, Stanford, California, USA; McNab Lab, Department of Radiology, Stanford University, Stanford, CA, USA
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Munich, Germany
- Christoph Leuze: Wu Tsai Visualization Lab, Stanford University, Stanford, California, USA; Incubator for Medical Mixed and Extended Reality at Stanford, Department of Radiology, Stanford University, Stanford, CA, USA
9. Yu K, Zacharis K, Eck U, Navab N. Projective Bisector Mirror (PBM): Concept and Rationale. IEEE Trans Vis Comput Graph 2022;28:3694-3704. [PMID: 36048998] [DOI: 10.1109/tvcg.2022.3203108]
Abstract
Our world is full of cameras, whether they are installed in the environment or integrated into mobile devices such as mobile phones or head-mounted displays. Displaying external camera views in our egocentric view with a picture-in-picture approach allows us to see their view; however, it does not allow us to correlate their viewpoint with our perceived reality. We introduce Projective Bisector Mirrors for visualizing a camera view comprehensibly in the egocentric view of an observer, using the metaphor of a virtual mirror. Our concept projects the image of a capturing camera onto the bisecting plane between the capturing and the observing camera. We present extensive mathematical descriptions of this novel paradigm for multi-view visualization, discuss the effects of tracking errors, and provide concrete implementations for multiple exemplary use cases.
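The underlying construction is elementary geometry: the virtual mirror is the plane that perpendicularly bisects the segment joining the two camera centers, so reflecting across it maps one camera onto the other. A minimal numerical sketch of that plane and reflection (our own illustration, not the paper's code; all names are ours):

```python
import numpy as np

def bisector_plane(c_capture, c_observer):
    """Plane bisecting the segment between two camera centers.

    Returns (n, d) such that points x on the plane satisfy n.x + d = 0,
    with n the unit normal pointing from the capture toward the observer.
    """
    c_capture = np.asarray(c_capture, dtype=float)
    c_observer = np.asarray(c_observer, dtype=float)
    n = c_observer - c_capture
    n /= np.linalg.norm(n)              # unit normal of the mirror plane
    midpoint = 0.5 * (c_capture + c_observer)
    d = -np.dot(n, midpoint)            # plane passes through the midpoint
    return n, d

def reflect_point(p, n, d):
    """Mirror a 3D point across the plane (n, d); by construction the
    bisector plane maps the capture camera center onto the observer."""
    p = np.asarray(p, dtype=float)
    return p - 2.0 * (np.dot(n, p) + d) * n

# Example: the observer at the origin, an external camera 2 m away.
n, d = bisector_plane([2.0, 0.0, 0.0], [0.0, 0.0, 0.0])
assert np.allclose(reflect_point([2.0, 0.0, 0.0], n, d), [0.0, 0.0, 0.0])
```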
10. Mishra R, Narayanan MK, Umana GE, Montemurro N, Chaurasia B, Deora H. Virtual Reality in Neurosurgery: Beyond Neurosurgical Planning. Int J Environ Res Public Health 2022;19:1719. [PMID: 35162742] [PMCID: PMC8835688] [DOI: 10.3390/ijerph19031719]
Abstract
BACKGROUND While several publications have focused on the intuitive role of augmented reality (AR) and virtual reality (VR) in neurosurgical planning, the aim of this review was to explore other avenues where these technologies have significant utility and applicability. METHODS This review was conducted by searching PubMed, PubMed Central, Google Scholar, the Scopus database, the Web of Science Core Collection database, and the SciELO citation index, from 1989 to 2021. An example of a search strategy used in PubMed Central is: "Virtual reality" [All Fields] AND ("neurosurgical procedures" [MeSH Terms] OR ("neurosurgical" [All Fields] AND "procedures" [All Fields]) OR "neurosurgical procedures" [All Fields] OR "neurosurgery" [All Fields] OR "neurosurgery" [MeSH Terms]). Using this search strategy, we identified 487 (PubMed), 1097 (PubMed Central), and 275 citations (Web of Science Core Collection database). RESULTS Articles were found and reviewed showing numerous applications of VR/AR in neurosurgery. These applications included their utility as a supplement and augment for neuronavigation in the fields of diagnosis for complex vascular interventions, spine deformity correction, resident training, procedural practice, pain management, and rehabilitation of neurosurgical patients. These technologies have also shown promise in other areas of neurosurgery, such as consent taking, training of ancillary personnel, and improving patient comfort during procedures, as well as a tool for training neurosurgeons in other advancements in the field, such as robotic neurosurgery. CONCLUSIONS We present the first review of the immense possibilities of VR in neurosurgery beyond merely planning for surgical procedures. The importance of VR and AR, especially for "social distancing" in neurosurgery training, for economically disadvantaged sections, for prevention of medicolegal claims, and in pain management and rehabilitation, is promising and warrants further research.
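As an aside, the quoted PubMed strategy can be reproduced programmatically. A hedged sketch against NCBI's public E-utilities esearch endpoint (counts will differ from the paper's as the database grows; parameter choices below are ours):

```python
import json
import urllib.parse
import urllib.request

# The Boolean strategy quoted in the abstract, reusable as an E-utilities query.
TERM = ('"Virtual reality"[All Fields] AND ("neurosurgical procedures"[MeSH Terms] '
        'OR ("neurosurgical"[All Fields] AND "procedures"[All Fields]) '
        'OR "neurosurgical procedures"[All Fields] OR "neurosurgery"[All Fields] '
        'OR "neurosurgery"[MeSH Terms])')

url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({
           "db": "pubmed",
           "term": TERM,
           "mindate": "1989", "maxdate": "2021", "datetype": "pdat",
           "retmode": "json", "retmax": "20",
       }))

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print(result["count"])    # total matching citations
print(result["idlist"])   # first 20 PMIDs
```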
Affiliation(s)
- Rakesh Mishra: Department of Neurosurgery, Institute of Medical Sciences, Banaras Hindu University, Varanasi 221005, India
- Giuseppe E. Umana: Trauma and Gamma-Knife Center, Department of Neurosurgery, Cannizzaro Hospital, 95100 Catania, Italy
- Nicola Montemurro: Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), University of Pisa, 56100 Pisa, Italy
- Bipin Chaurasia: Department of Neurosurgery, Bhawani Hospital, Birgunj 44300, Nepal
- Harsh Deora: Department of Neurosurgery, National Institute of Mental Health and Neurosciences, Bengaluru 560029, India
11. Wendler T, van Leeuwen FWB, Navab N, van Oosterom MN. How molecular imaging will enable robotic precision surgery: The role of artificial intelligence, augmented reality, and navigation. Eur J Nucl Med Mol Imaging 2021;48:4201-4224. [PMID: 34185136] [PMCID: PMC8566413] [DOI: 10.1007/s00259-021-05445-6]
Abstract
Molecular imaging is one of the pillars of precision surgery. Its applications range from early diagnostics to therapy planning, execution, and the accurate assessment of outcomes. In particular, molecular imaging solutions are in high demand in minimally invasive surgical strategies, such as the substantially growing field of robotic surgery. This review aims at connecting the molecular imaging and nuclear medicine community to the rapidly expanding armory of surgical medical devices. Such devices entail technologies ranging from artificial intelligence and computer-aided visualization technologies (software) to innovative molecular imaging modalities and surgical navigation (hardware). We discuss technologies based on their role at different steps of the surgical workflow, i.e., from surgical decision-making and planning, through target localization and excision guidance, to (back-table) surgical verification. This provides a glimpse of how innovations from the technology fields can realize an exciting future for the molecular imaging and surgery communities.
Affiliation(s)
- Thomas Wendler: Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Fijs W. B. van Leeuwen: Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands; Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Orsi Academy, Melle, Belgium
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany; Chair for Computer Aided Medical Procedures, Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Matthias N. van Oosterom: Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands; Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
12. Augmented and virtual reality in spine surgery, current applications and future potentials. Spine J 2021;21:1617-1625. [PMID: 33774210] [DOI: 10.1016/j.spinee.2021.03.018]
Abstract
BACKGROUND CONTEXT The field of artificial intelligence (AI) is rapidly advancing, especially with recent improvements in deep learning (DL) techniques. Augmented reality (AR) and virtual reality (VR) are finding their place in healthcare, and spine surgery is no exception. The unique capabilities and advantages of AR and VR devices include their low cost, flexible integration with other technologies, user-friendly features, and their application in navigation systems, which makes them beneficial across different aspects of spine surgery. Despite the use of AR for pedicle screw placement, targeted cervical foraminotomy, bone biopsy, osteotomy planning, and percutaneous intervention, the current applications of AR and VR in spine surgery remain limited. PURPOSE The primary goal of this study was to provide spine surgeons and clinical researchers with general information about the current applications, future potential, and accessibility of AR and VR systems in spine surgery. STUDY DESIGN/SETTING We reviewed the titles of more than 250 journal papers from Google Scholar and PubMed with the search words augmented reality, virtual reality, spine surgery, and orthopaedic, out of which 89 related papers were selected for abstract review. Finally, the full text of 67 papers was analyzed and reviewed. METHODS The papers were divided into four groups: technological papers, applications in surgery, applications in spine education and training, and general applications in orthopaedics. A team of two reviewers performed the paper reviews and a thorough web search to ensure that the most updated state of the art in each of the four groups was captured in the review. RESULTS In this review we discuss the current state of the art in AR and VR hardware, their preoperative applications, and their surgical applications in spine surgery. Finally, we discuss the future potential of AR and VR and their integration with AI, robotic surgery, gaming, and wearables. CONCLUSIONS AR and VR are promising technologies that will soon become part of the standard of care in spine surgery.
13. Dennler C, Safa NA, Bauer DE, Wanivenhaus F, Liebmann F, Götschi T, Farshad M. Augmented Reality Navigated Sacral-Alar-Iliac Screw Insertion. Int J Spine Surg 2021;15:161-168. [PMID: 33900970] [DOI: 10.14444/8021]
Abstract
BACKGROUND Sacral-alar-iliac (SAI) screws are increasingly used for lumbo-pelvic fixation procedures. Insertion of SAI screws is technically challenging, and surgeons often rely on costly and time-consuming navigation systems. We investigated the accuracy and precision of an augmented reality (AR)-based, commercially available head-mounted device requiring minimal infrastructure. METHODS Two surgeons drilled pilot holes for 80 SAI screw trajectories in a pelvic sawbone model, randomly either freehand (FH) without any kind of navigation or with AR navigation. The number of primary pilot hole perforations, simulated screw perforation, minimal axis/outer cortical wall distance, true sagittal cranio-caudal inclination angle (tSCCIA), true axial medio-lateral angle, and maximal screw length (MSL) were measured and compared to predefined optimal values. RESULTS In total, 1/40 (2.5%) of AR-navigated screw hole trajectories showed a perforation before passing the inferior gluteal line, compared to 24/40 (60%) of FH screw hole trajectories (P < .05). The differences between FH- and AR-guided holes compared to optimal values were significant for tSCCIA with -10.8° ± 11.77° and for MSL with 65.29 ± 15 mm vs 55.04 ± 6.76 mm (P = .001). CONCLUSIONS In this study, the additional anatomical information provided by the AR headset and the superimposed operative plan improved the precision of drilling pilot holes for SAI screws in a laboratory setting compared to the conventional FH technique. Further technical development and validation studies are currently being performed to investigate potential clinical benefits of the AR-based navigation approach described here. LEVEL OF EVIDENCE 4.
Affiliation(s)
- Cyrill Dennler: Department of Orthopedics, University Hospital Balgrist, University of Zürich, Zürich, Switzerland
- Nico Akhavan Safa: Department of Orthopedics, University Hospital Balgrist, University of Zürich, Zürich, Switzerland
- David Ephraim Bauer: Department of Orthopedics, University Hospital Balgrist, University of Zürich, Zürich, Switzerland
- Florian Wanivenhaus: Department of Orthopedics, University Hospital Balgrist, University of Zürich, Zürich, Switzerland
- Florentin Liebmann: Computer Assisted Research and Development Group, University Hospital Balgrist, University of Zürich, Zürich, Switzerland; Laboratory for Orthopaedic Biomechanics, ETH Zürich, Zürich, Switzerland
- Tobias Götschi: Department of Orthopedics, University Hospital Balgrist, University of Zürich, Zürich, Switzerland
- Mazda Farshad: Department of Orthopedics, University Hospital Balgrist, University of Zürich, Zürich, Switzerland
14. Bari H, Wadhwani S, Dasari BVM. Role of artificial intelligence in hepatobiliary and pancreatic surgery. World J Gastrointest Surg 2021;13:7-18. [PMID: 33552391] [PMCID: PMC7830072] [DOI: 10.4240/wjgs.v13.i1.7]
Abstract
Over the past decade, enhanced preoperative imaging and visualization, improved delineation of the complex anatomical structures of the liver and pancreas, and intraoperative technological advances have helped deliver liver and pancreatic surgery with increased safety and better postoperative outcomes. Artificial intelligence (AI) has a major role to play in 3D visualization, virtual simulation, and augmented reality, which help in the training of surgeons and the future delivery of conventional, laparoscopic, and robotic hepatobiliary and pancreatic (HPB) surgery; artificial neural networks and machine learning have the potential to revolutionize individualized patient care during preoperative imaging and postoperative surveillance. In this paper, we review the existing evidence and outline the potential for applying AI in the perioperative care of patients undergoing HPB surgery.
Affiliation(s)
- Hassaan Bari: Department of HPB and Liver Transplantation Surgery, Queen Elizabeth Hospital, Birmingham B15 2TH, United Kingdom
- Sharan Wadhwani: Department of Radiology, Queen Elizabeth Hospital, Birmingham B15 2TH, United Kingdom
- Bobby V M Dasari: Department of HPB and Liver Transplantation Surgery, Queen Elizabeth Hospital, Birmingham B15 2TH, United Kingdom
15. Yu D, Zhou Q, Newn J, Dingler T, Velloso E, Goncalves J. Fully-Occluded Target Selection in Virtual Reality. IEEE Trans Vis Comput Graph 2020;26:3402-3413. [PMID: 32986552] [DOI: 10.1109/tvcg.2020.3023606]
Abstract
The presence of fully-occluded targets is common within virtual environments, ranging from a virtual object located behind a wall to a datapoint of interest hidden in a complex visualization. However, efficient input techniques for locating and selecting these targets are mostly underexplored in virtual reality (VR) systems. In this paper, we developed an initial set of seven techniques for fully-occluded target selection in VR. We then evaluated their performance in a user study and derived a set of design implications for simple and more complex tasks from our results. Based on these insights, we refined the most promising techniques and conducted a second, more comprehensive user study. Our results show how factors such as occlusion layers, target depths, object densities, and the estimation of target locations can affect technique performance. Our findings from both studies and distilled recommendations can inform the design of future VR systems that offer selection of fully-occluded targets.
16. Tuladhar S, AlSallami N, Alsadoon A, Prasad PWC, Alsadoon OH, Haddad S, Alrubaie A. A recent review and a taxonomy for hard and soft tissue visualization-based mixed reality. Int J Med Robot 2020;16:1-22. [PMID: 32388923] [DOI: 10.1002/rcs.2120]
Abstract
BACKGROUND Mixed reality (MR) visualization is gaining popularity in image-guided surgery (IGS) systems, especially for hard and soft tissue surgeries. However, few MR systems are implemented in real time. Several factors limit MR technology and make it difficult to set up and evaluate MR systems in real environments, including that end users are not considered, the constraints of the operating room, and that medical images are not fully integrated into the operative interventions. METHODOLOGY The purpose of this article is to use the Data, Visualization processing, and View (DVV) taxonomy to evaluate current MR systems. DVV includes all the components that must be considered and validated for MR used in hard and soft tissue surgeries. This taxonomy helps developers and end users, such as researchers and surgeons, enhance MR systems for the surgical field. RESULTS We evaluated, validated, and verified the taxonomy based on system comparison, completeness, and acceptance criteria. Around 24 state-of-the-art solutions related to MR visualization were selected and used to demonstrate and validate this taxonomy. The results showed that most of the findings were evaluated and the others validated. CONCLUSION The DVV taxonomy acts as a valuable resource for MR visualization in IGS. State-of-the-art solutions are classified, evaluated, validated, and verified to elaborate the process of MR visualization during surgery. The DVV taxonomy provides benefits to end users and guides future improvements in MR.
Affiliation(s)
- Selina Tuladhar: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Nada AlSallami: Computer Science Department, Worcester State University, Worcester, Massachusetts, USA
- Abeer Alsadoon: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia; Department of Information Technology, Study Group Australia, Sydney, New South Wales, Australia
- P W C Prasad: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Omar H Alsadoon: Department of Islamic Sciences, Al Iraqia University, Baghdad, Iraq
- Sami Haddad: Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, Sydney, New South Wales, Australia; Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
- Ahmad Alrubaie: Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
17. Liu H, Wu J, Tang Y, Li H, Wang W, Li C, Zhou Y. Percutaneous placement of lumbar pedicle screws via intraoperative CT image-based augmented reality-guided technology. J Neurosurg Spine 2020;32:542-547. [PMID: 31860809] [DOI: 10.3171/2019.10.spine19969]
Abstract
OBJECTIVE The authors aimed to assess, in a bone-agar experimental setting, the feasibility and accuracy of percutaneous lumbar pedicle screw placements using an intraoperative CT image-based augmented reality (AR)-guided method compared to placements using a radiograph-guided method. They also compared two AR hologram alignment methods. METHODS Twelve lumbar spine sawbones were completely embedded in hardened opaque agar, and a cubic marker was fixed on each phantom. After intraoperative CT, a 3D model of each phantom was generated, and a specialized application was deployed into an AR headset (Microsoft HoloLens). One hundred twenty pedicle screws, simulated by Kirschner wires (K-wires), were placed by two experienced surgeons, who each placed a total of 60 screws: 20 placed with a radiograph-guided technique, 20 with an AR technique in which the hologram was manually aligned, and 20 with an AR technique in which the hologram was automatically aligned. For each K-wire, the insertion path was expanded to a 6.5-mm diameter to simulate a lumbar pedicle screw. CT imaging of each phantom was performed after all K-wire placements, and the operative time required for each K-wire placement was recorded. An independent radiologist rated all images of K-wire placements. Outcomes were classified as grade I (no pedicle perforation), grade II (screw perforation of the cortex by up to 2 mm), or grade III (screw perforation of the cortex by > 2 mm). In a clinical situation, placements scored as grade I or II would be acceptable and safe for patients. RESULTS Among all screw placements, 75 (94%) of 80 AR-guided placements and 40 (100%) of 40 radiograph-guided placements were acceptable (i.e., grade I or II; p = 0.106). Radiograph-guided placements had more grade I outcomes than the AR-guided method (p < 0.0001). The accuracy of the two AR alignment methods (p = 0.526) was not statistically significantly different, and neither was it different between the AR and radiograph groups (p < 0.0001). AR-guided placements required less time than the radiograph-guided placements (mean ± standard deviation, 131.76 ± 24.57 vs 181.43 ± 15.82 seconds, p < 0.0001). Placements performed using the automatic-alignment method required less time than those using the manual-alignment method (124.20 ± 23.80 vs 139.33 ± 23.21 seconds, p = 0.0081). CONCLUSIONS In bone-agar experimental settings, AR-guided percutaneous lumbar pedicle screw placements were acceptable and more efficient than radiograph-guided placements. In a comparison of the two AR-guided placements, the automatic-alignment method was as accurate as the manual method but more efficient. Because of some limitations, the AR-guided system cannot be recommended in a clinical setting until there is significant improvement of this technology.
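The three-tier grading used here is a simple threshold rule on cortical perforation distance; a minimal sketch of that mapping (our illustration; the function names are ours, not the study's):

```python
def screw_grade(perforation_mm: float) -> str:
    """Grade a placement by cortical perforation, per the abstract's scheme:
    grade I = no perforation, grade II = up to 2 mm, grade III = over 2 mm."""
    if perforation_mm <= 0.0:
        return "I"
    return "II" if perforation_mm <= 2.0 else "III"

def clinically_acceptable(grade: str) -> bool:
    # The abstract treats grades I and II as acceptable and safe for patients.
    return grade in ("I", "II")

assert screw_grade(0.0) == "I"
assert clinically_acceptable(screw_grade(1.5))       # grade II, acceptable
assert not clinically_acceptable(screw_grade(2.3))   # grade III, unacceptable
```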
18. Fang C, Zhang P, Qi X. Digital and intelligent liver surgery in the new era: Prospects and dilemmas. EBioMedicine 2019;41:693-701. [PMID: 30773479] [PMCID: PMC6442371] [DOI: 10.1016/j.ebiom.2019.02.017]
Abstract
Despite tremendous advances in traditional imaging technology over the past few decades, the intraoperative identification of lesions is still based on naked-eye observation or preoperative image evaluation. However, these two-dimensional image data cannot objectively reflect the complex anatomical structure of the liver and the detailed morphological features of the lesion, which directly limits their clinical application value in surgery: they cannot improve the curative efficacy of surgery or the prognosis of the patient. This traditional mode of diagnosis and treatment is being changed by digital medical imaging technology, with its capacity for accurate and efficient diagnosis of diseases, selection of reasonable treatment schemes, improvement of radical resection rates, and reduction of surgical risk. In this paper, we review the latest applications of digital intelligent diagnosis and treatment technology related to liver surgery, in the hope that they may help to achieve accurate surgical treatment of liver diseases.
Affiliation(s)
- Chihua Fang: CHESS, The First Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou 510282, China
- Peng Zhang: CHESS, The First Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou 510282, China
- Xiaolong Qi: CHESS Frontier Center Working Party, The First Hospital of Lanzhou University, Lanzhou University, Lanzhou 730000, China
19. Drouin S, DiGiovanni DA, Kersten-Oertel MA, Collins L. Interaction driven enhancement of depth perception in angiographic volumes. IEEE Trans Vis Comput Graph 2018;26:2247-2257. [PMID: 30530366] [DOI: 10.1109/tvcg.2018.2884940]
Abstract
User interaction has the potential to greatly facilitate the exploration and understanding of 3D medical images for diagnosis and treatment. However, in certain specialized environments such as the operating room (OR), technical and physical constraints, such as the need to enforce strict sterility rules, make interaction challenging. In this paper, we propose to facilitate the intraoperative exploration of angiographic volumes by leveraging the motion of a tracked surgical pointer, a tool that is already manipulated by the surgeon when using a navigation system in the OR. We designed and implemented three interactive rendering techniques based on this principle. The benefit of each technique is compared to its non-interactive counterpart in a psychophysics experiment in which 20 medical imaging experts were asked to perform a reaching/targeting task while visualizing a 3D volume of angiographic data. The study showed a significant improvement in the appreciation of local vascular structure when using dynamic techniques, with no negative impact on the appreciation of global structure and only a marginal impact on execution speed. A qualitative evaluation of the different techniques showed a preference for dynamic chroma-depth, in accordance with the objective metrics, but a discrepancy between objective and subjective measures for dynamic aerial perspective and shading.
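Chroma-depth, the preferred cue here, encodes depth as hue along the visible spectrum, red for near structures through blue for far ones. A minimal sketch of the idea (our illustration, not the paper's renderer):

```python
import colorsys

def chromadepth_rgb(depth, near, far):
    """Map a depth value to an RGB color on the classic chroma-depth
    gradient: hue 0 (red) at the near plane, hue 2/3 (blue) at the far plane."""
    t = min(max((depth - near) / (far - near), 0.0), 1.0)  # normalize to [0, 1]
    hue = (2.0 / 3.0) * t
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

# Example: a vessel voxel halfway through the volume renders green.
print(chromadepth_rgb(55.0, near=10.0, far=100.0))
```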
20. Augmented visualization with depth perception cues to improve the surgeon's performance in minimally invasive surgery. Med Biol Eng Comput 2018;57:995-1013. [PMID: 30511205] [DOI: 10.1007/s11517-018-1929-6]
Abstract
Minimally invasive techniques, such as laparoscopy and radiofrequency ablation of tumors, bring important advantages in surgery: by minimizing incisions on the patient's body, they can reduce the hospitalization period and the risk of postoperative complications. Unfortunately, they come with drawbacks for surgeons, who have a restricted view of the operative area through indirect access and the 2D images provided by a camera inserted in the body. Augmented reality provides an "X-ray vision" of the patient's anatomy through visualization of the internal organs, freeing surgeons from the task of mentally mapping content from CT images onto the operative scene. We present a navigation system that supports surgeons in the preoperative and intraoperative phases, and an augmented reality system that superimposes virtual organs on the patient's body together with depth and distance information. We implemented a combination of visual and audio cues allowing the surgeon to improve intervention precision and avoid the risk of damaging anatomical structures. The test scenarios demonstrated the efficacy and accuracy of the system. Moreover, tests in the operating room suggested some modifications to the tracking system to make it more robust with respect to occlusions. Graphical abstract: Augmented visualization in minimally invasive surgery.
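The audio side of such cue systems is essentially a mapping from tool-to-structure distance to a warning signal. A hedged sketch of one plausible mapping (the names and thresholds are our assumptions, not the paper's):

```python
def proximity_beep_rate(distance_mm, danger_mm=5.0, alert_mm=30.0):
    """Map the distance between the tool tip and a critical structure to a
    beep repetition rate (Hz): silent when far, fastest inside the danger
    margin, ramping up Geiger-counter style in between."""
    if distance_mm >= alert_mm:
        return 0.0                     # far away: no audio cue
    if distance_mm <= danger_mm:
        return 10.0                    # inside the safety margin: max rate
    # Linear ramp from 1 Hz at the alert radius to 10 Hz at the margin.
    t = (alert_mm - distance_mm) / (alert_mm - danger_mm)
    return 1.0 + 9.0 * t

for d in (40.0, 25.0, 17.5, 5.0, 2.0):
    print(f"{d} mm -> {proximity_beep_rate(d):.1f} Hz")
```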
21. Zhang X, Chen G, Liao H. High-Quality See-Through Surgical Guidance System Using Enhanced 3-D Autostereoscopic Augmented Reality. IEEE Trans Biomed Eng 2017;64:1815-1825. [DOI: 10.1109/tbme.2016.2624632]
22. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66-90. [DOI: 10.1016/j.media.2017.01.007]
23. Augmented Endoscopic Images Overlaying Shape Changes in Bone Cutting Procedures. PLoS One 2016;11:e0161815. [PMID: 27584732] [PMCID: PMC5008631] [DOI: 10.1371/journal.pone.0161815]
Abstract
In microendoscopic discectomy for spinal disorders, bone cutting procedures are performed in tight spaces while observing a small portion of the target structures. Although optical tracking systems are able to measure the tip of the surgical tool during surgery, the poor shape information available during surgery makes accurate cutting difficult, even if preoperative computed tomography and magnetic resonance images are used for reference. Shape estimation and visualization of the target structures are essential for accurate cutting. However, time-varying shape changes during cutting procedures are still challenging issues for intraoperative navigation. This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support bone cutting procedures. This framework handles the history of the location of the measured drill tip as a volume label and visualizes the remains to be cut overlaid on the endoscopic image in real time. A cutting experiment was performed with volunteers, and the feasibility of this concept was examined using a clinical navigation system. The efficacy of the cutting aid was evaluated with respect to the shape similarity, total moved distance of a cutting tool, and required cutting time. The results of the experiments showed that cutting performance was significantly improved by the proposed framework.
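Recording the drill-tip history as a volume label can be pictured as carving spheres around tracked tip samples out of a planned-resection mask; a minimal sketch under assumed sizes and names (ours, not the authors' implementation):

```python
import numpy as np

VOXEL_MM = 0.5                                    # assumed isotropic voxel size
plan = np.zeros((128, 128, 128), dtype=bool)      # region planned to be cut
plan[40:80, 40:80, 40:80] = True                  # toy planned resection volume
cut = np.zeros_like(plan)                         # voxels already removed

def record_tip(tip_mm, radius_mm=2.0):
    """Label voxels within the burr radius of one tracked tip position."""
    center = np.asarray(tip_mm, dtype=float) / VOXEL_MM
    r = int(np.ceil(radius_mm / VOXEL_MM))
    lo = np.maximum(np.floor(center).astype(int) - r, 0)
    hi = np.minimum(np.floor(center).astype(int) + r + 1, plan.shape)
    zz, yy, xx = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    dist2 = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2
    sphere = dist2 <= (radius_mm / VOXEL_MM) ** 2
    cut[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] |= sphere

record_tip([30.0, 30.0, 30.0])                    # one tracked sample (mm)
remaining = plan & ~cut                           # what is left to cut: this
print(remaining.sum(), "voxels remain")           # mask drives the AR overlay
```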
24. Personalized, relevance-based Multimodal Robotic Imaging and augmented reality for Computer Assisted Interventions. Med Image Anal 2016;33:64-71. [PMID: 27475417] [DOI: 10.1016/j.media.2016.06.021]
Abstract
In the last decade, many researchers in medical image computing and computer-assisted interventions across the world have focused on the development of the Virtual Physiological Human (VPH), aiming to change the practice of medicine from the classification and treatment of diseases to the modeling and treating of patients. These projects resulted in major advancements in segmentation, registration, and morphological, physiological, and biomechanical modeling based on state-of-the-art medical imaging as well as other sensory data. However, a major issue that has not yet come into focus is the personalization of intraoperative imaging to allow optimal treatment. In this paper, we discuss the personalization of the imaging and visualization process, with particular focus on satisfying the challenging requirements of computer-assisted interventions. We discuss these requirements and review a series of scientific contributions made by our research team to tackle some of these major challenges.
25. Wang J, Suenaga H, Yang L, Kobayashi E, Sakuma I. Video see-through augmented reality for oral and maxillofacial surgery. Int J Med Robot 2016;13. [PMID: 27283505] [DOI: 10.1002/rcs.1754]
Abstract
BACKGROUND Oral and maxillofacial surgery has not benefitted from image guidance techniques owing to limitations in image registration. METHODS A real-time markerless image registration method is proposed by integrating a shape matching method into a 2D tracking framework. The image registration is performed by matching the patient's teeth model with intraoperative video to obtain its pose. The resulting pose is used to overlay relevant models from the same CT space on the camera video for augmented reality. RESULTS The proposed system was evaluated on mandible/maxilla phantoms, a volunteer, and clinical data. Experimental results show that the target overlay error is about 1 mm and that the registration update rate is 3-5 frames per second with a 4K camera. CONCLUSIONS The significance of this work lies in its simplicity in the clinical setting and its seamless integration into the current medical procedure with satisfactory response time and overlay accuracy.
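The overlay step itself is standard 2D-3D pose estimation followed by reprojection. As a generic illustration (not the paper's markerless shape-matching pipeline; the correspondences and intrinsics below are made-up inputs):

```python
import cv2
import numpy as np

# Assumed inputs: N 3D points on the teeth model (CT space, mm) and their
# matched 2D detections in the current video frame (pixels).
model_pts = np.array([[0, 0, 0], [22, 0, 0], [22, 9, 0], [0, 9, 4],
                      [11, 4, 2], [5, 8, 1]], dtype=np.float64)
image_pts = np.array([[320, 240], [420, 238], [421, 280], [318, 284],
                      [370, 262], [342, 278]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)  # assume an undistorted camera for this sketch

# Estimate the model pose in camera coordinates from the 2D-3D matches.
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)

# Any other structure from the same CT space (e.g., a nerve canal) can now
# be projected with the same pose and drawn onto the frame as the overlay.
ct_structure = np.array([[5.0, 5.0, -10.0], [15.0, 5.0, -12.0]])
proj, _ = cv2.projectPoints(ct_structure, rvec, tvec, K, dist)
print(proj.reshape(-1, 2))  # pixel coordinates to draw on the video frame
```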
Affiliation(s)
- Junchen Wang: School of Mechanical Engineering and Automation, Beihang University, Beijing, China; Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Hideyuki Suenaga: Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Liangjing Yang: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
26. Human-PnP: Ergonomic AR Interaction Paradigm for Manual Placement of Rigid Bodies. Augmented Environments for Computer-Assisted Interventions 2015. [DOI: 10.1007/978-3-319-24601-7_6]
27. Abhari K, Baxter JSH, Chen ECS, Khan AR, Peters TM, de Ribaupierre S, Eagleson R. Training for planning tumour resection: augmented reality and human factors. IEEE Trans Biomed Eng 2014;62:1466-1477. [PMID: 25546854] [DOI: 10.1109/tbme.2014.2385874]
Abstract
Planning surgical interventions is a complex task, demanding a high degree of perceptual, cognitive, and sensorimotor skill to reduce intra- and postoperative complications. This process requires spatial reasoning to coordinate between the preoperatively acquired medical images and the patient reference frames. In the case of neurosurgical interventions, traditional approaches to planning tend to focus on providing a means for visualizing medical images but rarely support transformation between different spatial reference frames. Thus, surgeons often rely on their previous experience and intuition as their sole guide when performing mental transformations. In the case of junior residents, this may lead to longer operation times or an increased chance of error under additional cognitive demands. In this paper, we introduce a mixed augmented-/virtual-reality system to facilitate training for planning a common neurosurgical procedure, brain tumour resection. The proposed system is designed and evaluated with human factors explicitly in mind, alleviating the difficulty of mental transformation. Our results indicate that, compared to conventional planning environments, the proposed system greatly improves nonclinicians' performance, independent of the sensorimotor tasks performed. Furthermore, use of the proposed system by clinicians resulted in a significant reduction in the time needed to perform clinically relevant tasks. These results demonstrate the role of mixed-reality systems in assisting residents to develop the spatial reasoning skills needed for planning brain tumour resection, improving patient outcomes.
Collapse
|
28
|
Guillaumee M, Vahdati SP, Tremblay E, Mader A, Bernasconi G, Cadarso VJ, Grossenbacher J, Brugger J, Sprague R, Moser C. Curved Holographic Combiner for Color Head Worn Display. J Disp Technol 2014. [DOI: 10.1109/jdt.2013.2277933] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
29
|
Design and validation of an augmented reality system for laparoscopic surgery in a real environment. Biomed Res Int 2013; 2013:758491. [PMID: 24236293 PMCID: PMC3819885 DOI: 10.1155/2013/758491] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2013] [Revised: 09/12/2013] [Accepted: 09/16/2013] [Indexed: 01/19/2023]
Abstract
Purpose. This work presents the protocol used in the development and validation of an augmented reality system installed in an operating theatre to help surgeons with trocar placement during laparoscopic surgery. The purpose of this validation is to demonstrate the improvements this system can provide to the field of medicine, particularly surgery. Method. Two experiments, noninvasive for both the patient and the surgeon, were designed. In one experiment the augmented reality system was used; the other served as the control, in which the system was not used. The operation selected for all cases was cholecystectomy, owing to its low degree of complexity and complications before, during, and after surgery. The technique used for trocar placement was the French technique, but the results can be extrapolated to any other technique and operation. Results and Conclusion. Four clinicians and ninety-six measurements obtained from twenty-four patients (randomly assigned to each experiment) were involved in these experiments. The final results show improvements in accuracy and variability of 33% and 63%, respectively, compared with traditional methods, demonstrating that an augmented reality system offers advantages for trocar placement in laparoscopic surgery.
Collapse
|
30
|
Mohareri O, Rad AB. A vision-based location positioning system via augmented reality: an application in humanoid robot navigation. Int J Hum Robot 2013. [DOI: 10.1142/s0219843613500199] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
In this paper, we present a vision-based localization system using mobile augmented reality (MAR) and mobile audio augmented reality (MAAR) techniques, applicable to the navigation of both humans and humanoid robots in indoor environments. In the first stage, we propose a system that recognizes the location of a user from the image sequence of an indoor environment captured by an onboard camera. The location information is added to the user's view in the form of 3D objects and audio cues carrying location information and navigation instructions via augmented reality (AR). The location is recognized using prior knowledge of the layout of the environment and the locations of the AR markers. The image sequence can be obtained with a smartphone camera, and marker detection, 3D object placement, and audio augmentation are performed by the phone's processor and graphics/audio modules. This greatly reduces the hardware complexity of such navigation systems, replacing a setup consisting of a mobile PC, wireless camera, head-mounted display (HMD), and remote PC with a single camera-equipped smartphone. In the second stage, the same algorithm is employed as a novel vision-based approach to autonomous humanoid robot localization and navigation. The proposed technique is implemented on the humanoid robot NAO and improves on the robot's navigation and localization performance previously achieved with an extended Kalman filter (EKF) by presenting location-based information to the robot through AR markers placed in the robot's environment.
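A minimal sketch of the marker-based localization step described above, assuming a known map of marker poses. It uses the classic cv2.aruco API (OpenCV before 4.7; newer releases wrap detection in cv2.aruco.ArucoDetector); marker IDs, sizes, and the map are hypothetical:

```python
# Sketch only: detect a known fiducial marker, recover its pose, and use
# the prior map of marker locations to infer where the camera is.
import cv2
import numpy as np

MARKER_SIZE = 0.10   # marker edge length in metres (assumed)
MARKER_MAP = {7: np.eye(4)}  # hypothetical: marker id -> 4x4 world pose

def locate_camera(frame, K, dist):
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict)
    if ids is None:
        return None
    # 3D corners of a square marker centred at its own origin.
    s = MARKER_SIZE / 2
    obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]],
                   dtype=np.float32)
    for c, marker_id in zip(corners, ids.flatten()):
        if int(marker_id) not in MARKER_MAP:
            continue
        ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(-1, 2), K, dist)
        if not ok:
            continue
        R, _ = cv2.Rodrigues(rvec)
        T_cam_marker = np.eye(4)
        T_cam_marker[:3, :3], T_cam_marker[:3, 3] = R, tvec.ravel()
        # camera pose in world = marker's world pose * inverse(camera->marker)
        return MARKER_MAP[int(marker_id)] @ np.linalg.inv(T_cam_marker)
    return None
```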
Collapse
Affiliation(s)
- Omid Mohareri
- Electrical and Computer Engineering Department, University of British Columbia, 2332 Main Mall, Vancouver, BC V6T 1Z4, Canada
| | - Ahmad B. Rad
- School of Engineering Science, Simon Fraser University, 250-13450-102nd Avenue, Surrey, BC V3T 0A3, Canada
| |
Collapse
|
31
|
Abe Y, Sato S, Kato K, Hyakumachi T, Yanagibashi Y, Ito M, Abumi K. A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note. J Neurosurg Spine 2013; 19:492-501. [PMID: 23952323 DOI: 10.3171/2013.7.spine12917] [Citation(s) in RCA: 91] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
Augmented reality (AR) is an imaging technology by which virtual objects are overlaid onto images of real objects captured in real time by a tracking camera. This study introduces a novel AR guidance system, the virtual protractor with augmented reality (VIPAR), to visualize the needle trajectory in 3D space during percutaneous vertebroplasty (PVP). The AR system comprised a head-mounted display (HMD) with a tracking camera and a marker sheet. An augmented scene was created by overlaying the preoperatively generated needle trajectory path onto a marker detected on the patient using AR software, thereby providing the surgeon with augmented views in real time through the HMD. The accuracy of the system was evaluated using a computer-generated simulation model in a spine phantom and clinically in 5 patients. In the 40 spine phantom trials, the error of the insertion angle (EIA), defined as the difference between the attempted angle and the insertion angle, was evaluated using 3D CT scanning. CT analysis of the 40 spine phantom trials showed that the EIA in the axial plane improved significantly when VIPAR was used compared with when it was not (0.96° ± 0.61° vs 4.34° ± 2.36°, respectively). The same held true for the EIA in the sagittal plane (0.61° ± 0.70° vs 2.55° ± 1.93°, respectively). In the clinical evaluation, 5 patients with osteoporotic vertebral fractures underwent VIPAR-guided PVP from October 2011 to May 2012, and the postoperative EIA was evaluated using CT. Across all 10 needle insertions, the EIA was 2.09° ± 1.3° in the axial plane and 1.98° ± 1.8° in the sagittal plane. There was no pedicle breach or leakage of polymethylmethacrylate. VIPAR successfully assisted needle insertion during PVP by providing the surgeon with an ideal insertion point and needle trajectory through the HMD. These findings indicate that AR guidance technology can become a useful assistive device for spine surgeries requiring percutaneous procedures.
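A minimal sketch of the error-of-insertion-angle (EIA) metric as the abstract defines it: the difference between the attempted and achieved insertion angles, evaluated separately in the axial and sagittal planes. The anatomical axis conventions and the trajectories below are assumptions, not taken from the paper:

```python
# Sketch only; axis conventions assumed: x = left-right,
# y = antero-posterior, z = cranio-caudal.
import numpy as np

def plane_angle(v, drop_axis):
    """Angle (degrees) of direction v projected onto the plane that drops one axis.

    drop_axis=2 projects onto the axial (x-y) plane;
    drop_axis=0 projects onto the sagittal (y-z) plane.
    """
    kept = [i for i in range(3) if i != drop_axis]
    p = np.asarray(v, dtype=float)[kept]
    return np.degrees(np.arctan2(p[1], p[0]))

def eia(planned_dir, actual_dir, drop_axis):
    d = abs(plane_angle(planned_dir, drop_axis) - plane_angle(actual_dir, drop_axis))
    return min(d, 360.0 - d)  # wrap to the smaller angle

planned = [0.2, 0.9, -0.4]    # hypothetical planned trajectory
actual  = [0.25, 0.88, -0.41]  # hypothetical achieved trajectory
print(eia(planned, actual, drop_axis=2))  # axial-plane EIA in degrees
print(eia(planned, actual, drop_axis=0))  # sagittal-plane EIA in degrees
```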
Collapse
Affiliation(s)
- Yuichiro Abe
- Department of Orthopedic Surgery, Eniwa Hospital, Eniwa, Hokkaido
| | | | | | | | | | | | | |
Collapse
|
32
|
Kersten-Oertel M, Jannin P, Collins DL. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph 2013; 37:98-112. [PMID: 23490236 DOI: 10.1016/j.compmedimag.2013.01.009] [Citation(s) in RCA: 106] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2012] [Revised: 01/04/2013] [Accepted: 01/23/2013] [Indexed: 11/26/2022]
Abstract
This paper presents a review of the state of the art of visualization in mixed reality image guided surgery (IGS). We used the DVV (data, visualization processing, view) taxonomy to classify a large unbiased selection of publications in the field. The goal of this work was not only to give an overview of current visualization methods and techniques in IGS but more importantly to analyze the current trends and solutions used in the domain. In surveying the current landscape of mixed reality IGS systems, we identified a strong need to assess which of the many possible data sets should be visualized at particular surgical steps, to focus on novel visualization processing techniques and interface solutions, and to evaluate new systems.
Collapse
Affiliation(s)
- Marta Kersten-Oertel
- Department of Biomedical Engineering, McGill University, McConnell Brain Imaging Center, Montreal Neurological Institute, Montréal, Canada.
| | | | | |
Collapse
|
33
|
Vitiello V, Lee SL, Cundy TP, Yang GZ. Emerging robotic platforms for minimally invasive surgery. IEEE Rev Biomed Eng 2012; 6:111-26. [PMID: 23288354 DOI: 10.1109/rbme.2012.2236311] [Citation(s) in RCA: 138] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Recent technological advances in surgery have resulted in the development of a range of new techniques that have reduced patient trauma, shortened hospitalization, and improved diagnostic accuracy and therapeutic outcome. Despite the many appreciated benefits of minimally invasive surgery (MIS) compared to traditional approaches, there are still significant drawbacks associated with conventional MIS including poor instrument control and ergonomics caused by rigid instrumentation and its associated fulcrum effect. The use of robot assistance has helped to realize the full potential of MIS with improved consistency, safety and accuracy. The development of articulated, precision tools to enhance the surgeon's dexterity has evolved in parallel with advances in imaging and human-robot interaction. This has improved hand-eye coordination and manual precision down to micron scales, with the capability of navigating through complex anatomical pathways. In this review paper, clinical requirements and technical challenges related to the design of robotic platforms for flexible access surgery are discussed. Allied technical approaches and engineering challenges related to instrument design, intraoperative guidance, and intelligent human-robot interaction are reviewed. We also highlight emerging designs and research opportunities in the field by assessing the current limitations and open technical challenges for the wider clinical uptake of robotic platforms in MIS.
Collapse
|
34
|
Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph 2012; 18:332-352. [PMID: 21383411 DOI: 10.1109/tvcg.2011.50] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
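As an illustration only, the three DVV components could be encoded as a simple checklist structure when auditing a system against the taxonomy; the field names and example contents below are hypothetical, not drawn from the paper:

```python
# Hypothetical encoding of a DVV classification; illustrative only.
from dataclasses import dataclass, field

@dataclass
class DVVClassification:
    data: list[str] = field(default_factory=list)         # e.g. preop CT, tracked tool poses
    visualization_processing: list[str] = field(default_factory=list)  # e.g. surface rendering
    view: list[str] = field(default_factory=list)         # e.g. HMD, OR monitor
    validated: dict[str, bool] = field(default_factory=dict)  # per-component evidence

system = DVVClassification(
    data=["preoperative CT", "tracked pointer"],
    visualization_processing=["surface rendering", "transparency blending"],
    view=["half-silvered mirror display"],
    validated={"view": True, "visualization_processing": False},
)
```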
Collapse
Affiliation(s)
- Marta Kersten-Oertel
- McConnell Brain Imaging Center at the Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada.
| | | | | |
Collapse
|
35
|
Athanasiou T, Ashrafian H, Rowland SP, Casula R. Robotic cardiac surgery: advanced minimally invasive technology hindered by barriers to adoption. Future Cardiol 2012; 7:511-22. [PMID: 21797747 DOI: 10.2217/fca.11.40] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Robotic cardiac surgery utilizes the most advanced surgical technology to offer patients a minimally invasive alternative to open surgery in the treatment of a broad range of cardiac pathologies. Although robotics may offer substantial benefits to physicians, patients and healthcare institutions, there are important barriers to its adoption, including inadequate funding, competition from alternative therapies and challenges in training. There is a growing body of evidence demonstrating the efficacy of robotic cardiac surgery. Technological innovations are improving patient safety and expanding the indications for robotic cardiac surgery beyond the treatment of mitral valve and coronary artery disease. Robotic cardiac surgery is rapidly becoming a feasible, safe and effective option for the definitive treatment of cardiac disease in the context of 21st century challenges to healthcare provision such as diabetes, obesity and an aging population.
Collapse
Affiliation(s)
- Thanos Athanasiou
- Department of Surgery & Cancer, Imperial College London, London W2 1NY, UK.
| | | | | | | |
Collapse
|
36
|
Transcervical Heller myotomy using flexible endoscopy. J Gastrointest Surg 2010; 14:1902-9. [PMID: 20721635 DOI: 10.1007/s11605-010-1290-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/22/2010] [Accepted: 08/05/2010] [Indexed: 01/31/2023]
Abstract
INTRODUCTION Esophageal achalasia is most commonly treated by laparoscopic myotomy. Transesophageal approaches using flexible endoscopy have recently been described. We hypothesized that using techniques and flexible instruments from our NOTES experience through a small cervical incision would offer a safer and less traumatic route for esophageal myotomy. The purpose of this study was to evaluate the feasibility, safety, and success rate of using flexible endoscopes to perform anterior or posterior Heller myotomy via a transcervical approach. METHODS This animal (porcine) and human cadaver study was conducted at the Legacy Research and Technology Center. Mediastinal operations on ten live, anesthetized pigs and two human cadavers were performed using standard flexible endoscopes through a small incision at the suprasternal notch. The esophagus was dissected to the phreno-esophageal junction using balloon dilatation in the peri-esophageal space, followed by either anterior or posterior distal esophageal myotomy. We recorded the success rates of esophageal dissection to the diaphragm and proximal stomach and of anterior and posterior myotomy, along with perforation and complication rates. RESULTS Dissection of the esophagus to the diaphragm and esophageal myotomy were achieved in 100% of attempts. Posterior Heller myotomy was always extendable onto the gastric wall, whereas anterior gastric extension of the myotomy proved more difficult (4/4 and 2/8, respectively; P = 0.061). CONCLUSION Heller myotomy through a small cervical incision using flexible endoscopes is feasible. A complete Heller myotomy was performed with a higher success rate posteriorly, possibly owing to less anatomic interference.
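The reported posterior-versus-anterior comparison (4/4 vs 2/8; P = 0.061) is consistent with a two-sided Fisher's exact test on the success counts; a quick check under that assumption:

```python
# Sketch assuming a two-sided Fisher's exact test was used; the abstract
# does not name the test explicitly.
from scipy.stats import fisher_exact

table = [[4, 0],   # posterior: success, failure
         [2, 6]]   # anterior:  success, failure
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(round(p, 3))  # 0.061, matching the reported P value
```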
Collapse
|
37
|
Mediastinal surgery in connective tissue tunnels using flexible endoscopy. Surg Endosc 2010; 24:2120-7. [DOI: 10.1007/s00464-010-0908-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2009] [Accepted: 01/14/2010] [Indexed: 11/27/2022]
|
38
|
Linte CA, White J, Eagleson R, Guiraudon GM, Peters TM. Virtual and Augmented Medical Imaging Environments: Enabling Technology for Minimally Invasive Cardiac Interventional Guidance. IEEE Rev Biomed Eng 2010; 3:25-47. [DOI: 10.1109/rbme.2010.2082522] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|