1
Tarascó J, Caballero A, Moreno P, Velázquez M, Balibrea JM. Implementation of a multimedia application to provide an immersive experience to assistants and viewers during robotic surgery. J Robot Surg 2025; 19:79. [PMID: 39987367] [DOI: 10.1007/s11701-025-02244-1]
Affiliation(s)
- Jordi Tarascó
- Endocrine, Metabolic and Bariatric Surgery Unit, Germans Trias i Pujol University Hospital, Carretera del Canyet S/N, 08916, Badalona, Spain
- Department of Surgery, Universitat Autònoma de Barcelona, Barcelona, Spain
- Albert Caballero
- Endocrine, Metabolic and Bariatric Surgery Unit, Germans Trias i Pujol University Hospital, Carretera del Canyet S/N, 08916, Badalona, Spain
- Department of Surgery, Universitat Autònoma de Barcelona, Barcelona, Spain
- Pau Moreno
- Endocrine, Metabolic and Bariatric Surgery Unit, Germans Trias i Pujol University Hospital, Carretera del Canyet S/N, 08916, Badalona, Spain
- Department of Surgery, Universitat Autònoma de Barcelona, Barcelona, Spain
- José M Balibrea
- Endocrine, Metabolic and Bariatric Surgery Unit, Germans Trias i Pujol University Hospital, Carretera del Canyet S/N, 08916, Badalona, Spain
- Department of Surgery, Universitat Autònoma de Barcelona, Barcelona, Spain
- iVascular®-UAB Surgical Research Chair, Barcelona, Spain
2
Chang YZ, Wu CT. Application of extended reality in pediatric neurosurgery: A comprehensive review. Biomed J 2024:100822. [PMID: 39657864] [DOI: 10.1016/j.bj.2024.100822]
Abstract
The integration of Extended Reality (XR) technologies, including Augmented Reality (AR) and Virtual Reality (VR), represents a significant advancement in pediatric neurosurgery. These technologies offer immersive and interactive 3D visualization capabilities, which enhance the precision and accuracy of surgical procedures. This comprehensive review systematically examines the current applications of XR in pediatric neurosurgery. The review adheres to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, which provide criteria for reporting systematic reviews and meta-analyses. It also utilizes the PICOS (Population, Intervention, Comparison, Outcome, Study design) framework to formulate research questions and structure literature searches. A thorough search of multiple databases yielded 1,434 relevant articles, supplemented by an additional 55 articles obtained through manual searches. The review includes a detailed analysis of the XR workflow, its surgical applications, and associated outcomes. It emphasizes the practical benefits of XR in preoperative planning, intraoperative navigation, and postoperative assessment. Furthermore, the paper discusses the challenges, opportunities, and future prospects of XR in pediatric neurosurgery, including its effects on surgical outcomes, medical education, and patient care. By synthesizing technological developments with clinical applications, this review provides a comprehensive understanding of the multifaceted roles of AR and VR in pediatric neurosurgical practice. It covers innovative methods, applicable scenarios, datasets, and metrics, along with a comparative analysis of state-of-the-art techniques, considering differences in input data. Ultimately, this review aims to present an overview of the current landscape of XR in pediatric neurosurgery to inform future research and clinical practice.
Affiliation(s)
- Yau-Zen Chang
- Department of Mechanical Engineering, Chang Gung University, Taoyuan, Taiwan; Department of Neurosurgery, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Department of Mechanical Engineering, Ming Chi University of Technology, New Taipei City, Taiwan
- Chieh-Tsai Wu
- Department of Neurosurgery, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
3
Keramati H, Lu X, Cabanag M, Wu L, Kushwaha V, Beier S. Applications and advances of immersive technology in cardiology. Curr Probl Cardiol 2024; 49:102762. [PMID: 39067719] [DOI: 10.1016/j.cpcardiol.2024.102762]
Abstract
Different forms of immersive technology, such as Virtual Reality (VR) and Augmented Reality (AR), are attracting growing investment in medicine. Advances in head-mounted display technology, processing, and rendering power have demonstrated the increasing utility of immersive technology in medicine and the healthcare environment, and the number of publications on immersive technology in cardiology is growing. We reviewed articles published within the last decade that reported case studies or research using or investigating immersive technology across the broad field of cardiology, from education to preoperative planning and intraoperative guidance. We summarized the advantages and disadvantages of AR and VR for various application categories. Our review highlights the need for robust assessment of the effectiveness of these methods and discusses the technical limitations that hinder the complete integration of AR and VR in cardiology, including cost-effectiveness and regulatory compliance. Despite the limitations and gaps that have so far prevented immersive technologies from reaching their full potential in cardiology, their promise for standard cardiovascular care is clear.
Affiliation(s)
- Hamed Keramati
- School of Mechanical and Manufacturing Engineering, Faculty of Engineering, The University of New South Wales, Sydney 2052, NSW, Australia
- Xueqing Lu
- Learning and Digital Environments, Deputy Vice-Chancellor Education and Student Experience, The University of New South Wales, Sydney 2052, NSW, Australia
- Matt Cabanag
- School of Art and Design, Faculty of Arts, Design and Architecture, The University of New South Wales, Sydney 2052, NSW, Australia
- Liao Wu
- School of Mechanical and Manufacturing Engineering, Faculty of Engineering, The University of New South Wales, Sydney 2052, NSW, Australia
- Virag Kushwaha
- Eastern Heart Clinic, Prince of Wales Hospital, Barker Street Randwick, NSW 2031, Australia; Faculty of Medicine, The University of New South Wales, Kensington, Sydney 2033, NSW, Australia
- Susann Beier
- School of Mechanical and Manufacturing Engineering, Faculty of Engineering, The University of New South Wales, Sydney 2052, NSW, Australia
4
Chen Z, Cruciani L, Fan K, Fontana M, Lievore E, De Cobelli O, Musi G, Ferrigno G, De Momi E. Towards safer robot-assisted surgery: A markerless augmented reality framework. Neural Netw 2024; 178:106469. [PMID: 38925030] [DOI: 10.1016/j.neunet.2024.106469]
Abstract
Robot-assisted surgery is developing rapidly in the medical field, and the integration of augmented reality shows potential to improve surgeons' operative performance by providing additional visual information. In this paper, we propose a markerless augmented reality framework to enhance safety by helping avoid intra-operative bleeding, a major risk caused by collisions between surgical instruments and delicate blood vessels (arteries or veins). Advanced stereo reconstruction and segmentation networks are compared to find the best combination for reconstructing the intra-operative blood vessel in 3D space for registration with the pre-operative model, and minimum distance detection between the instruments and the blood vessel is implemented. A robot-assisted lymphadenectomy is emulated on the da Vinci Research Kit in a dry lab, and ten human subjects perform this operation to explore the usability of the proposed framework. The results show that the augmented reality framework helps users avoid dangerous collisions between the instruments and delicate blood vessels without introducing extra workload. It provides a flexible framework that integrates augmented reality into the medical robotic platform to enhance safety during surgery.
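The framework's safety check reduces to a nearest-neighbor query between tracked instrument positions and the reconstructed vessel points. The sketch below is an illustration under assumed data layouts (point arrays in millimeters, a hypothetical warning threshold), not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_vessel_index(vessel_points: np.ndarray) -> cKDTree:
    """Index the reconstructed vessel surface, an (N, 3) array of points."""
    return cKDTree(vessel_points)

def min_instrument_distance(tree: cKDTree, tip_positions: np.ndarray,
                            warn_threshold_mm: float = 5.0):
    """Return the minimum tip-to-vessel distance and a collision warning flag."""
    distances, _ = tree.query(tip_positions)  # nearest vessel point per tip
    d_min = float(distances.min())
    return d_min, d_min < warn_threshold_mm

# Synthetic usage: one instrument tip near a random vessel point cloud.
vessel = np.random.rand(10_000, 3) * 100.0   # stand-in for the registered model
tips = np.array([[50.0, 50.0, 52.0]])
d, warn = min_instrument_distance(build_vessel_index(vessel), tips)
```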
Affiliation(s)
- Ziyang Chen
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Laura Cruciani
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Ke Fan
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Matteo Fontana
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
- Elena Lievore
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
- Ottavio De Cobelli
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy; University of Milan, Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, Milan, 20122, Italy
- Gennaro Musi
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy; University of Milan, Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, Milan, 20122, Italy
- Giancarlo Ferrigno
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Elena De Momi
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy; European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
5
Li C, Zhang G, Zhao B, Xie D, Du H, Duan X, Hu Y, Zhang L. Advances of surgical robotics: image-guided classification and application. Natl Sci Rev 2024; 11:nwae186. [PMID: 39144738] [PMCID: PMC11321255] [DOI: 10.1093/nsr/nwae186]
Abstract
Surgical robotics for minimally invasive surgery has developed rapidly and has attracted increasing research attention in recent years. A common consensus has been reached that surgical procedures should become less traumatic and incorporate more intelligence and higher autonomy, which poses a serious challenge to the environmental sensing capabilities of robotic systems. One of the main sources of environmental information for robots is images, which form the basis of robot vision. In this review article, we divide clinical images into direct and indirect, based on the object of information acquisition, and into continuous, intermittent continuous, and discontinuous, according to the target-tracking frequency. The characteristics and applications of existing surgical robots in each category are introduced along these two dimensions. Our purpose in conducting this review was to analyze, summarize, and discuss the current evidence on general rules for the application of image technologies for medical purposes. Our analysis provides insight and guidance conducive to the development of more advanced surgical robotics systems in the future.
Affiliation(s)
- Changsheng Li
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Gongzi Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Baoliang Zhao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dongsheng Xie
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hailong Du
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Xingguang Duan
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lihai Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
6
Ding H, Sun W, Zheng G. Robot-Assisted Augmented Reality (AR)-Guided Surgical Navigation for Periacetabular Osteotomy. Sensors (Basel) 2024; 24:4754. [PMID: 39066150] [PMCID: PMC11280818] [DOI: 10.3390/s24144754]
Abstract
Periacetabular osteotomy (PAO) is an effective approach for the surgical treatment of developmental dysplasia of the hip (DDH). However, due to the complex anatomical structure around the hip joint and the limited field of view (FoV) during surgery, it is challenging for surgeons to perform PAO. To address this challenge, we propose a robot-assisted, augmented reality (AR)-guided surgical navigation system for PAO. The system mainly consists of a robot arm, an optical tracker, and a Microsoft HoloLens 2 headset, which is a state-of-the-art (SOTA) optical see-through (OST) head-mounted display (HMD). For AR guidance, we propose an optical marker-based AR registration method to estimate a transformation from the optical tracker coordinate system (COS) to the virtual space COS such that the virtual models can be superimposed on the corresponding physical counterparts. Furthermore, to guide the osteotomy, the developed system automatically aligns a bone saw with osteotomy planes planned in preoperative images. It then provides surgeons with not only virtual constraints to restrict movement of the bone saw but also AR guidance for visual feedback without sight diversion, leading to higher surgical accuracy and improved surgical safety. Comprehensive experiments were conducted to evaluate both the AR registration accuracy and the osteotomy accuracy of the developed navigation system. The proposed AR registration method achieved an average mean absolute distance error (mADE) of 1.96 ± 0.43 mm. The robotic system achieved an average center translation error of 0.96 ± 0.23 mm, an average maximum distance of 1.31 ± 0.20 mm, and an average angular deviation of 3.77 ± 0.85°. These results demonstrate both the AR registration accuracy and the osteotomy accuracy of the developed system.
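The optical marker-based registration described above boils down to estimating a rigid transform from paired marker positions in the two coordinate systems. A minimal sketch of the classic SVD-based (Kabsch) solution and the mADE metric follows; it is an assumed illustration of the standard technique, not the paper's code:

```python
import numpy as np

def rigid_register(P: np.ndarray, Q: np.ndarray):
    """Estimate R, t minimizing ||R @ p + t - q|| over (N, 3) point pairs."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # reflection-safe rotation
    t = cQ - R @ cP
    return R, t

def mean_absolute_distance_error(P, Q, R, t):
    """mADE over validation points after applying the estimated transform."""
    return float(np.linalg.norm((R @ P.T).T + t - Q, axis=1).mean())
```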
Affiliation(s)
- Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; (H.D.); (W.S.)
7
De Jesus Encarnacion Ramirez M, Chmutin G, Nurmukhametov R, Soto GR, Kannan S, Piavchenko G, Nikolenko V, Efe IE, Romero AR, Mukengeshay JN, Simfukwe K, Mpoyi Cherubin T, Nicolosi F, Sharif S, Roa JC, Montemurro N. Integrating Augmented Reality in Spine Surgery: Redefining Precision with New Technologies. Brain Sci 2024; 14:645. [PMID: 39061386] [PMCID: PMC11274952] [DOI: 10.3390/brainsci14070645]
Abstract
INTRODUCTION The integration of augmented reality (AR) in spine surgery marks a significant advancement, enhancing surgical precision and patient outcomes. AR provides immersive, three-dimensional visualizations of anatomical structures, facilitating meticulous planning and execution of spine surgeries. This technology not only improves spatial understanding and real-time navigation during procedures but also aims to reduce surgical invasiveness and operative times. Despite its potential, challenges such as model accuracy, user interface design, and the learning curve for new technology must be addressed. AR's application extends beyond the operating room, offering valuable tools for medical education and improving patient communication and satisfaction. MATERIAL AND METHODS A literature review was conducted by searching the PubMed and Scopus databases using keywords related to augmented reality in spine surgery, covering publications from January 2020 to January 2024. RESULTS In total, 319 articles were identified through the initial database search. After screening titles and abstracts, 11 articles were included in the qualitative synthesis. CONCLUSION Augmented reality (AR) is becoming a transformative force in spine surgery, enhancing precision, education, and outcomes despite hurdles like technical limitations and integration challenges. AR's immersive visualizations and educational innovations, coupled with its potential synergy with AI and machine learning, indicate a bright future for surgical care. Despite the existing obstacles, AR's impact on improving surgical accuracy and safety marks a significant leap forward in patient treatment and care.
Affiliation(s)
- Gennady Chmutin
- Department of Neurosurgery, Russian People’s Friendship University, 117198 Moscow, Russia
- Renat Nurmukhametov
- Department of Neurosurgery, Russian People’s Friendship University, 117198 Moscow, Russia
- Gervith Reyes Soto
- Department of Head and Neck, Unidad de Neurociencias, Instituto Nacional de Cancerología, Mexico City 14080, Mexico
- Siddarth Kannan
- School of Medicine, University of Central Lancashire, Preston PR0 2AA, UK
- Gennadi Piavchenko
- Department of Human Anatomy and Histology, Sechenov University, 119911 Moscow, Russia
- Vladmir Nikolenko
- Department of Neurosurgery, I.M. Sechenov First Moscow State Medical University (Sechenov University), 119991 Moscow, Russia
- Ibrahim E. Efe
- Department of Neurosurgery, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, 10178 Berlin, Germany
- Keith Simfukwe
- Department of Neurosurgery, Russian People’s Friendship University, 117198 Moscow, Russia
- Federico Nicolosi
- Department of Medicine and Surgery, Neurosurgery, University of Milano-Bicocca, 20126 Milan, Italy
- Salman Sharif
- Department of Neurosurgery, Liaquat National Hospital and Medical College, Karachi 05444, Pakistan
- Juan Carlos Roa
- Department of Pathology, School of Medicine, Pontificia Universidad Católica de Chile, Santiago 8330024, Chile
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliero Universitaria Pisana (AOUP), 56100 Pisa, Italy
8
Li W, Zhang X, Shi P, Li S, Li P, Yu H. Across Sessions and Subjects Domain Adaptation for Building Robust Myoelectric Interface. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2005-2015. [PMID: 38147425] [DOI: 10.1109/tnsre.2023.3347540]
Abstract
Gesture interaction via surface electromyography (sEMG) signals is a promising approach for advanced human-computer interaction systems. However, improving the performance of the myoelectric interface is challenging due to the domain shift caused by the signal's inherent variability. To enhance the interface's robustness, we propose a novel adaptive information fusion neural network (AIFNN) framework, which can effectively reduce the effects of multiple scenarios. Specifically, domain adversarial training is established to inhibit the shared network's weights from exploiting domain-specific representations, thus allowing for the extraction of domain-invariant features. Classification loss, domain divergence loss and domain discrimination loss are employed, which improve classification performance while reducing distribution mismatches between the two domains. To simulate the application of a myoelectric interface, experiments were carried out involving three scenarios (intra-session, inter-session and inter-subject). Ten non-disabled subjects were recruited to perform sixteen gestures for ten consecutive days. The experimental results indicated that the performance of AIFNN was better than that of two other state-of-the-art transfer learning approaches, namely fine-tuning (FT) and domain adversarial network (DANN). This study demonstrates the capability of AIFNN to maintain robustness over time and generalize across users in practical myoelectric interface implementations. These findings could serve as a foundation for future deployments.
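The domain-adversarial part of such a framework is commonly realized with a gradient reversal layer, so a single backward pass minimizes gesture classification loss while maximizing domain confusion. Below is a hedged PyTorch sketch of that mechanism; the layer sizes, feature dimension, and loss weighting are illustrative assumptions, not the AIFNN specification:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # flip the gradient sign

feature_extractor = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
gesture_head = nn.Linear(128, 16)   # sixteen gestures, as in the study
domain_head = nn.Linear(128, 2)     # e.g., source vs. target session/subject

def adversarial_losses(x, y_gesture, y_domain, lambd=1.0):
    feats = feature_extractor(x)
    cls_loss = F.cross_entropy(gesture_head(feats), y_gesture)
    dom_logits = domain_head(GradReverse.apply(feats, lambd))
    dom_loss = F.cross_entropy(dom_logits, y_domain)
    return cls_loss + dom_loss
```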
9
Zhang X, Zhang Y, Yang J, Du H. A prostate seed implantation robot system based on human-computer interactions: Augmented reality and voice control. Math Biosci Eng 2024; 21:5947-5971. [PMID: 38872565] [DOI: 10.3934/mbe.2024262]
Abstract
Robot-assisted prostate seed implantation technology has developed rapidly. However, some problems remain to be solved, such as non-intuitive visualization and complicated robot control. To improve the intelligence and visualization of the operation process, a voice control technology for a prostate seed implantation robot in an augmented reality environment is proposed. Initially, the MRI image of the prostate was denoised and segmented, and a three-dimensional model of the prostate and its surrounding tissues was reconstructed by surface rendering. Combined with a holographic application program, an augmented reality system for prostate seed implantation was built. An improved singular value decomposition three-dimensional registration algorithm based on the iterative closest point was proposed, and three-dimensional registration experiments verified that the algorithm effectively improves registration accuracy. A fusion algorithm based on spectral subtraction and a BP neural network was proposed. The experimental results showed that the average delay of the fusion algorithm was 1.314 s, and the overall response time of the integrated system was 1.5 s. The fusion algorithm effectively improves the reliability of the voice control system, and the integrated system meets the responsiveness requirements of prostate seed implantation.
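The spectral-subtraction front end can be illustrated compactly: estimate the noise magnitude spectrum from a lead-in segment assumed to contain no speech, subtract it from every frame, and resynthesize. This is a generic sketch with assumed parameters; the paper's fusion with a BP neural network is omitted:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs, noise_seconds=0.5, floor=0.02, nperseg=512):
    """Denoise a voice-command signal by magnitude spectral subtraction."""
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)
    hop = nperseg // 2                               # default 50% overlap
    n_noise = max(1, int(noise_seconds * fs / hop))  # lead-in noise frames
    noise_mag = mag[:, :n_noise].mean(axis=1, keepdims=True)
    clean = np.maximum(mag - noise_mag, floor * mag) # keep a spectral floor
    _, y = istft(clean * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return y
```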
Affiliation(s)
- Xinran Zhang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
- Yongde Zhang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
- Foshan Baikang Robot Technology Co., Ltd., Foshan 528237, China
- Jianzhi Yang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
- Haiyan Du
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
10
Ai L, Liu Y, Armand M, Kheradmand A, Martin-Gomez A. On the Fly Robotic-Assisted Medical Instrument Planning and Execution Using Mixed Reality. 2024 IEEE International Conference on Robotics and Automation (ICRA) 2024:13192-13199. [DOI: 10.1109/icra57147.2024.10611515]
Affiliation(s)
- Letian Ai
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
- Yihao Liu
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
- Mehran Armand
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
- Amir Kheradmand
- Johns Hopkins School of Medicine, Department of Neurology and Department of Neuroscience, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
11
Schmidt A, Mohareri O, DiMaio S, Yip MC, Salcudean SE. Tracking and mapping in medical computer vision: A review. Med Image Anal 2024; 94:103131. [PMID: 38442528] [DOI: 10.1016/j.media.2024.103131]
Abstract
As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing algorithms to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. Next, we review datasets provided in the field and the clinical needs that motivate their design. We then delve into the algorithmic side and summarize recent developments; this summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We maintain focus on algorithms for deformable environments while also reviewing the essential building blocks of rigid tracking and mapping, since there is a large amount of crossover in methods. With the field summarized, we discuss the current state of tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation.
Affiliation(s)
- Adam Schmidt
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
- Omid Mohareri
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Simon DiMaio
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Michael C Yip
- Department of Electrical and Computer Engineering, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
12
Taha BA, Addie AJ, Kadhim AC, Azzahran AS, Haider AJ, Chaudhary V, Arsad N. Photonics-powered augmented reality skin electronics for proactive healthcare: multifaceted opportunities. Mikrochim Acta 2024; 191:250. [PMID: 38587660] [DOI: 10.1007/s00604-024-06314-3]
Abstract
Rapid technological advancements have created opportunities for new solutions in various industries, including healthcare. One exciting new direction in this field of innovation is the combination of skin-based technologies and augmented reality (AR). These dermatological devices allow for the continuous and non-invasive measurement of vital signs and biomarkers, enabling the real-time diagnosis of anomalies, with applications in telemedicine, oncology, dermatology, and early diagnostics. Despite its many potential benefits, there is a substantial information gap regarding the use of flexible photonics in conjunction with augmented reality for medical purposes. This review explores the current state of dermal augmented reality and flexible optics in skin-conforming sensing platforms, examining the obstacles faced thus far, including technical hurdles, demanding clinical validation standards, and problems with user acceptance. Our main areas of interest are sensing capabilities, chiroptical properties, and health platform applications, such as optogenetic pixels, spectroscopic imagers, and optical biosensors. Skin-conforming chiroptical sensing with polarized light enables thorough physical examination with these augmented reality devices, supporting applications such as diabetes tracking, skin cancer diagnosis, and preventive cardiovascular care, notably blood pressure screening. Case studies demonstrate how early prevention and emergency detection can be accomplished. Finally, the review addresses real-world obstacles that hinder fully realizing these materials' extraordinary potential in advancing proactive and preventative personalized medicine, including technical constraints, clinical validation gaps, and barriers to widespread adoption.
Affiliation(s)
- Bakr Ahmed Taha
- Photonics Technology Lab, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, UKM, 43600, Bangi, Malaysia
- Ali J Addie
- Center of Advanced Materials/Directorate of Materials Research/Ministry of Science and Technology, Baghdad, Iraq
- Ahmed C Kadhim
- Communication Engineering Department, University of Technology, Baghdad, Iraq
- Ahmad S Azzahran
- Electrical Engineering Department, Northern Border University, Arar, Kingdom of Saudi Arabia
- Adawiya J Haider
- Applied Sciences Department/Laser Science and Technology Branch, University of Technology, Baghdad, Iraq
- Vishal Chaudhary
- Research Cell & Department of Physics, Bhagini Nivedita College, University of Delhi, New Delhi, 110045, India
- Norhana Arsad
- Photonics Technology Lab, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, UKM, 43600, Bangi, Malaysia
13
Hofman J, De Backer P, Manghi I, Simoens J, De Groote R, Van Den Bossche H, D'Hondt M, Oosterlinck T, Lippens J, Van Praet C, Ferraguti F, Debbaut C, Li Z, Kutter O, Mottrie A, Decaestecker K. First-in-human real-time AI-assisted instrument deocclusion during augmented reality robotic surgery. Healthc Technol Lett 2024; 11:33-39. [PMID: 38638494] [PMCID: PMC11022222] [DOI: 10.1049/htl2.12056]
Abstract
The integration of Augmented Reality (AR) into daily surgical practice is held back by the challenge of correctly registering pre-operative data. This includes intelligent 3D model superposition while simultaneously handling real and virtual occlusions caused by the AR overlay. Occlusions can negatively impact surgical safety and as such deteriorate rather than improve surgical care. Robotic surgery is particularly suited to tackle these integration challenges in a stepwise approach, as the robotic console allows different inputs to be displayed in parallel to the surgeon. Nevertheless, real-time de-occlusion requires extensive computational resources, which further complicates clinical integration. This work tackles the problem of instrument occlusion and presents, to the authors' best knowledge, the first-in-human on-edge deployment of a real-time binary segmentation pipeline during three robot-assisted surgeries: partial nephrectomy, migrated endovascular stent removal, and liver metastasectomy. To this end, a state-of-the-art real-time segmentation and 3D model pipeline was implemented and presented to the surgeon during live surgery. The pipeline allows real-time binary segmentation of 37 non-organic surgical items, so that they are never occluded by the AR overlay. The application features real-time manual 3D model manipulation for correct soft-tissue alignment. The proposed pipeline can contribute towards surgical safety, ergonomics, and the acceptance of AR in minimally invasive surgery.
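At render time, instrument de-occlusion reduces to mask-aware compositing: wherever the binary segmentation marks an instrument, the real pixels stay in front of the AR overlay. A simplified sketch of that final step (an illustration, not the clinical pipeline):

```python
import numpy as np

def composite_with_deocclusion(frame, overlay, instrument_mask, alpha=0.5):
    """frame, overlay: (H, W, 3) float arrays; instrument_mask: (H, W) bool."""
    blended = (1.0 - alpha) * frame + alpha * overlay  # naive AR blending
    keep_real = instrument_mask[..., None]             # instrument is in front
    return np.where(keep_real, frame, blended)
```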
Affiliation(s)
- Pieter De Backer
- ORSI Academy, Melle, Belgium
- Faculty of Medicine and Health Sciences, Department of Human Structure and Repair, Ghent University, Ghent, Belgium
- IBiTech-Biommeda, Faculty of Engineering and Architecture, and CRIG, Ghent University, Ghent, Belgium
- Department of Urology, Ghent University Hospital, Ghent, Belgium
- Ilaria Manghi
- Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Modena, Italy
- Ruben De Groote
- ORSI Academy, Melle, Belgium
- Department of Urology, OLV Hospital, Aalst, Belgium
- Mathieu D'Hondt
- Department of Digestive and Hepatobiliary/Pancreatic Surgery, AZ Groeninge Hospital, Kortrijk, Belgium
- Julie Lippens
- Faculty of Medicine and Health Sciences, Department of Human Structure and Repair, Ghent University, Ghent, Belgium
- Federica Ferraguti
- Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Modena, Italy
- Charlotte Debbaut
- IBiTech-Biommeda, Faculty of Engineering and Architecture, and CRIG, Ghent University, Ghent, Belgium
- Alexandre Mottrie
- ORSI Academy, Melle, Belgium
- Department of Urology, OLV Hospital, Aalst, Belgium
- Karel Decaestecker
- Faculty of Medicine and Health Sciences, Department of Human Structure and Repair, Ghent University, Ghent, Belgium
- Department of Urology, AZ Maria Middelares Hospital, Ghent, Belgium
14
Chatterjee S, Das S, Ganguly K, Mandal D. Advancements in robotic surgery: innovations, challenges and future prospects. J Robot Surg 2024; 18:28. [PMID: 38231455] [DOI: 10.1007/s11701-023-01801-w]
Abstract
The use of robots has revolutionized healthcare, wherein further innovations have led to improved precision and accuracy. Conceived in the late 1960s, robot-assisted surgeries have evolved to become an integral part of various surgical specialties. Modern robotic surgical systems are equipped with highly dexterous arms and miniaturized instruments that reduce tremors and enable delicate maneuvers. Implementation of advanced materials and designs along with the integration of imaging and visualization technologies have enhanced surgical accuracy and made robots safer and more adaptable to various procedures. Further, the haptic feedback system allows surgeons to determine the consistency of the tissues they are operating upon, without physical contact, thereby preventing injuries due to the application of excess force. With the implementation of teleoperation, surgeons can now overcome geographical limitations and provide specialized healthcare remotely. The use of artificial intelligence (AI) and machine learning (ML) aids in surgical decision-making by improving the recognition of minute and complex anatomical structures. All these advancements have led to faster recovery and fewer complications in patients. However, the substantial cost of robotic systems, their maintenance, the size of the systems and proper surgeon training pose major challenges. Nevertheless, with future advancements such as AI-driven automation, nanorobots, microscopic incision surgeries, semi-automated telerobotic systems, and the impact of 5G connectivity on remote surgery, the growth curve of robotic surgery points to innovation and stands as a testament to the persistent pursuit of progress in healthcare.
Affiliation(s)
- Swastika Chatterjee
- Department of Biomedical Engineering, JIS College of Engineering, Kalyani, West Bengal, India
- Karabi Ganguly
- Department of Biomedical Engineering, JIS College of Engineering, Kalyani, West Bengal, India
- Dibyendu Mandal
- Department of Biomedical Engineering, JIS College of Engineering, Kalyani, West Bengal, India
15
Chen J, Fu Y, Lu W, Pan Y. Augmented reality-enabled human-robot collaboration to balance construction waste sorting efficiency and occupational safety and health. J Environ Manage 2023; 348:119341. [PMID: 37852080] [DOI: 10.1016/j.jenvman.2023.119341]
Abstract
Construction waste sorting (CWS) is highly recommended as a key step in construction waste management. However, current CWS involves manual hand-picking, which poses significant threats to workers' occupational safety and health (OSH). Robotic sorting promises to change the situation by adopting modern artificial intelligence and automation technologies. In practice, however, it is usually challenging for robots to do an efficient job (e.g., measured by quickness and accuracy) owing to the difficulty of precisely recognizing the composition of the mixed and heterogeneous waste stream. Leveraging augmented reality (AR) as a communication interface, this research aims to develop a human-robot collaboration (HRC) approach to address the difficult balance between CWS efficiency and OSH. Firstly, a model for human-robot collaborative sorting using AR is established. Then, a prototype for the AR-enabled collaborative sorting system is developed and evaluated. The experimental results demonstrate that the proposed AR-enabled HRC method can improve the accuracy rate of CWS by 10% and 15% for sorting isolated waste and obscured waste, respectively, when compared to the method without human involvement. Interview results indicate a significant improvement in OSH, especially the reduction of contamination risks and machinery risks. The research lays out a human-robot collaborative paradigm for productive and safe CWS via an immersive and interactive interface like AR.
Affiliation(s)
- Junjie Chen
- Department of Real Estate and Construction, The University of Hong Kong, Pokfulam Road, Hong Kong, China
- Yonglin Fu
- Department of Real Estate and Construction, The University of Hong Kong, Pokfulam Road, Hong Kong, China
- Weisheng Lu
- Department of Real Estate and Construction, The University of Hong Kong, Pokfulam Road, Hong Kong, China
- Yipeng Pan
- Department of Real Estate and Construction, The University of Hong Kong, Pokfulam Road, Hong Kong, China
16
Casas-Yrurzum S, Gimeno J, Casanova-Salas P, García-Pereira I, García del Olmo E, Salvador A, Guijarro R, Zaragoza C, Fernández M. A new mixed reality tool for training in minimally invasive robotic-assisted surgery. Health Inf Sci Syst 2023; 11:34. [PMID: 37545486] [PMCID: PMC10397172] [DOI: 10.1007/s13755-023-00238-7]
Abstract
Robotic-assisted surgery (RAS) is playing an increasing role in surgical practice. Therefore, it is of the utmost importance to introduce this paradigm into surgical training programs. However, the steep learning curve of RAS remains a problem that hinders the development and widespread use of this surgical paradigm. For this reason, it is important to be able to train surgeons in the use of RAS procedures. RAS involves distinctive features that make its learning different from other minimally invasive surgical procedures. One of these features is that surgeons operate using a stereoscopic console; therefore, RAS training must also be performed stereoscopically. This article presents a mixed-reality (MR) tool for the stereoscopic visualization, annotation and collaborative display of RAS surgical procedures. The tool is an MR application because it can display real stereoscopic content and augment it with virtual elements (annotations) properly registered in 3D and tracked over time. The tool allows surgical procedures, teachers (experts) and students (trainees) to be registered, so that a teacher can share a set of videos with their students, annotate them with virtual information and use a shared virtual pointer with the students. The students can visualize the videos within a web environment using their personal mobile phones or a desktop stereo system. The tool was assessed by a group of 15 surgeons during a robotic-surgery master's course. The results show that surgeons consider that this tool can be very useful in RAS training.
Affiliation(s)
- Sergio Casas-Yrurzum
- Institute of Robotics and Information Technology and Communication (IRTIC), University of Valencia, Valencia, Spain
- Jesús Gimeno
- Institute of Robotics and Information Technology and Communication (IRTIC), University of Valencia, Valencia, Spain
- Pablo Casanova-Salas
- Institute of Robotics and Information Technology and Communication (IRTIC), University of Valencia, Valencia, Spain
- Inma García-Pereira
- Institute of Robotics and Information Technology and Communication (IRTIC), University of Valencia, Valencia, Spain
- Eva García del Olmo
- General and Gastrointestinal Surgery, Fundación Investigación Consorcio Hospital General Universitario de Valencia (FIHGUV), Valencia, Spain
- Antonio Salvador
- General and Gastrointestinal Surgery, Fundación Investigación Consorcio Hospital General Universitario de Valencia (FIHGUV), Valencia, Spain
- Ricardo Guijarro
- Thoracic Surgery, Fundación Investigación Consorcio Hospital General Universitario de Valencia (FIHGUV), Valencia, Spain
- Cristóbal Zaragoza
- General and Gastrointestinal Surgery, Fundación Investigación Consorcio Hospital General Universitario de Valencia (FIHGUV), Valencia, Spain
- Marcos Fernández
- Institute of Robotics and Information Technology and Communication (IRTIC), University of Valencia, Valencia, Spain
17
Seetohul J, Shafiee M, Sirlantzis K. Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions. Sensors (Basel) 2023; 23:6202. [PMID: 37448050] [DOI: 10.3390/s23136202]
Abstract
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improving access to minimally invasive surgery. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
Affiliation(s)
- Jenna Seetohul
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- Mahmood Shafiee
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- School of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
- Konstantinos Sirlantzis
- School of Engineering, Technology and Design, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- Intelligent Interactions Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
18
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637] [DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications; nevertheless, only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
19
Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023; 68. [PMID: 36580681] [DOI: 10.1088/1361-6560/acaf23]
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. AR visualization is divided into two categories, in situ visualization and non-in situ visualization, and the rendered content varies widely. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in surgical fields. However, most AR applications have been evaluated through model experiments and animal experiments, with relatively few clinical experiments, indicating that current AR navigation methods are still at an early stage of development. Finally, we summarize the contributions and challenges of AR in surgical fields, as well as future development trends. Although AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
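For in situ visualization, a registered pre-operative model is projected into the camera view. A small illustrative sketch of that projection step follows; the intrinsic matrix is an assumed example, and real systems additionally handle lens distortion and depth testing:

```python
import numpy as np

def project_points(model_pts, R, t, K):
    """model_pts: (N, 3); R, t: registration transform; K: 3x3 intrinsics."""
    cam = (R @ model_pts.T).T + t        # model space -> camera space
    uv = (K @ cam.T).T                   # pinhole perspective projection
    return uv[:, :2] / uv[:, 2:3]        # normalize to pixel coordinates

K = np.array([[800.0, 0.0, 320.0],       # assumed focal lengths and center
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```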
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
20
Avrumova F, Lebl DR. Augmented reality for minimally invasive spinal surgery. Front Surg 2023; 9:1086988. [PMID: 36776471] [PMCID: PMC9914175] [DOI: 10.3389/fsurg.2022.1086988]
Abstract
BACKGROUND Augmented reality (AR) is an emerging technology that can overlay computer graphics onto the real world and enhance visual feedback from information systems. Within the past several decades, innovations related to AR have been integrated into our daily lives; however, its application in medicine, specifically in minimally invasive spine surgery (MISS), may be most important to understand. AR navigation provides auditory and haptic feedback, which can further enhance surgeons' capabilities and improve safety. PURPOSE The purpose of this article is to address previous and current applications of AR, AR in MISS, limitations of today's technology, and future areas of innovation. METHODS A literature review of AR technology applications in previous and current generations was conducted. RESULTS AR systems have been implemented for treatments related to spinal surgeries in recent years, and AR may be an alternative to current approaches such as traditional navigation, robotically assisted navigation, fluoroscopic guidance, and free-hand techniques. As AR is capable of projecting patient anatomy directly onto the surgical field, it can eliminate concerns about surgeon attention shifting from the surgical field to remote navigation screens, line-of-sight interruption, and cumulative radiation exposure as the demand for MISS increases. CONCLUSION AR is a novel technology that can improve spinal surgery, and addressing today's limitations will likely shape future technology.
Affiliation(s)
- Darren R. Lebl
- Department of Spine Surgery, Hospital for Special Surgery, New York, NY, United States
21
Maldonado-Romo J, Maldonado-Romo A, Aldape-Pérez M. Path Generator with Unpaired Samples Employing Generative Adversarial Networks. Sensors (Basel) 2022; 22:9411. [PMID: 36502113] [PMCID: PMC9738659] [DOI: 10.3390/s22239411]
Abstract
Interactive technologies such as augmented reality have grown in popularity, but specialized sensors and high computing power are needed to perceive and analyze the environment and deliver an immersive experience in real time, and such implementations are costly. Machine learning has helped create lower-cost alternatives, but these tend to be limited to particular problems because creating datasets is complicated. To address this limitation, this work suggests an alternative strategy for dealing with limited information: unpaired samples from known and unknown environments are used to generate a path in real time on embedded devices such as smartphones. The strategy generates a path that avoids collisions between virtual elements and physical objects. The authors propose an architecture for creating such a path from imperfect knowledge. Additionally, an augmented reality experience is used to display the generated path, and users tested the proposal to evaluate its performance. Finally, the primary contribution is the approximation of a path, learned from a known environment, by using an unpaired dataset.
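The adversarial training behind such a path generator can be sketched briefly: a generator maps an environment image to a path map, while a discriminator trained on unpaired real path maps pushes the outputs toward plausibility. The shapes, networks, and losses below are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64), nn.Sigmoid())   # image -> path map
D = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                        # real vs. generated

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(env_images, unpaired_real_paths):
    fake = G(env_images)
    # Discriminator: separate unpaired real path maps from generated ones.
    d_loss = (bce(D(unpaired_real_paths), torch.ones(len(unpaired_real_paths), 1))
              + bce(D(fake.detach()), torch.zeros(len(env_images), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: produce path maps the discriminator accepts as real.
    g_loss = bce(D(fake), torch.ones(len(env_images), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```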
Affiliation(s)
- Javier Maldonado-Romo
- Institute of Advanced Materials and Sustainable Manufacturing, Tecnologico de Monterrey, Mexico City 14380, Mexico
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Unidad Profesional Adolfo López Mateos, Juan de Dios Bátiz s/n esq. Miguel Othón de Mendizábal, Mexico City 07700, Mexico
- Alberto Maldonado-Romo
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Unidad Profesional Adolfo López Mateos, Juan de Dios Bátiz s/n esq. Miguel Othón de Mendizábal, Mexico City 07700, Mexico
- Mario Aldape-Pérez
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Unidad Profesional Adolfo López Mateos, Juan de Dios Bátiz s/n esq. Miguel Othón de Mendizábal, Mexico City 07700, Mexico
22
Long Y, Li C, Dou Q. Robotic surgery remote mentoring via AR with 3D scene streaming and hand interaction. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2145498] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Yonghao Long
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Chengkun Li
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
23
Mikamo M, Furukawa R, Oka S, Kotachi T, Okamoto Y, Tanaka S, Sagawa R, Kawasaki H. 3D endoscope system with AR display superimposing dense and wide-angle-of-view 3D points obtained by using micro pattern projector. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:881-885. [PMID: 36085656 DOI: 10.1109/embc48229.2022.9871060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
In recent years, augmented reality (AR) technologies have become widespread for supporting various kinds of tasks by superimposing useful information on the user's view of the real environment. In endoscopic diagnosis, AR systems can help present information to endoscopists who have their hands full. In this paper, we propose a system that reconstructs 3D shapes from images captured by the endoscope and superimposes them onto the field of view. The superimposed view allows the doctor to keep operating the endoscope while observing the patient's internal anatomy with additional information. The proposed system is composed of a reconstruction module and a display module. The reconstruction module acquires 3D shapes using an active stereo method; in particular, we propose a novel projection pattern that can reconstruct wide areas of the endoscopic view. The display module shows the reconstructed 3D shape superimposed on the field of view. In the experiments, we show that wide-range, dense 3D reconstruction is possible using the new projection patterns. In addition, we confirmed the usefulness of the AR system through interviews with medical doctors.
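The geometric core of any active-stereo pipeline is triangulating a 3D point from a camera pixel and the projector pixel that produced the observed pattern feature. Below is a minimal OpenCV sketch of that step, assuming calibration and pattern decoding are already done; all numeric values are toy placeholders, and the paper's micro-pattern design itself is not reproduced here.

```python
import cv2
import numpy as np

# Assumed (hypothetical) intrinsics for the endoscopic camera and the
# micro pattern projector, plus the projector pose relative to the camera.
K_cam = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
K_proj = np.array([[600., 0., 160.], [0., 600., 120.], [0., 0., 1.]])
R = np.eye(3)
t = np.array([[10.], [0.], [0.]])   # toy 10 mm baseline

# Projection matrices P = K [R | t]; the camera sits at the origin.
P_cam = K_cam @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_proj = K_proj @ np.hstack([R, t])

# Decoded correspondences: camera pixel <-> emitting projector pixel, (2, N).
pts_cam = np.array([[320., 240.], [350., 260.]]).T
pts_proj = np.array([[160., 120.], [170., 130.]]).T

X_h = cv2.triangulatePoints(P_cam, P_proj, pts_cam, pts_proj)
X = (X_h[:3] / X_h[3]).T   # homogeneous -> 3D points, one per correspondence
print(X)                   # these points feed the AR display module
```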
24
Sun X, Zou Y, Wang S, Su H, Guan B. A parallel network utilizing local features and global representations for segmentation of surgical instruments. Int J Comput Assist Radiol Surg 2022; 17:1903-1913. [PMID: 35680692 DOI: 10.1007/s11548-022-02687-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 05/19/2022] [Indexed: 11/30/2022]
Abstract
PURPOSE Automatic image segmentation of surgical instruments is a fundamental task in robot-assisted minimally invasive surgery that greatly improves surgeons' context awareness during the operation. A novel method based on Mask R-CNN is proposed in this paper to realize accurate instance segmentation of surgical instruments. METHODS A novel feature extraction backbone is built, which extracts local features through a convolutional neural network branch and global representations through a Swin-Transformer branch. Moreover, skip fusions are applied in the backbone to fuse both feature types and improve the generalization ability of the network. RESULTS The proposed method is evaluated on the MICCAI 2017 EndoVis Challenge dataset with three segmentation tasks and shows state-of-the-art performance, with an mIoU of 0.5873 in type segmentation and 0.7408 in part segmentation. Furthermore, ablation studies show that the proposed backbone contributes at least a 17% improvement in mIoU. CONCLUSION The promising results demonstrate that our method effectively extracts global representations as well as local features in the segmentation of surgical instruments and improves segmentation accuracy. With the proposed backbone, the network can segment the contours of instrument end tips more precisely. This method can provide more accurate data for localization and pose estimation of surgical instruments and contribute further to the automation of robot-assisted minimally invasive surgery.
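The parallel local/global idea is easy to see in code: one convolutional branch preserves local detail, one transformer branch captures image-wide context, and a 1x1 convolution fuses the two. The toy PyTorch module below illustrates the general pattern with made-up sizes and a vanilla transformer encoder rather than the paper's Swin-based branch.

```python
import torch
from torch import nn

class ParallelBackbone(nn.Module):
    """Local CNN branch + global attention branch, fused per pixel."""
    def __init__(self, in_ch=3, dim=64, patch=8):
        super().__init__()
        self.cnn = nn.Sequential(                       # local features
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        self.patch_embed = nn.Conv2d(in_ch, dim, patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)  # global context
        self.fuse = nn.Conv2d(2 * dim, dim, 1)          # skip fusion of both branches

    def forward(self, x):
        local = self.cnn(x)                             # (N, dim, H, W)
        tokens = self.patch_embed(x)                    # (N, dim, H/p, W/p)
        n, d, h, w = tokens.shape
        glob = self.transformer(tokens.flatten(2).transpose(1, 2))
        glob = glob.transpose(1, 2).reshape(n, d, h, w)
        glob = nn.functional.interpolate(glob, size=local.shape[2:])
        return self.fuse(torch.cat([local, glob], dim=1))

feat = ParallelBackbone()(torch.randn(1, 3, 64, 64))
print(feat.shape)  # torch.Size([1, 64, 64, 64]); feeds a segmentation head
```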
Affiliation(s)
- Xinan Sun
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, 135 Yaguan Road, Tianjin, 300350, China
- School of Mechanical Engineering, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
- Yuelin Zou
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, 135 Yaguan Road, Tianjin, 300350, China
- School of Mechanical Engineering, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
- Shuxin Wang
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, 135 Yaguan Road, Tianjin, 300350, China
- School of Mechanical Engineering, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
- He Su
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, 135 Yaguan Road, Tianjin, 300350, China
- School of Mechanical Engineering, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
- Bo Guan
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, 135 Yaguan Road, Tianjin, 300350, China
- School of Mechanical Engineering, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
25
Robotically Assisted Surgery in Children—A Perspective. Children (Basel) 2022; 9:839. [PMID: 35740776 PMCID: PMC9221697 DOI: 10.3390/children9060839] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 05/16/2022] [Accepted: 06/03/2022] [Indexed: 11/26/2022]
Abstract
The introduction of robotically assisted surgery was a milestone for minimally invasive surgery in the 21st century. Currently, there are two CE-approved robotically assisted surgery systems for use and development in pediatrics. Features such as tremor filtration and optimal visualization can have enormous benefits for procedures in small bodies, and robotically assisted surgery in children might have advantages over laparoscopic or open approaches. This review focuses on the research literature on robotically assisted surgery published within the past decade. A literature search was conducted to identify studies comparing robotically assisted surgery with laparoscopic and open approaches. Applications in urology were the most cited, and three other fields (gynecology, general surgery, and "others") were also identified. In total, 36 of the publications reviewed suggested that robotically assisted surgery is a good alternative for pediatric procedures. After several years of experience with this surgery, a strong learning curve was evident in the literature. However, some authors have highlighted limitations, such as high cost and a limited spectrum of small-sized instruments. The recent introduction of reusable 3 mm instruments to the market might help to overcome these limitations. In the future, a broader range of applications for robotically assisted surgery in selected pediatric procedures can be anticipated, especially as surgical skills continue to improve and further system innovations emerge.
26
Lee JJ, Klepcha M, Wong M, Dang PN, Sadrameli SS, Britz GW. The First Pilot Study of an Interactive, 360° Augmented Reality Visualization Platform for Neurosurgical Patient Education: A Case Series. Oper Neurosurg (Hagerstown) 2022; 23:53-59. [DOI: 10.1227/ons.0000000000000186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 01/09/2022] [Indexed: 11/19/2022] Open
27
Liu S, Fan J, Ai D, Song H, Fu T, Wang Y, Yang J. Feature matching for texture-less endoscopy images via superpixel vector field consistency. Biomed Opt Express 2022; 13:2247-2265. [PMID: 35519251 PMCID: PMC9045917 DOI: 10.1364/boe.450259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 01/05/2022] [Accepted: 01/23/2022] [Indexed: 06/14/2023]
Abstract
Feature matching is an important technique for obtaining the surface morphology of soft tissues in intraoperative endoscopy images. Extracting features from clinical endoscopy images is difficult, especially for texture-less images, and the scarcity of surface detail makes the problem more challenging. We propose an adaptive gradient-preserving method to improve the visual features of texture-less images. For feature matching, we first construct a spatial motion field using superpixel blocks and estimate its information entropy with a motion-consistency algorithm to obtain an initial screening of outlier features. Second, we extend the superpixel spatial motion field to a vector field and constrain it with vector features to optimize the confidence of the initial matching set. Evaluations were performed on public and undisclosed datasets. Our method increased the number of extracted feature points by an order of magnitude over the original images across three feature-extraction methods. On the public dataset, accuracy and F1-score increased to 92.6% and 91.5%, and the matching score improved by 1.92%. On the undisclosed dataset, the integrity of the reconstructed surface improved from 30% to 85%. We also present surface reconstruction results for images of different sizes to validate the robustness of our method, which shows high-quality feature matching. Overall, the experimental results prove the effectiveness of the proposed matching method and demonstrate its capability to extract sufficient visual feature points and generate reliable feature matches for 3D reconstruction and meaningful applications in clinical practice.
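The outlier-screening step, testing each putative match against the dominant motion of its neighbourhood, can be sketched compactly. The OpenCV/NumPy example below uses ORB matches and a neighbourhood median test as a simplified stand-in for the superpixel vector-field formulation, assuming the (enhanced) frames yield at least some detectable keypoints.

```python
import cv2
import numpy as np

def consistent_matches(img1, img2, radius=60.0, tol=15.0):
    """Keep matches whose displacement agrees with nearby matches."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)

    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    flow = p2 - p1                        # per-match motion vector

    keep = []
    for i, m in enumerate(matches):
        near = np.linalg.norm(p1 - p1[i], axis=1) < radius
        if near.sum() < 3:
            continue                      # too isolated to judge
        median_flow = np.median(flow[near], axis=0)
        if np.linalg.norm(flow[i] - median_flow) < tol:
            keep.append(m)                # agrees with the local motion field
    return keep
```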
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
28
Birlo M, Edwards PJE, Clarkson M, Stoyanov D. Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review. Med Image Anal 2022; 77:102361. [PMID: 35168103 PMCID: PMC10466024 DOI: 10.1016/j.media.2022.102361] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Revised: 11/17/2021] [Accepted: 01/10/2022] [Indexed: 12/11/2022]
Abstract
This article presents a systematic review of optical see-through head-mounted display (OST-HMD) usage in augmented reality (AR) surgery applications from 2013 to 2020. Articles were categorised by: OST-HMD device, surgical speciality, surgical application context, visualisation content, experimental design and evaluation, accuracy, and human factors of human-computer interaction. 91 articles fulfilled all inclusion criteria. Some clear trends emerge. The Microsoft HoloLens increasingly dominates the field, with orthopaedic surgery being the most popular application (28.6%). By far the most common surgical context is surgical guidance (n=58), and segmented preoperative models dominate visualisation (n=40). Experiments mainly involve phantoms (n=43) or system setup (n=21), with patient case studies ranking third (n=19), reflecting the comparative infancy of the field. Experiments cover issues from registration to perception, with very different accuracy results. Human factors emerge as significant to OST-HMD utility. Some factors are addressed by the systems proposed, such as attention shift away from the surgical site and mental mapping of 2D images to 3D patient anatomy. Other persistent human factors remain or are caused by OST-HMD solutions, including ease of use, comfort, and spatial perception issues. The significant upward trend in published articles is clear, but such devices are not yet established in the operating room, and clinical studies showing benefit are lacking. A focused effort addressing technical registration and perceptual factors in the lab, coupled with designs that incorporate human-factors considerations to solve clear clinical problems, should ensure that the significant current research efforts succeed.
Affiliation(s)
- Manuel Birlo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK.
- P J Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Matthew Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
29
Fletcher J. Methods and Applications of 3D Patient-Specific Virtual Reconstructions in Surgery. Adv Exp Med Biol 2022; 1356:53-71. [PMID: 35146617 DOI: 10.1007/978-3-030-87779-8_3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
3D modelling has been highlighted as one of the key digital technologies likely to impact surgical practice in the next decade. 3D virtual models are reconstructed from traditional 2D imaging data through either direct volume rendering or indirect surface rendering. One of the principal benefits of 3D visualisation in surgery is improved anatomical understanding, particularly in cases involving highly variable complex structures or where precision is required. Workflows begin with image segmentation, a key step in 3D reconstruction defined as the process of identifying and delineating structures of interest. Fully automated segmentation will be essential if 3D visualisation is to be feasibly incorporated into routine clinical workflows; however, most algorithmic solutions remain incomplete. 3D models must undergo a range of processing steps prior to visualisation, typically including smoothing, decimation, and colourization. Models used for illustrative purposes may undergo more advanced processing such as UV unwrapping, retopology, and PBR texture mapping. Clinical applications are wide-ranging and vary significantly between specialities. Beyond pure anatomical visualisation, 3D modelling offers new methods of interacting with imaging data, enabling patient-specific simulation and rehearsal and Computer-Aided Design (CAD) of custom implants and cutting guides, and it serves as the substrate for augmented reality (AR)-enhanced navigation. 3D visualisation may enable faster, safer surgery with reduced errors and complications, ultimately resulting in improved patient outcomes. However, its relative effectiveness remains poorly understood. Future research is needed not only to define the ideal application, specific user, and optimal interface/platform for interacting with models, but also to identify means by which we can systematically evaluate the efficacy of 3D modelling in surgery.
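The surface-rendering workflow described here (segmentation mask, mesh extraction, smoothing, decimation) maps directly onto common open-source tools. Below is a minimal sketch with scikit-image and Open3D, assuming a binary segmentation volume is already available; a toy sphere stands in for it.

```python
import numpy as np
import open3d as o3d
from skimage import measure

# Stand-in for a binary CT/MRI segmentation with isotropic voxels.
z, y, x = np.mgrid[:64, :64, :64]
seg = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(np.uint8)

# 1) Surface extraction: marching cubes on the mask.
verts, faces, _, _ = measure.marching_cubes(seg, level=0.5)
mesh = o3d.geometry.TriangleMesh(
    o3d.utility.Vector3dVector(verts),
    o3d.utility.Vector3iVector(faces),
)

# 2) Smoothing: Taubin filtering avoids the shrinkage of plain Laplacian.
mesh = mesh.filter_smooth_taubin(number_of_iterations=20)

# 3) Decimation: cut the triangle count for interactive AR/VR display.
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
mesh.compute_vertex_normals()   # required for shaded display/colourization
print(len(mesh.triangles), "triangles after decimation")
```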
30
Duan XX, Wang YL, Dou WS, Kumar R, Saluja N. An Integrated Remote Control-Based Human-Robot Interface for Education Application. Int J Inf Technol Web Eng 2022. [DOI: 10.4018/ijitwe.306916] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
Abstract
Portable robot arms with mobile user interfaces are increasingly used in the modern world, and robot-teaching applications proved valuable during the pandemic, but they remain challenging to implement because of the underlying mathematical formulation. This article applies the augmented reality (AR) concept to remote-control-based human-robot interaction over Bluetooth communication. The proposed framework incorporates several modules: a robot-arm controller, a regulator module, and a remote smartphone application for visualizing the robot-arm joint positions in real time. The approach fuses AR into a mobile application that permits continuous virtual coordination with the actual physical platform. Simulation yields effective outcomes, with 96.94% accuracy in the testing stage while maintaining error and loss values of 0.194 and 0.183, respectively. The proposed interface gives consistent results for teaching applications in real-time changing environments, outperforming existing methods with an accuracy improvement of 13.4
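The Bluetooth control path can be pictured as a serial link carrying small joint-angle packets. A sketch with pyserial, where the port name, baud rate, and JSON packet format are all illustrative assumptions rather than details from the paper:

```python
import json
import serial  # pyserial; a paired Bluetooth module shows up as a serial port

# Hypothetical HC-05-style module bound to /dev/rfcomm0.
link = serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1.0)

def send_joint_angles(angles_deg):
    """Send target joint angles (degrees) as one newline-terminated packet."""
    packet = json.dumps({"joints": [round(a, 1) for a in angles_deg]}) + "\n"
    link.write(packet.encode("ascii"))
    return link.readline().decode("ascii", errors="replace").strip()  # ack

# Example: drive a 4-DOF arm to a pose picked in the AR view.
print(send_joint_angles([90.0, 45.0, 30.0, 10.0]))
```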
Affiliation(s)
- Xue-xi Duan
- CangZhou Vocational Technology College, China
- Rajeev Kumar
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Nitin Saluja
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
31
Wendler T, van Leeuwen FWB, Navab N, van Oosterom MN. How molecular imaging will enable robotic precision surgery: The role of artificial intelligence, augmented reality, and navigation. Eur J Nucl Med Mol Imaging 2021; 48:4201-4224. [PMID: 34185136 PMCID: PMC8566413 DOI: 10.1007/s00259-021-05445-6] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Accepted: 06/01/2021] [Indexed: 02/08/2023]
Abstract
Molecular imaging is one of the pillars of precision surgery. Its applications range from early diagnostics to therapy planning, execution, and the accurate assessment of outcomes. In particular, molecular imaging solutions are in high demand in minimally invasive surgical strategies, such as the substantially increasing field of robotic surgery. This review aims at connecting the molecular imaging and nuclear medicine community to the rapidly expanding armory of surgical medical devices. Such devices entail technologies ranging from artificial intelligence and computer-aided visualization technologies (software) to innovative molecular imaging modalities and surgical navigation (hardware). We discuss technologies based on their role at different steps of the surgical workflow, i.e., from surgical decision and planning, over to target localization and excision guidance, all the way to (back table) surgical verification. This provides a glimpse of how innovations from the technology fields can realize an exciting future for the molecular imaging and surgery communities.
Affiliation(s)
- Thomas Wendler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Fijs W. B. van Leeuwen
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Orsi Academy, Melle, Belgium
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Chair for Computer Aided Medical Procedures, Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Matthias N. van Oosterom
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
32
Forte MP, Gourishetti R, Javot B, Engler T, Gomez ED, Kuchenbecker KJ. Design of interactive augmented reality functions for robotic surgery and evaluation in dry-lab lymphadenectomy. Int J Med Robot 2021; 18:e2351. [PMID: 34781414 DOI: 10.1002/rcs.2351] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 10/28/2021] [Accepted: 11/11/2021] [Indexed: 12/18/2022]
Abstract
BACKGROUND Augmented reality (AR) has been widely researched for use in healthcare. Prior AR for robot-assisted minimally invasive surgery has mainly focussed on superimposing preoperative three-dimensional (3D) images onto patient anatomy. This article presents alternative interactive AR tools for robotic surgery. METHODS We designed, built and evaluated four voice-controlled functions: viewing a live video of the operating room, viewing two-dimensional preoperative images, measuring 3D distances and warning about out-of-view instruments. This low-cost system was developed on a da Vinci Si, and it can be integrated into surgical robots equipped with a stereo camera and a stereo viewer. RESULTS Eight experienced surgeons performed dry-lab lymphadenectomies and reported that the functions improved the procedure. They particularly appreciated the possibility of accessing the patient's medical records on demand, measuring distances intraoperatively and interacting with the functions using voice commands. CONCLUSIONS The positive evaluations garnered by these alternative AR functions and interaction methods provide support for further exploration.
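Voice control of this kind reduces to mapping recognised keywords onto overlay functions. Below is a sketch using the SpeechRecognition library; the handler names and keywords are hypothetical placeholders for the four functions described above.

```python
import speech_recognition as sr

# Placeholder handlers for the four AR functions.
def show_room_video():   print("overlay: live operating-room video")
def show_preop_images(): print("overlay: 2D preoperative images")
def measure_distance():  print("overlay: 3D distance measurement")
def warn_instruments():  print("overlay: out-of-view instrument warning")

COMMANDS = {             # spoken keyword -> AR function
    "room": show_room_video,
    "images": show_preop_images,
    "measure": measure_distance,
    "instruments": warn_instruments,
}

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    audio = recognizer.listen(mic, phrase_time_limit=3)

try:
    text = recognizer.recognize_google(audio).lower()
    for keyword, handler in COMMANDS.items():
        if keyword in text:
            handler()    # toggle the matching overlay
            break
except sr.UnknownValueError:
    pass  # ignore unrecognised speech rather than trigger a wrong overlay
```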
Affiliation(s)
- Maria-Paola Forte
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Ravali Gourishetti
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Bernard Javot
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Ernest D Gomez
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Katherine J Kuchenbecker
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
33
Bassyouni Z, Elhajj IH. Augmented Reality Meets Artificial Intelligence in Robotics: A Systematic Review. Front Robot AI 2021; 8:724798. [PMID: 34631805 PMCID: PMC8493292 DOI: 10.3389/frobt.2021.724798] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Accepted: 08/30/2021] [Indexed: 11/30/2022] Open
Abstract
Recently, advancements in computational machinery have facilitated the integration of artificial intelligence (AI) into almost every field and industry. This fast-paced development in AI and sensing technologies has stirred an evolution in the realm of robotics. Concurrently, augmented reality (AR) applications are providing solutions to a myriad of robotics applications, such as demystifying robot motion intent and supporting intuitive control and feedback. In this paper, research papers combining the potentials of AI and AR in robotics over the last decade are presented and systematically reviewed. Four sources for data collection were utilized: Google Scholar, the Scopus database, the International Conference on Robotics and Automation 2020 proceedings, and the references and citations of all identified papers. A total of 29 papers were analyzed from two perspectives: a theme-based perspective showcasing the relation between AR and AI, and an application-based analysis highlighting how the robotics application was affected. These two sections are further categorized by type of robotics platform and type of robotics application, respectively. We analyze the work done and highlight some of the prevailing limitations hindering the field. Results also explain how AR and AI can be combined to solve the model-mismatch paradigm by creating a closed feedback loop between the user and the robot, forming a solid base for increasing the efficiency of robotic applications and enhancing the user's situational awareness, safety, and acceptance of AI robots. Our findings affirm the promising future for robust integration of AR and AI in numerous robotic applications.
Affiliation(s)
- Zahraa Bassyouni
- Vision and Robotics Lab, Department of Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
- Imad H Elhajj
- Vision and Robotics Lab, Department of Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
34
Hagmann K, Hellings-Kuß A, Klodmann J, Richter R, Stulp F, Leidner D. A Digital Twin Approach for Contextual Assistance for Surgeons During Surgical Robotics Training. Front Robot AI 2021; 8:735566. [PMID: 34621791 PMCID: PMC8491613 DOI: 10.3389/frobt.2021.735566] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 09/06/2021] [Indexed: 11/13/2022] Open
Abstract
Minimally invasive robotic surgery mitigates some of the disadvantages that minimally invasive surgery poses for the surgeon while preserving its advantages for the patient. Most commercially available robotic systems are telemanipulated with haptic input devices. Exploiting the haptic channel, e.g., by means of Virtual Fixtures, would allow individualized enhancement of surgical performance with contextual assistance. However, this remains an open field of research, as it is non-trivial to estimate the task context itself during a surgery. In contrast, surgical training allows one to abstract away from a real operation and thus makes it possible to model the task accurately. The presented approach exploits this fact to parameterize Virtual Fixtures during surgical training, proposing a Shared Control Parametrization Engine that retrieves procedural context information from a Digital Twin. This approach accelerates proficient use of the robotic system for novice surgeons by augmenting the surgeon's performance through haptic assistance, with the aim of reducing the required skill level and cognitive load of a surgeon performing minimally invasive robotic surgery. A pilot study was performed on the DLR MiroSurge system to evaluate the presented approach. Participants were tasked with two benchmark scenarios of surgical training, whose execution requires basic skills such as pick, place, and path following. The evaluation of the pilot study shows the promising trend that novice users profit from the haptic augmentation during training of certain tasks.
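Haptic assistance of this kind is commonly realised as a guidance virtual fixture: a saturated spring that pulls the instrument tip toward a reference path supplied by the task model. A minimal NumPy sketch, with a toy polyline path and gains standing in for the values the Digital Twin would provide per task phase:

```python
import numpy as np

def guidance_force(tip, path, k=50.0, f_max=5.0):
    """Spring-like virtual fixture pulling a tool tip toward a polyline path.

    tip:  (3,) instrument tip position [m]
    path: (N, 3) reference trajectory from the training task model
    k:    stiffness [N/m]; f_max caps the rendered force for safety
    """
    # Project the tip onto every path segment; keep the closest point.
    a, b = path[:-1], path[1:]
    ab = b - a
    t = np.clip(np.einsum("ij,ij->i", tip - a, ab) /
                np.einsum("ij,ij->i", ab, ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    nearest = closest[np.argmin(np.linalg.norm(closest - tip, axis=1))]

    force = k * (nearest - tip)      # pull back toward the path
    norm = np.linalg.norm(force)
    if norm > f_max:
        force *= f_max / norm        # saturate
    return force

path = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.1, 0.0]])
print(guidance_force(np.array([0.05, 0.02, 0.0]), path))
```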
Affiliation(s)
- Katharina Hagmann
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
- Anja Hellings-Kuß
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
- Julian Klodmann
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
- Rebecca Richter
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
- Freek Stulp
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
- Daniel Leidner
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
35
Augmented and virtual reality in spine surgery, current applications and future potentials. Spine J 2021; 21:1617-1625. [PMID: 33774210 DOI: 10.1016/j.spinee.2021.03.018] [Citation(s) in RCA: 91] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 03/17/2021] [Indexed: 02/03/2023]
Abstract
BACKGROUND CONTEXT The field of artificial intelligence (AI) is rapidly advancing, especially with recent improvements in deep learning (DL) techniques. Augmented reality (AR) and virtual reality (VR) are finding their place in healthcare, and spine surgery is no exception. The unique capabilities and advantages of AR and VR devices include their low cost, flexible integration with other technologies, user-friendly features, and their application in navigation systems, which make them beneficial across different aspects of spine surgery. Despite the use of AR for pedicle screw placement, targeted cervical foraminotomy, bone biopsy, osteotomy planning, and percutaneous intervention, the current applications of AR and VR in spine surgery remain limited. PURPOSE The primary goal of this study was to provide spine surgeons and clinical researchers with general information about the current applications, future potential, and accessibility of AR and VR systems in spine surgery. STUDY DESIGN/SETTING We reviewed the titles of more than 250 journal papers from Google Scholar and PubMed with the search terms: augmented reality, virtual reality, spine surgery, and orthopaedic, out of which 89 related papers were selected for abstract review. Finally, the full text of 67 papers was analyzed and reviewed. METHODS The papers were divided into four groups: technological papers, applications in surgery, applications in spine education and training, and general applications in orthopaedics. A team of two reviewers performed the paper reviews and a thorough web search to ensure that the most updated state of the art in each of the four groups was captured in the review. RESULTS In this review we discuss the current state of the art in AR and VR hardware, their preoperative applications, and their surgical applications in spine surgery. Finally, we discuss the future potential of AR and VR and their integration with AI, robotic surgery, gaming, and wearables. CONCLUSIONS AR and VR are promising technologies that will soon become part of the standard of care in spine surgery.
36
Campbell E, Phinyomark A, Scheme E. Deep Cross-User Models Reduce the Training Burden in Myoelectric Control. Front Neurosci 2021; 15:657958. [PMID: 34108858 PMCID: PMC8181426 DOI: 10.3389/fnins.2021.657958] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 04/27/2021] [Indexed: 12/03/2022] Open
Abstract
The effort, focus, and time required to collect data and train EMG pattern recognition systems are among the largest barriers to their widespread adoption in commercial applications. In addition to multiple repetitions of motions, including exemplars of confounding factors during the training protocol has been shown to be critical for robust machine learning models. This added training burden is prohibitive for most regular use cases, so cross-user models have been proposed that could leverage inter-repetition variability supplied by other users. Existing cross-user models have not yet achieved performance levels sufficient for commercialization and require users to closely adhere to a training protocol that is impractical without expert guidance. In this work, we extend a previously reported adaptive domain adversarial neural network (ADANN) to a cross-subject framework that requires very little training data from the end-user. We compare its performance to single-repetition within-user training and the previous state-of-the-art cross-subject technique, canonical correlation analysis (CCA). ADANN significantly outperformed CCA for both intact-limb (86.8–96.2%) and amputee (64.1–84.2%) populations. Moreover, the ADANN adaptation computation time was substantially lower than the time otherwise devoted to conducting a full within-subject training protocol. This study shows that cross-user models, enabled by deep-learned adaptations, may be a viable option for improved generalized pattern recognition-based myoelectric control.
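The adversarial ingredient in domain-adversarial models like ADANN is the gradient reversal layer: the subject discriminator trains normally, while the reversed gradient pushes the shared encoder toward subject-invariant EMG features. A minimal PyTorch sketch of that component (layer sizes and class counts are illustrative, not the paper's):

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; gradient scaled by -lambda going back."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

features = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # shared EMG encoder
gesture_head = nn.Linear(128, 8)    # motion classes
subject_head = nn.Linear(128, 10)   # which user produced the signal

def losses(emg, gesture, subject, lamb=0.5):
    ce = nn.functional.cross_entropy
    z = features(emg)
    loss_task = ce(gesture_head(z), gesture)
    # Reversed gradients make `features` hide subject identity while the
    # subject head keeps trying to detect it: an adversarial game.
    loss_domain = ce(subject_head(GradReverse.apply(z, lamb)), subject)
    return loss_task + loss_domain

emg = torch.randn(32, 64)           # a batch of EMG feature vectors
print(losses(emg, torch.randint(0, 8, (32,)), torch.randint(0, 10, (32,))))
```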
Affiliation(s)
- Evan Campbell
- Department of Electrical and Computer Engineering, Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
- Angkoon Phinyomark
- Department of Electrical and Computer Engineering, Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
- Erik Scheme
- Department of Electrical and Computer Engineering, Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
37
Gao Y, Zhao Y, Xie L, Zheng G. A Projector-Based Augmented Reality Navigation System for Computer-Assisted Surgery. Sensors (Basel) 2021; 21:2931. [PMID: 33922079 PMCID: PMC8122285 DOI: 10.3390/s21092931] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 04/10/2021] [Accepted: 04/19/2021] [Indexed: 12/31/2022]
Abstract
In the medical field, guidance to follow the surgical plan is crucial, and image overlay projection is one way to link the surgical plan with the patient. It realizes augmented reality (AR) by projecting a computer-generated image onto the surface of the target through a projector, visualizing additional information in the scene. By overlaying anatomical information or surgical plans on the surgical area, projection enhances the surgeon's understanding of the anatomical structure, intuitively visualizes the surgical target and the key structures of the operation, and avoids diverting the surgeon's sight between monitor and patient. However, projecting surgical navigation information onto the target precisely and efficiently remains a challenge. In this study, we propose a projector-based surgical navigation system. Through a gray-code-based calibration method, the projector can be calibrated with a camera and then integrated with an optical spatial locator, so that navigation information can be accurately projected onto the target area. We validated the projection accuracy of the system through back projection, with an average projection error of 3.37 pixels in the x direction and 1.51 pixels in the y direction, and through model projection, with an average position error of 1.03 ± 0.43 mm. Puncture experiments using the system achieved a correct rate of 99%, and we qualitatively analyzed the system's performance through a questionnaire. The results demonstrate the efficacy of our proposed AR system.
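Gray codes are used for projector-camera calibration because adjacent projector columns differ in exactly one bit, so a thresholding error near a stripe edge corrupts the decoded index by at most one column. A NumPy sketch of pattern generation and decoding (resolutions are arbitrary):

```python
import numpy as np

WIDTH, HEIGHT, BITS = 1024, 768, 10   # 2**10 bit planes cover 1024 columns

def gray_code_patterns():
    """One binary stripe image per bit plane of the Gray-coded column index."""
    cols = np.arange(WIDTH)
    gray = cols ^ (cols >> 1)                     # binary -> Gray code
    planes = [((gray >> b) & 1).astype(np.uint8) * 255
              for b in range(BITS - 1, -1, -1)]   # MSB first
    return [np.tile(p, (HEIGHT, 1)) for p in planes]

def decode_column(captured):
    """Recover the projector column per camera pixel from captured frames."""
    gray = np.zeros(captured[0].shape, dtype=np.uint32)
    for img in captured:                          # reassemble the Gray code
        gray = (gray << 1) | (img > 127)
    binary = gray.copy()                          # Gray -> binary: prefix XOR
    shift = 1
    while shift < BITS:
        binary ^= binary >> shift
        shift <<= 1
    return binary

patterns = gray_code_patterns()
print(decode_column(patterns)[0, :5])   # ideal capture decodes to 0 1 2 3 4
```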
Affiliation(s)
- Yuan Gao
- Institute of Forming Technology & Equipment, Shanghai Jiao Tong University, Shanghai 200030, China;
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200240, China;
- Yuyun Zhao
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200240, China;
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Le Xie
- Institute of Forming Technology & Equipment, Shanghai Jiao Tong University, Shanghai 200030, China;
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200240, China;
- Correspondence: (L.X.); (G.Z.)
- Guoyan Zheng
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200240, China;
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Correspondence: (L.X.); (G.Z.)
38
Munawar A, Wu JY, Taylor RH, Kazanzides P, Fischer GS. A Framework for Customizable Multi-User Teleoperated Control. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3062604] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
39
Connor MJ, Dasgupta P, Ahmed HU, Raza A. Autonomous surgery in the era of robotic urology: friend or foe of the future surgeon? Nat Rev Urol 2020; 17:643-649. [DOI: 10.1038/s41585-020-0375-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/02/2020] [Indexed: 02/06/2023]
40
Abstract
Augmented reality (AR) is used to enhance perception of the real world by integrating virtual objects into an image sequence acquired from various camera technologies. Numerous AR applications in robotics have been developed in recent years. The aim of this paper is to provide an overview of AR research in robotics during the five-year period from 2015 to 2019. We classified these works by application area into four categories: (1) Medical robotics: robot-assisted surgery (RAS), prosthetics, rehabilitation, and training systems; (2) Motion planning and control: trajectory generation, robot programming, simulation, and manipulation; (3) Human-robot interaction (HRI): teleoperation, collaborative interfaces, wearable robots, haptic interfaces, brain-computer interfaces (BCIs), and gaming; (4) Multi-agent systems: use of visual feedback to remotely control drones, robot swarms, and robots with shared workspaces. Recent developments in AR technology are discussed, followed by the challenges AR faces in camera localization, environment mapping, and registration. We explore AR applications in terms of how AR was integrated and which improvements it introduced to the corresponding fields of robotics. In addition, we summarize the major limitations of the presented applications in each category. Finally, we conclude our review with future directions of AR research in robotics. The survey covers over 100 research works published over the last five years.