1. Ball M, Fuller P, Cha JS. Identification of surgical human-robot interactions and measures during robotic-assisted surgery: A scoping review. Applied Ergonomics 2025;125:104478. [PMID: 39983252] [DOI: 10.1016/j.apergo.2025.104478]
Abstract
This study aims to identify the dynamics of robotic-assisted surgery (RAS) teams and their metrics. A scoping review across seven science, engineering, and clinical databases was conducted. It was found that the literature focuses on skills and interactions centered on the surgeon and the technical components of the robotic system; however, limited literature exists on skill proceduralization specific to the other surgical team members performing RAS procedures. A framework was developed that identifies the individuals involved (i.e., surgeon, surgical team members, and robotic platform), their respective technical and nontechnical skill requirements, and the required interactions among the team and RAS systems. Future research in RAS human-robot interaction can address the need to understand the changing dynamics and skills required of the surgical team as surgical robot technology continues to evolve and be adopted.
Affiliation(s)
- Matthew Ball: Department of Industrial Engineering, Clemson University, 211 Fernow St., Clemson, SC 29634, USA
- Patrick Fuller: Department of Industrial Engineering, Clemson University, 211 Fernow St., Clemson, SC 29634, USA
- Jackie S Cha: Department of Industrial Engineering, Clemson University, 211 Fernow St., Clemson, SC 29634, USA
2. Knudsen JE, Ghaffar U, Ma R, Hung AJ. Clinical applications of artificial intelligence in robotic surgery. J Robot Surg 2024;18:102. [PMID: 38427094] [PMCID: PMC10907451] [DOI: 10.1007/s11701-024-01867-0]
Abstract
Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is the sector with some of the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to the field of robotic surgery, with a particular focus on intraoperative enhancement. AI modeling is giving surgeons access to advanced intraoperative metrics such as force and tactile measurements, enhancing the detection of positive surgical margins, and even allowing for the complete automation of certain steps in surgical procedures. AI is also revolutionizing the field of surgical education. AI modeling applied to intraoperative surgical video feeds and instrument kinematics data is allowing for the generation of automated skills assessments. AI also shows promise for the generation and delivery of highly specialized intraoperative surgical feedback for training surgeons. Although the adoption and integration of AI in robotic surgery show promise, they raise important, complex ethical questions. Frameworks for thinking through the ethical dilemmas raised by AI are outlined in this review. AI enhancement in robotic surgery is some of the most groundbreaking research happening today, and the studies outlined in this review represent some of the most exciting innovations of recent years.
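As context for the kinematics-based skill assessments mentioned above, the sketch below computes two common trajectory features (path length and mean squared jerk) of the kind such models typically consume. The function name, sampling interval, and feature choice are illustrative assumptions and are not taken from the review.

```python
import numpy as np

def kinematic_features(positions, dt=0.01):
    """Illustrative kinematic metrics of the sort fed to automated skill models:
    total path length and mean squared jerk of an instrument-tip trajectory.
    `positions` is an (N, 3) array sampled at interval dt (seconds)."""
    p = np.asarray(positions, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    vel = np.gradient(p, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    mean_sq_jerk = float(np.mean(np.sum(jerk**2, axis=1)))
    return {"path_length_m": float(path_length), "mean_squared_jerk": mean_sq_jerk}

# Example with a synthetic trajectory
t = np.linspace(0, 1, 101)
traj = np.stack([np.sin(t), np.cos(t), t], axis=1) * 0.05
print(kinematic_features(traj))
```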
Affiliation(s)
- J Everett Knudsen: Keck School of Medicine, University of Southern California, Los Angeles, USA
- Runzhuo Ma: Cedars-Sinai Medical Center, Los Angeles, USA
3. Gruijthuijsen C, Garcia-Peraza-Herrera LC, Borghesan G, Reynaerts D, Deprest J, Ourselin S, Vercauteren T, Vander Poorten E. Robotic Endoscope Control Via Autonomous Instrument Tracking. Front Robot AI 2022;9:832208. [PMID: 35480090] [PMCID: PMC9035496] [DOI: 10.3389/frobt.2022.832208]
Abstract
Many keyhole interventions rely on bi-manual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration in the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble instructions that would otherwise be issued to a human camera assistant, such as "focus on my right-hand instrument." As a proof of concept, this paper presents a novel system that paves the way towards a synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise validated by the European Academy of Gynaecological Surgery which involves bi-manual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
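The abstract above describes tooltip localization feeding a visual servoing loop. As a hedged, minimal sketch of that general idea (not the authors' controller), the snippet below turns a segmentation-derived tooltip pixel into a proportional pan/tilt command that recenters the endoscope; the gain, deadband, and command interface are assumptions.

```python
import numpy as np

def centering_velocity(tooltip_px, image_size, gain=0.5, deadband_px=20):
    """Proportional image-based servo: drive the endoscope so the tracked
    tooltip moves toward the image centre. Returns a normalized (pan, tilt)
    velocity command; zero inside a deadband to avoid jitter."""
    center = np.array(image_size, dtype=float) / 2.0
    error = np.asarray(tooltip_px, dtype=float) - center      # pixel error
    if np.linalg.norm(error) < deadband_px:
        return np.zeros(2)                                     # close enough: hold still
    return -gain * error / np.linalg.norm(center)              # normalized proportional command

# Example: tooltip detected at (900, 620) in a 1280x720 image
cmd = centering_velocity((900, 620), (1280, 720))
print(cmd)  # small pan/tilt command pointing the camera toward the tool
```

In a real system this command would still need to be mapped to the endoscope holder's joints while respecting the trocar constraint, which is where most of the engineering effort lies.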
Affiliation(s)
- Luis C. Garcia-Peraza-Herrera: Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom; Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Gianni Borghesan: Department of Mechanical Engineering, KU Leuven, Leuven, Belgium; Core Lab ROB, Flanders Make, Lommel, Belgium
- Jan Deprest: Department of Development and Regeneration, Division Woman and Child, KU Leuven, Leuven, Belgium
- Sebastien Ourselin: Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Tom Vercauteren: Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
4. A Natural Language Interface for an Autonomous Camera Control System on the da Vinci Surgical Robot. Robotics 2022;11(2):40. [DOI: 10.3390/robotics11020040]
Abstract
Positioning a camera during laparoscopic and robotic procedures is challenging and essential for successful operations. During surgery, if the camera view is not optimal, surgery becomes more complex and potentially error-prone. To address this need, we have developed a voice interface to an autonomous camera system that can trigger behavioral changes and act more as a partner to the surgeon. Like a human operator, the camera can take cues from the surgeon to help create optimized surgical camera views. It has the advantage of nominal behavior that is helpful in most general cases, and its natural language interface makes it dynamically customizable and available on demand. It permits control of the camera at a higher level of abstraction. This paper presents the implementation details and usability of a voice-activated autonomous camera system. A voice activation test on a limited set of practiced key phrases was performed using both online and offline voice recognition systems. The results show an average recognition accuracy greater than 94% for the online system and 86% for the offline system. However, the response time of the online system was greater than 1.5 s, whereas that of the local system was 0.6 s. This work is a step towards cooperative surgical robots that will effectively partner with human operators to enable more robust surgeries. A video link of the system in operation is provided in this paper.
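To make the phrase-to-command step concrete, here is a minimal sketch of mapping a recognized transcript onto camera behaviors. The phrase set, command names, and matching strategy are illustrative assumptions rather than the paper's actual grammar or recognizer.

```python
# Minimal sketch of mapping recognized phrases to camera-control commands.
# The phrase set and command names are illustrative, not the paper's grammar.
COMMANDS = {
    "follow my right tool": ("track", {"target": "right_instrument"}),
    "follow my left tool":  ("track", {"target": "left_instrument"}),
    "keep both tools in view": ("track", {"target": "midpoint"}),
    "zoom in":  ("zoom", {"direction": +1}),
    "zoom out": ("zoom", {"direction": -1}),
    "stop camera": ("hold", {}),
}

def parse_command(transcript: str):
    """Return the (action, params) whose key phrase appears in the transcript,
    or None if nothing matches (safer than guessing in the operating room)."""
    text = transcript.lower().strip()
    for phrase, command in COMMANDS.items():
        if phrase in text:
            return command
    return None

print(parse_command("please zoom in a little"))   # ('zoom', {'direction': 1})
print(parse_command("pass the scissors"))         # None -> camera keeps nominal behavior
```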
5. Da Col T, Caccianiga G, Catellani M, Mariani A, Ferro M, Cordima G, De Momi E, Ferrigno G, de Cobelli O. Automating Endoscope Motion in Robotic Surgery: A Usability Study on da Vinci-Assisted Ex Vivo Neobladder Reconstruction. Front Robot AI 2021;8:707704. [PMID: 34901168] [PMCID: PMC8656430] [DOI: 10.3389/frobt.2021.707704]
Abstract
Robots for minimally invasive surgery introduce many advantages but still require the surgeon to alternately control the surgical instruments and the endoscope. This work aims at providing autonomous navigation of the endoscope during a surgical procedure. The autonomous endoscope motion was based on kinematic tracking of the surgical instruments and integrated with the da Vinci Research Kit. A preclinical usability study was conducted with 10 urologists. They carried out an ex vivo orthotopic neobladder reconstruction twice, using both traditional and autonomous endoscope control. The usability of the system was tested by asking participants to complete standard System Usability Scale questionnaires. Moreover, the effectiveness of the method was assessed by analyzing the total procedure time and the time spent with the instruments out of the field of view. The average system usability score exceeded the threshold usually taken to indicate good usability (average score = 73.25 > 68). The average total procedure time with autonomous endoscope navigation was comparable with that of classic control (p = 0.85 > 0.05), yet autonomous navigation significantly reduced the time spent out of the field of view (p = 0.022 < 0.05). Based on our findings, the autonomous endoscope improves the usability of the surgical system, and it has the potential to be an additional and customizable tool for the surgeon, who can always take control of the endoscope or leave it to move autonomously.
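The autonomous navigation described above is driven by kinematic tracking of the instruments. A toy sketch of that idea, under the assumption that instrument-tip positions are available in the robot base frame, is shown below; the midpoint targeting and zoom heuristic are illustrative, not the study's implementation.

```python
import numpy as np

def endoscope_target(left_tip, right_tip, min_fov_margin=0.02):
    """Toy kinematics-based targeting: aim the endoscope at the midpoint of the
    two instrument tips (positions in the robot base frame, metres) and widen
    the view when the tips move far apart. Purely illustrative."""
    left, right = np.asarray(left_tip, float), np.asarray(right_tip, float)
    target = (left + right) / 2.0                    # point the camera here
    spread = np.linalg.norm(left - right)            # how far apart the tools are
    zoom_out = spread > 4 * min_fov_margin           # crude cue to retract the scope
    return target, zoom_out

target, zoom_out = endoscope_target([0.10, -0.02, 0.15], [0.14, 0.03, 0.16])
print(target, zoom_out)
```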
Affiliation(s)
- Tommaso Da Col: Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Caccianiga: Haptic Intelligence Department, Max-Planck-Institute for Intelligent Systems, Stuttgart, Germany
- Michele Catellani: Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Andrea Mariani: Excellence in Robotics and AI Department, Sant’Anna School of Advanced Studies, Pisa, Italy
- Matteo Ferro: Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Giovanni Cordima: Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Elena De Momi: Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Giancarlo Ferrigno: Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Ottavio de Cobelli: Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
6. Huber M, Mitchell JB, Henry R, Ourselin S, Vercauteren T, Bergeles C. Homography-based Visual Servoing with Remote Center of Motion for Semi-autonomous Robotic Endoscope Manipulation. International Symposium on Medical Robotics (ISMR) 2021:1-7. [PMID: 39351396] [PMCID: PMC7616652] [DOI: 10.1109/ismr48346.2021.9661563]
Abstract
The dominant visual servoing approaches in Minimally Invasive Surgery (MIS) follow single points or adapt the endoscope's field of view based on the surgical tools' distance. These methods rely on point positions with respect to the camera frame to infer a control policy. Deviating from the dominant methods, we formulate a robotic controller that allows for image-based visual servoing requiring neither explicit tool and camera positions nor any explicit image depth information. The proposed method relies on homography-based image registration, which shifts the automation paradigm from a point-centric towards a surgical-scene-centric approach. It simultaneously respects a programmable Remote Center of Motion (RCM). Our approach allows a surgeon to build a graph of desired views from which, once built, views can be manually selected and automatically servoed to, irrespective of changes in the robot-patient frame transformation. We evaluate our method on an abdominal phantom and provide an open-source ROS MoveIt integration for use with any serial manipulator. A video is provided.
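As a rough illustration of the homography-based registration that underpins this kind of approach (not the authors' pipeline), the sketch below estimates the homography between the current endoscope frame and a stored desired view using ORB features and RANSAC; a servo loop could then drive this homography toward identity. The function name and thresholds are assumptions.

```python
import cv2
import numpy as np

def homography_to_view(current_gray, desired_gray, min_matches=12):
    """Estimate the homography mapping the current endoscope frame onto a stored
    desired view; a servo loop could drive this toward identity. Illustrative only."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(current_gray, None)
    kp2, des2 = orb.detectAndCompute(desired_gray, None)
    if des1 is None or des2 is None:
        return None                                   # no features found
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None                                   # not enough correspondences
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # close to identity when the current view matches the desired one
```

In practice the controller must also map the image-space error to joint velocities while enforcing the programmable RCM constraint described in the abstract.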
Affiliation(s)
- Martin Huber: School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- John Bason Mitchell: School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom; Department of Medical Physics and Biomedical Engineering, Faculty of Engineering Sciences, University College London, London, United Kingdom
- Ross Henry: School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Sébastien Ourselin: School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Tom Vercauteren: School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Christos Bergeles: School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
7. Aguiar Noury G, Walmsley A, Jones RB, Gaudl SE. The Barriers of the Assistive Robotics Market-What Inhibits Health Innovation? Sensors 2021;21(9):3111. [PMID: 33947063] [PMCID: PMC8125645] [DOI: 10.3390/s21093111]
Abstract
Demographic changes are putting the healthcare industry under pressure. However, while other industries have been able to automate their operations through robotic and autonomous systems, the healthcare sector is still reluctant to change. What makes robotic innovation in healthcare so difficult? Despite offering more efficient and consumer-friendly care, the assistive robotics market has lacked penetration. To answer this question, we have broken down the development process, taking a market transformation perspective. By interviewing assistive robotics companies at different business stages from France and the UK, this paper identifies new insights into the main barriers that are inhibiting the assistive robotics sector. Their impact is analysed across the different stages of development, exploring how these barriers affect the planning, conceptualisation and adoption of these solutions. This research presents a foundation for understanding the innovation barriers that high-tech ventures face in the healthcare industry, and the need for public policy measures to support these technology-based firms.
Affiliation(s)
- Gabriel Aguiar Noury (corresponding author): School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth PL4 8AA, UK
- Andreas Walmsley: International Centre for Transformational Entrepreneurship, Coventry University, Coventry CV1 5FB, UK
- Ray B. Jones: School of Nursing and Midwifery, University of Plymouth, Plymouth PL4 8AA, UK
- Swen E. Gaudl: School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth PL4 8AA, UK
8.
Abstract
The advent of telerobotic systems has revolutionized various aspects of industry and human life. This technology is designed to augment human sensorimotor capabilities and extend them beyond natural competence. Classic examples are space and underwater applications, where distance and access are the two major physical barriers to be overcome with this technology. In modern examples, telerobotic systems have been used in several clinical applications, including teleoperated surgery and telerehabilitation. In this regard, there has been a significant amount of research and development owing to the major benefits in terms of medical outcomes. Recently, telerobotic systems have been combined with advanced artificial intelligence modules to better share agency with the operator and open new doors for medical automation. In this review paper, we provide a comprehensive analysis of the literature on various topologies of telerobotic systems in the medical domain while shedding light on the different levels of autonomy for this technology, starting from direct control and going up to command-tracking autonomous telerobots. Existing challenges, including instrumentation, transparency, autonomy, stochastic communication delays, and stability, are discussed, together with current research directions related to benefits in telemedicine and medical automation and the future vision for this technology.
9. Farajiparvar P, Ying H, Pandya A. A Brief Survey of Telerobotic Time Delay Mitigation. Front Robot AI 2020;7:578805. [PMID: 33501338] [PMCID: PMC7805850] [DOI: 10.3389/frobt.2020.578805]
Abstract
There is a substantial number of telerobotics and teleoperation applications, ranging from space operations, ground/aerial robotics, and drive-by-wire systems to medical interventions. Major obstacles for such applications include latency, channel corruption, and limited bandwidth, which reduce teleoperation efficacy. This survey reviews the time delay problem in teleoperation systems. We briefly review early solutions, which consist of control-theory-based models and user interface designs, and focus on newer approaches developed since 2014. Future solutions to the time delay problem will likely be hybrid solutions that include modeling of user intent, prediction of robot movements, and time delay prediction, all potentially using time series prediction methods. Hence, we examine methods that are primarily based on time series prediction. Recent prediction approaches take advantage of advances in nonlinear statistical models as well as machine learning and neural network techniques. We review Recurrent Neural Network, Long Short-Term Memory, Sequence-to-Sequence, and Generative Adversarial Network models and examine each of these approaches for addressing time delay. As time delay is still an unsolved problem, we suggest possible future research directions based on information-theoretic modeling, which may lead to promising new approaches to advancing the field.
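Since the survey highlights recurrent models for motion prediction as a delay-mitigation strategy, the sketch below shows a minimal LSTM predictor of the operator's next hand position from a short history. The architecture, dimensions, and horizon are illustrative assumptions, and training is omitted.

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Toy LSTM that predicts the operator's next hand position from a short
    history, so the remote side can act ahead of the round-trip delay.
    Dimensions and horizon are illustrative, not taken from the survey."""
    def __init__(self, dim=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=dim, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, history):          # history: (batch, timesteps, dim)
        out, _ = self.lstm(history)
        return self.head(out[:, -1, :])  # predicted next position: (batch, dim)

model = MotionPredictor()
history = torch.randn(1, 50, 3)          # 50 past samples of a 3-D trajectory
predicted_next = model(history)
print(predicted_next.shape)              # torch.Size([1, 3])
```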