1
Knudsen JE, Ma R, Hung AJ. Simulation training in urology. Curr Opin Urol 2024; 34:37-42. PMID: 37909886; PMCID: PMC10842538; DOI: 10.1097/mou.0000000000001141.
Abstract
PURPOSE OF REVIEW This review outlines recent innovations in simulation technology as it applies to urology. It is essential for the next generation of urologists to attain a solid foundation of technical and nontechnical skills, and simulation technology provides a variety of safe, controlled environments in which to acquire this baseline knowledge. RECENT FINDINGS With a focus on urology, this review first outlines the evidence supporting surgical simulation, then discusses the strides being made in the development of 3D-printed models for surgical skill training and preoperative planning, virtual reality models for different urologic procedures, surgical skill assessment for simulation, and the integration of simulation into urology residency curricula. SUMMARY Simulation remains an integral part of the journey towards mastery of the skills necessary to become an expert urologist. Clinicians and researchers should consider how to further incorporate simulation technology into residency training to help future generations of urologists throughout their careers.
Affiliation(s)
- Runzhuo Ma
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Andrew J Hung
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California, USA
2
Wise PA, Studier-Fischer A, Nickel F, Hackert T. [Status Quo of Surgical Navigation]. Zentralbl Chir 2023. PMID: 38056501; DOI: 10.1055/a-2211-4898.
Abstract
Surgical navigation, also referred to as computer-assisted or image-guided surgery, employs a variety of methods, such as 3D imaging, tracking systems, specialised software, and robotics, to support surgeons during surgical interventions. These emerging technologies aim not only to enhance the accuracy and precision of surgical procedures but also to enable less invasive approaches, with the objective of reducing complications and improving operative outcomes for patients. By integrating emerging digital technologies, surgical navigation holds the promise of assisting complex procedures across various medical disciplines. In recent years, the field of surgical navigation has witnessed significant advances. Abdominal surgical navigation, particularly in endoscopic, laparoscopic, and robot-assisted surgery, is currently undergoing a phase of rapid evolution. Emphases include image-guided navigation, instrument tracking, and the potential integration of augmented and mixed reality (AR, MR). This article comprehensively reviews the latest developments in surgical navigation, spanning state-of-the-art intraoperative technologies such as hyperspectral and fluorescence imaging to the integration of preoperative radiological imaging within the intraoperative setting.
Affiliation(s)
- Philipp Anthony Wise
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Alexander Studier-Fischer
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Felix Nickel
- Klinik für Allgemein-, Viszeral- und Thoraxchirurgie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Thilo Hackert
- Klinik für Allgemein-, Viszeral- und Thoraxchirurgie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
3
Burton W, Myers C, Rutherford M, Rullkoetter P. Evaluation of single-stage vision models for pose estimation of surgical instruments. Int J Comput Assist Radiol Surg 2023; 18:2125-2142. PMID: 37120481; DOI: 10.1007/s11548-023-02890-6.
Abstract
PURPOSE Multiple applications in open surgical environments may benefit from the adoption of markerless computer vision, depending on associated speed and accuracy requirements. The current work evaluates vision models for 6-degree-of-freedom (6-DoF) pose estimation of surgical instruments in RGB scenes. Potential use cases are discussed based on observed performance. METHODS Convolutional neural networks (CNNs) were developed with simulated training data for 6-DoF pose estimation of a representative surgical instrument in RGB scenes. Trained models were evaluated with simulated and real-world scenes. Real-world scenes were produced by using a robotic manipulator to procedurally generate a wide range of object poses. RESULTS CNNs trained in simulation transferred to real-world evaluation scenes with a mild decrease in pose accuracy. Model performance was sensitive to input image resolution and orientation prediction format. The most accurate model demonstrated a mean in-plane translation error of 13 mm and a mean long-axis orientation error of 5° in simulated evaluation scenes. Similar errors of 29 mm and 8° were observed in real-world scenes. CONCLUSION 6-DoF pose estimators can predict object pose in RGB scenes at real-time inference speed. The observed pose accuracy suggests that applications such as coarse-grained guidance, surgical skill evaluation, or instrument tracking for tray optimization may benefit from markerless pose estimation.
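As an illustration of the technique evaluated here, and not the authors' actual architecture, the following PyTorch sketch shows a single-stage 6-DoF estimator embodying the kind of orientation-format choice the abstract flags as performance-sensitive: a backbone regresses a 3D translation plus a continuous 6D rotation representation that is orthonormalized into a rotation matrix. All layer sizes and names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class PoseNet6DoF(nn.Module):
    """Single-stage 6-DoF pose regressor (illustrative sketch)."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()           # expose the 512-d features
        self.backbone = backbone
        self.trans_head = nn.Linear(512, 3)   # x, y, z translation
        self.rot_head = nn.Linear(512, 6)     # continuous 6D rotation params

    @staticmethod
    def six_d_to_matrix(r6):
        """Gram-Schmidt: two predicted 3-vectors -> rotation matrix."""
        a1, a2 = r6[:, :3], r6[:, 3:]
        b1 = F.normalize(a1, dim=1)
        b2 = F.normalize(a2 - (b1 * a2).sum(1, keepdim=True) * b1, dim=1)
        b3 = torch.cross(b1, b2, dim=1)
        return torch.stack((b1, b2, b3), dim=2)   # (B, 3, 3)

    def forward(self, rgb):                   # rgb: (B, 3, H, W)
        feats = self.backbone(rgb)
        return self.trans_head(feats), self.six_d_to_matrix(self.rot_head(feats))
```

Training such a regressor on simulated renders and evaluating it on robot-generated real scenes would mirror the sim-to-real protocol the abstract describes.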
Affiliation(s)
- William Burton
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA
- Casey Myers
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA
- Matthew Rutherford
- Unmanned Systems Research Institute, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA
- Paul Rullkoetter
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA
4
Felinska EA, Fuchs TE, Kogkas A, Chen ZW, Otto B, Kowalewski KF, Petersen J, Müller-Stich BP, Mylonas G, Nickel F. Telestration with augmented reality improves surgical performance through gaze guidance. Surg Endosc 2023; 37:3557-3566. PMID: 36609924; PMCID: PMC10156835; DOI: 10.1007/s00464-022-09859-7.
Abstract
BACKGROUND In minimally invasive surgery (MIS), trainees need to learn how to interpret the operative field displayed on the laparoscopic screen. Experts currently guide trainees mainly verbally during laparoscopic procedures. A newly developed telestration system with augmented reality (iSurgeon) allows the instructor to display hand gestures in real time on the laparoscopic screen to provide visual expert guidance (telestration). This study analysed the effect of telestration-guided instructions on gaze behaviour during MIS training. METHODS In a randomized controlled crossover study, 40 MIS-naive medical students performed 8 laparoscopic tasks with telestration or with verbal instructions only. Pupil Core eye-tracking glasses were used to capture the instructor's and trainees' gazes. Gaze behaviour measures for tasks 1-7 were gaze latency, gaze convergence and collaborative gaze convergence. Performance measures included the number of errors in tasks 1-7 and trainees' ratings on structured and standardized performance scores in task 8 (ex vivo porcine laparoscopic cholecystectomy). RESULTS In tasks 1-7, there was a significant improvement in gaze latency [F(1,39) = 762.5, p < 0.01, ηp² = 0.95], gaze convergence [F(1,39) = 482.8, p < 0.01, ηp² = 0.93] and collaborative gaze convergence [F(1,39) = 408.4, p < 0.01, ηp² = 0.91] upon instruction with iSurgeon. The number of errors was significantly lower in tasks 1-7 (0.18 ± 0.56 vs. 1.94 ± 1.80, p < 0.01) and the score ratings for laparoscopic cholecystectomy were significantly higher with telestration (global OSATS: 29 ± 2.5 vs. 25 ± 5.5, p < 0.01; task-specific OSATS: 60 ± 3 vs. 50 ± 6, p < 0.01). CONCLUSIONS Telestration with augmented reality successfully improved surgical performance. Trainees' gaze behaviour improved: the time from instruction to fixation on targets fell, and the instructor's and trainees' gazes converged more closely. The convergence of trainees' gaze with target areas also increased with telestration. This confirms that augmented reality-based telestration works by means of gaze guidance in MIS and could be used to improve training outcomes.
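The abstract does not define its gaze measures computationally; under assumed definitions (fixations as (timestamp, x, y) rows on the shared screen, targets as bounding boxes, two gaze traces sampled on a common clock; all names hypothetical), a minimal sketch of latency and convergence might look like this:

```python
import numpy as np

def gaze_latency(instruction_time, fixations, target_box):
    """Seconds from an instruction to the trainee's first fixation
    inside the target bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = target_box
    for t, x, y in fixations:                 # rows: (timestamp, x, y)
        if t >= instruction_time and x0 <= x <= x1 and y0 <= y <= y1:
            return t - instruction_time
    return np.nan                             # target never fixated

def gaze_convergence(gaze_a, gaze_b):
    """Mean Euclidean distance between two synchronized (N, 2) gaze
    traces on the screen; lower values mean more convergent gaze."""
    return float(np.linalg.norm(gaze_a - gaze_b, axis=1).mean())
```

Collaborative gaze convergence would then be the same distance computed between the instructor's and the trainee's traces rather than between a trainee's gaze and the target.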
Affiliation(s)
- Eleni Amelia Felinska
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Thomas Ewald Fuchs
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Alexandros Kogkas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- Zi-Wei Chen
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Benjamin Otto
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Karl-Friedrich Kowalewski
- Department of Urology and Urological Surgery, University Medical Center Mannheim, Heidelberg University, 68167, Mannheim, Germany
- Jens Petersen
- Department of Medical Image Computing, German Cancer Research Center, 69120, Heidelberg, Germany
- Beat Peter Müller-Stich
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- George Mylonas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- Felix Nickel
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
5
Zheng P, Wieber PB, Baber J, Aycard O. Human Arm Motion Prediction for Collision Avoidance in a Shared Workspace. Sensors (Basel) 2022; 22:6951. PMID: 36146296; PMCID: PMC9502074; DOI: 10.3390/s22186951.
Abstract
Industry 4.0 transforms classical industrial systems into more human-centric and digitized systems. Close human-robot collaboration is becoming more frequent, which means safety and efficiency issues need to be carefully considered. In this paper, we propose to equip robots with exteroceptive sensors and online motion generation so that the robot is able to perceive and predict human trajectories and react to human motion in order to reduce the occurrence of collisions. The training dataset is generated in a real environment in which a human and a robot share their workspace. An encoder-decoder-based network is proposed to predict the human hand trajectories. A Model Predictive Control (MPC) framework is also proposed that is able to plan a collision-free trajectory in the shared workspace based on this human motion prediction. The proposed framework is validated in a real environment, ensuring collision-free collaboration between humans and robots in a shared workspace.
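The abstract does not detail the encoder-decoder network; one plausible minimal sketch (GRU encoder, autoregressive GRU-cell decoder, illustrative sizes, all hypothetical) that maps an observed hand trajectory to a predicted future one, whose output an MPC planner could then treat as a moving obstacle, is:

```python
import torch
import torch.nn as nn

class TrajSeq2Seq(nn.Module):
    """Encoder-decoder hand-trajectory predictor (illustrative sketch):
    maps (B, T_obs, 3) observed positions to (B, T_pred, 3) future ones."""

    def __init__(self, hidden=64, t_pred=20):
        super().__init__()
        self.t_pred = t_pred
        self.encoder = nn.GRU(3, hidden, batch_first=True)
        self.decoder = nn.GRUCell(3, hidden)
        self.out = nn.Linear(hidden, 3)

    def forward(self, obs):
        _, h = self.encoder(obs)          # h: (1, B, hidden)
        h, x = h[0], obs[:, -1]           # seed decoder with last observed point
        preds = []
        for _ in range(self.t_pred):      # autoregressive rollout
            h = self.decoder(x, h)
            x = self.out(h)
            preds.append(x)
        return torch.stack(preds, dim=1)
```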
Affiliation(s)
- Pu Zheng
- Laboratoire d’Informatique de Grenoble, University of Grenoble Alpes, 38000 Grenoble, France
- Junaid Baber
- Laboratoire d’Informatique de Grenoble, University of Grenoble Alpes, 38000 Grenoble, France
- Olivier Aycard
- Laboratoire d’Informatique de Grenoble, University of Grenoble Alpes, 38000 Grenoble, France