1
Haak F, Müller PC, Kollmar O, Billeter AT, Lavanchy JL, Wiencierz A, Müller-Stich BP, von Strauss Und Torney M. Digital standardization in liver surgery through a surgical workflow management system: A pilot randomized controlled trial. Langenbecks Arch Surg 2025; 410:96. PMID: 40069334. PMCID: PMC11897067. DOI: 10.1007/s00423-025-03634-7.
Abstract
INTRODUCTION Surgical process models (SPMs) are simplified representations of operations; their visualization by surgical workflow management systems (SWMSs) offers a way to enhance communication and workflow. METHODS A 1:1 randomized controlled trial was conducted. An SPM consisting of six surgical steps was defined to represent the surgical procedure. The primary outcome, termed "deviation", measured the difference between actual and planned surgery duration. Secondary outcomes included stress levels of the operating team and complications. Analyses employed Welch t-tests and linear regression models. RESULTS 18 procedures were performed with an SWMS and 18 without. The deviation showed no significant difference between the intervention and control groups. Stress levels (TLX scores) of the team remained largely unaffected. The duration of the operation steps defined by the SPM allows all liver procedures to be classified into three phases: the Start Phase (low IQR of operation time), the Main Phase (high IQR), and the End Phase (low IQR). CONCLUSION This study presents a novel SPM for open liver resections visualized by an SWMS. No significant reduction of deviations from the planned operation time was observed with system use, and stress levels of the operating team were not influenced by the SWMS.
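As a worked illustration of the primary analysis, the "deviation" outcome in the two arms can be compared with Welch's unequal-variance t-test. The sketch below uses invented deviation values (minutes, actual minus planned), not data from the trial:

```python
import math

def welch_t(a, b):
    """Welch's t-statistic and degrees of freedom for two samples
    with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical per-procedure deviations for each group (not study data)
swms = [12.0, -5.0, 30.0, 8.0, -2.0, 15.0]
control = [20.0, 10.0, 35.0, -8.0, 25.0, 18.0]
t, df = welch_t(swms, control)
```

In practice one would use `scipy.stats.ttest_ind(swms, control, equal_var=False)`, which implements the same statistic and also returns the p-value.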
Affiliation(s)
- Fabian Haak
- Clarunis, Department of Visceral Surgery, University Digestive Health Care Center, St. Clara Hospital and University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, Division of Hepatobiliary Surgery and Visceral Transplant Surgery, University Hospital Leipzig, Leipzig, Germany
- Philip C Müller
- Clarunis, Department of Visceral Surgery, University Digestive Health Care Center, St. Clara Hospital and University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Otto Kollmar
- Department of General, Visceral, Vascular and Thoracic Surgery, Kantonsspital Baselland, Liestal, Switzerland
- Adrian T Billeter
- Clarunis, Department of Visceral Surgery, University Digestive Health Care Center, St. Clara Hospital and University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Joël L Lavanchy
- Clarunis, Department of Visceral Surgery, University Digestive Health Care Center, St. Clara Hospital and University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Andrea Wiencierz
- Department of Clinical Research, University of Basel, University Hospital, Basel, Switzerland
- Beat Peter Müller-Stich
- Clarunis, Department of Visceral Surgery, University Digestive Health Care Center, St. Clara Hospital and University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Marco von Strauss Und Torney
- Clarunis, Department of Visceral Surgery, University Digestive Health Care Center, St. Clara Hospital and University Hospital Basel, Petersgraben 4, 4031 Basel, Switzerland
- Department of Clinical Research, University of Basel, University Hospital, Basel, Switzerland
- St. Clara Research Ltd, Basel, Switzerland
2
Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024; 38:4316-4328. PMID: 38872018. PMCID: PMC11289174. DOI: 10.1007/s00464-024-10958-w.
Abstract
BACKGROUND Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will be available to operate on patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, intraoperative recognition of the action workflow is a key challenge. METHODS A surgical process model was developed for intraoperative surgical activities, each comprising actor, instrument, action, and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using fivefold cross-validation. RESULTS In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. Trained and validated on our own dataset, the algorithm scored a mean average precision (mAP) of 25.7% and a top-K (K = 5) accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A recognition algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training on CholecT45 alone; it recognizes frequently occurring activities well, but not infrequent ones. Analysis of the annotated dataset allowed quantification of the potential of collaborative surgical robots to reduce the workload of surgical staff: if cobots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e. excluding camera guidance) could be performed by robots.
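The top-K accuracy reported above simply checks whether the true activity label appears among the K highest-scoring predictions. A minimal sketch with toy scores (class count and values are invented, not taken from the paper):

```python
def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k classes
    with the highest predicted scores."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# Toy scores over six hypothetical activity classes for three samples
scores = [
    [0.10, 0.50, 0.20, 0.05, 0.10, 0.05],
    [0.30, 0.10, 0.05, 0.10, 0.25, 0.20],
    [0.05, 0.10, 0.05, 0.02, 0.03, 0.75],
]
labels = [1, 2, 5]
acc = top_k_accuracy(scores, labels, k=2)
```

With k equal to the number of classes the metric is trivially 1.0, which is why it is only informative for small k relative to the label space.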
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
3
Yang J, Barragan JA, Farrow JM, Sundaram CP, Wachs JP, Yu D. An Adaptive Human-Robotic Interaction Architecture for Augmenting Surgery Performance Using Real-Time Workload Sensing-Demonstration of a Semi-autonomous Suction Tool. Hum Factors 2024; 66:1081-1102. PMID: 36367971. PMCID: PMC11558698. DOI: 10.1177/00187208221129940.
Abstract
OBJECTIVE This study developed and evaluated a mental workload-based adaptive automation (MWL-AA) that monitors surgeons' cognitive load and assists them during cognitively demanding tasks in robotic-assisted surgery (RAS). BACKGROUND The introduction of RAS can overwhelm operators. Precise, continuous assessment of human mental workload (MWL) states is needed to identify when interventions should be delivered to moderate operators' MWL. METHOD The MWL-AA presented in this study was a semi-autonomous suction tool. The first experiment recruited ten participants to perform surgical tasks under different MWL levels; their physiological responses were captured and used to develop a real-time multi-sensing model for MWL detection. The second experiment evaluated the effectiveness of the MWL-AA: nine novice surgical trainees performed the surgical task with and without the MWL-AA. Mixed-effects models were used to compare task performance and objectively and subjectively measured MWL. RESULTS The proposed system predicted high-MWL hemorrhage conditions with an accuracy of 77.9%. In the MWL-AA evaluation, the surgeons' gaze behaviors and brain activities suggested lower perceived MWL with the MWL-AA than without; this was further supported by lower self-reported MWL and better task performance with the MWL-AA. CONCLUSION An MWL-AA system can reduce surgeons' workload and improve performance in a high-stress hemorrhaging scenario. The findings highlight the potential of MWL-AA to enhance collaboration between autonomous systems and surgeons. Developing a robust and personalized MWL-AA is a first step toward additional use cases in future studies. APPLICATION The proposed framework can be expanded and applied to more complex environments to improve human-robot collaboration.
Affiliation(s)
- Jing Yang
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
- Jason Michael Farrow
- Department of Urology, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Chandru P Sundaram
- Department of Urology, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Juan P Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
- Denny Yu
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
4
Yu H. A Cogitation on the ChatGPT Craze from the Perspective of Psychological Algorithm Aversion and Appreciation. Psychol Res Behav Manag 2023; 16:3837-3844. PMID: 37724135. PMCID: PMC10505389. DOI: 10.2147/prbm.s430936.
Abstract
In recent times, ChatGPT has garnered significant interest from the public, sparking a range of reactions that encompass both aversion and appreciation. This paper delves into the paradoxical attitudes of individuals towards ChatGPT, highlighting the simultaneous existence of algorithmic aversion and appreciation. A comprehensive analysis is conducted from the vantage points of psychology and algorithmic decision-making, exploring the underlying causes of these conflicting attitudes from three dimensions: self-performance, task types, and individual factors. Subsequently, strategies to reconcile these opposing psychological stances are proposed, delineated into two categories: flexible coping and inflexible coping. In light of the ongoing advancements in artificial intelligence, this paper posits recommendations for the attitudes and actions that individuals ought to adopt in the face of artificial intelligence. Regardless of whether one exhibits algorithm aversion or appreciation, the paper underscores that coexisting with algorithms is an inescapable reality in the age of artificial intelligence, necessitating the preservation of human advantages.
Affiliation(s)
- Hao Yu
- Faculty of Education, Shaanxi Normal University, Xi’an, Shaanxi, People’s Republic of China
5
Ramesh S, Dall'Alba D, Gonzalez C, Yu T, Mascagni P, Mutter D, Marescaux J, Fiorini P, Padoy N. TRandAugment: temporal random augmentation strategy for surgical activity recognition from videos. Int J Comput Assist Radiol Surg 2023; 18:1665-1672. PMID: 36944845. PMCID: PMC10491694. DOI: 10.1007/s11548-023-02864-8.
Abstract
PURPOSE Automatic recognition of surgical activities from intraoperative surgical videos is crucial for developing intelligent support systems for computer-assisted interventions. Current state-of-the-art recognition methods are based on deep learning, where data augmentation has shown potential to improve the generalization of these methods. This has spurred work on automated and simplified augmentation strategies for image classification and object detection on datasets of still images. Extending such augmentation methods to videos is not straightforward, as the temporal dimension must be considered. Furthermore, surgical videos pose additional challenges, as they are composed of multiple, interconnected, long-duration activities. METHODS This work proposes a new simplified augmentation method, called TRandAugment, designed specifically for long surgical videos. It treats each video as an assembly of temporal segments and applies consistent but random transformations to each segment. The proposed augmentation method is used to train an end-to-end spatiotemporal model consisting of a CNN (ResNet50) followed by a TCN. RESULTS The effectiveness of the proposed method is demonstrated on two surgical video datasets, Bypass40 and CATARACTS, and two tasks, surgical phase and step recognition. TRandAugment adds a performance boost of 1-6% over previous state-of-the-art methods that use manually designed augmentations. CONCLUSION This work presents a simplified and automated augmentation method for long surgical videos. The proposed method has been validated on different datasets and tasks, indicating the importance of devising temporal augmentation methods for long surgical videos.
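The core idea as described, treating each video as an assembly of temporal segments and sampling one random transformation per segment that is applied consistently to every frame in it, can be sketched as follows. Numbers stand in for frames and brightness shifts stand in for transforms; all names and values are illustrative, not the authors' implementation:

```python
import random

def trand_augment_sketch(video, n_segments, transforms, rng):
    """Split a video (list of frames) into temporal segments and apply
    one randomly chosen transform consistently to all frames of each
    segment, so augmentation is random across segments but coherent
    within them."""
    seg_len = max(1, len(video) // n_segments)
    out = []
    for start in range(0, len(video), seg_len):
        t = rng.choice(transforms)  # sampled once per segment
        out.extend(t(f) for f in video[start:start + seg_len])
    return out

# Toy "frames" are numbers; toy transforms are brightness shifts
frames = list(range(8))
transforms = [lambda f: f + 10, lambda f: f - 10, lambda f: f]
rng = random.Random(0)
augmented = trand_augment_sketch(frames, n_segments=2, transforms=transforms, rng=rng)
```

The point of per-segment (rather than per-frame) sampling is that a temporal model should not see spurious frame-to-frame appearance jumps introduced by the augmentation itself.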
Affiliation(s)
- Sanat Ramesh
- Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France
- Diego Dall'Alba
- Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- Cristians Gonzalez
- University Hospital of Strasbourg, 67000 Strasbourg, France
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
- Tong Yu
- ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France
- Pietro Mascagni
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
- Didier Mutter
- University Hospital of Strasbourg, 67000 Strasbourg, France
- IRCAD, 67000 Strasbourg, France
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
- Paolo Fiorini
- Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
6
Heuermann K, Bieck R, Dietz A, Fischer M, Hofer M, Neumuth T, Pirlich M. [BIOPASS hybrid navigation for endoscopic sinus surgery - an assistance system]. Laryngorhinootologie 2023; 102:32-39. PMID: 36328186. DOI: 10.1055/a-1940-9723.
Abstract
Previous navigation systems can determine the position of the "tracked" surgical instrument in CT images during functional endoscopic sinus surgery (FESS), but provide no assistance directly in the surgeon's video-endoscopic image. Developing this direct assistance for intraoperative orientation and risk reduction was the goal of the BIOPASS project (Bild-, Ontologie- und prozessgestütztes Assistenzsystem), which pursues the development of a novel marker-less navigation system for FESS. BIOPASS is a hybrid system that integrates various sensor data and makes them available; the goal is to abandon tracking and provide navigation information exclusively in the video image. This paper describes the first development step: collecting and structuring the surgical phases (workflows) and the video-endoscopic landmarks, together with a first clinical evaluation of the model version. The results provide the basis and platform for the project's next step.
Affiliation(s)
- Katharina Heuermann
- Medizinisches Versorgungszentrum am Universitätsklinikum Leipzig gGmbH, Leipzig, Germany
- Richard Bieck
- Innovation Center Computer Assisted Surgery (ICCAS), Universität Leipzig, 04103 Leipzig, Germany
- Andreas Dietz
- Klinik für Hals-Nasen-Ohrenheilkunde, Universitätsklinikum Leipzig, Leipzig, Germany
- Miloš Fischer
- HNO-Heilkunde, HNO-Praxis am Johannisplatz, Johannisplatz 1, 04103 Leipzig, Germany
- Mathias Hofer
- Klinik für Hals-Nasen-Ohrenheilkunde, Universitätsklinikum Leipzig, Leipzig, Germany
- HNO-Praxis Lindenauer Markt, Leipzig, Germany
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Universität Leipzig, 04103 Leipzig, Germany
- Markus Pirlich
- Klinik für Hals-Nasen-Ohrenheilkunde, Universitätsklinikum Leipzig, Leipzig, Germany
7
Da Col T, Caccianiga G, Catellani M, Mariani A, Ferro M, Cordima G, De Momi E, Ferrigno G, de Cobelli O. Automating Endoscope Motion in Robotic Surgery: A Usability Study on da Vinci-Assisted Ex Vivo Neobladder Reconstruction. Front Robot AI 2021; 8:707704. PMID: 34901168. PMCID: PMC8656430. DOI: 10.3389/frobt.2021.707704.
Abstract
Robots for minimally invasive surgery introduce many advantages but still require the surgeon to alternately control the surgical instruments and the endoscope. This work aims at providing autonomous navigation of the endoscope during a surgical procedure. The autonomous endoscope motion was based on kinematic tracking of the surgical instruments and integrated with the da Vinci Research Kit. A preclinical usability study was conducted with 10 urologists. Each carried out an ex vivo orthotopic neobladder reconstruction twice, using both traditional and autonomous endoscope control. Usability was tested by asking participants to fill in standard system usability scales, and effectiveness was assessed by analyzing the total procedure time and the time spent with the instruments out of the field of view. The average system usability score exceeded the threshold usually taken to indicate good usability (average score = 73.25 > 68). The average total procedure time with autonomous endoscope navigation was comparable to that with classic control (p = 0.85), yet autonomous navigation significantly reduced the time out of the field of view (p = 0.022). Based on our findings, autonomous endoscope navigation improves the usability of the surgical system and has the potential to be an additional, customizable tool for the surgeon, who can always take control of the endoscope or leave it to move autonomously.
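For reference, the standard System Usability Scale score cited above (73.25 against the common 68 benchmark) is computed from ten 1-5 Likert items: odd-numbered items contribute (response - 1), even-numbered items (5 - response), and the sum is scaled to 0-100. A minimal sketch with a hypothetical participant's answers, not data from the study:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd items (positively worded) contribute (r - 1); even items
    (negatively worded) contribute (5 - r); the 0-40 sum is then
    scaled to 0-100 by multiplying by 2.5."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# One hypothetical participant (alternating positive/negative items)
score = sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])
```

Per-participant scores computed this way would then be averaged across the cohort to obtain a figure like the 73.25 reported.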
Affiliation(s)
- Tommaso Da Col
- Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Caccianiga
- Haptic Intelligence Department, Max-Planck-Institute for Intelligent Systems, Stuttgart, Germany
- Michele Catellani
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Andrea Mariani
- Excellence in Robotics and AI Department, Sant’Anna School of Advanced Studies, Pisa, Italy
- Matteo Ferro
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Giovanni Cordima
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Elena De Momi
- Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Giancarlo Ferrigno
- Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Ottavio de Cobelli
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
8
Ramesh S, Dall’Alba D, Gonzalez C, Yu T, Mascagni P, Mutter D, Marescaux J, Fiorini P, Padoy N. Multi-task temporal convolutional networks for joint recognition of surgical phases and steps in gastric bypass procedures. Int J Comput Assist Radiol Surg 2021; 16:1111-1119. PMID: 34013464. PMCID: PMC8260406. DOI: 10.1007/s11548-021-02388-z.
Abstract
PURPOSE Automatic segmentation and classification of surgical activity is crucial for providing advanced support in computer-assisted interventions and autonomous functionalities in robot-assisted surgeries. Prior works have focused on recognizing either coarse activities, such as phases, or fine-grained activities, such as gestures. This work aims at jointly recognizing two complementary levels of granularity directly from videos, namely phases and steps. METHODS We introduce two correlated surgical activities, phases and steps, for the laparoscopic gastric bypass procedure. We propose a multi-task multi-stage temporal convolutional network (MTMS-TCN), along with a multi-task convolutional neural network (CNN) training setup, to jointly predict the phases and steps and benefit from their complementarity to better evaluate the execution of the procedure. We evaluate the proposed method on a large video dataset consisting of 40 surgical procedures (Bypass40). RESULTS We present experimental results for several baseline models for both phase and step recognition on Bypass40. The proposed MTMS-TCN method outperforms single-task methods in both phase and step recognition by 1-2% in accuracy, precision, and recall. Furthermore, for step recognition, MTMS-TCN outperforms LSTM-based models by 3-6% on all metrics. CONCLUSION In this work, we present a multi-task multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on a gastric bypass dataset with multi-level annotations. The proposed method shows that the joint modeling of phases and steps is beneficial for improving the overall recognition of each type of activity.
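The multi-task setup described above can be thought of as optimizing the sum of one loss per granularity level, so that phase and step predictions share features and regularize each other. A minimal sketch of such a two-task objective for a single frame; the class counts and probabilities are invented for illustration, and the paper's MTMS-TCN is a full network rather than this toy computation:

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class under a softmax output."""
    return -math.log(probs[label])

def multitask_loss(phase_probs, step_probs, phase_label, step_label):
    """Sum of the per-task losses: the simplest form of a joint
    objective for phase and step recognition."""
    return (cross_entropy(phase_probs, phase_label)
            + cross_entropy(step_probs, step_label))

# Toy softmax outputs for one frame (hypothetical class counts)
phase_probs = [0.7, 0.2, 0.1]       # three phases
step_probs = [0.1, 0.6, 0.1, 0.2]   # four steps
loss = multitask_loss(phase_probs, step_probs, 0, 1)
```

In a real training loop the two heads would share a backbone, and the per-task losses might be weighted rather than summed with equal weight.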
Affiliation(s)
- Sanat Ramesh
- Altair Robotics Lab, Department of Computer Science, University of Verona, Verona, Italy
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Diego Dall’Alba
- Altair Robotics Lab, Department of Computer Science, University of Verona, Verona, Italy
- Tong Yu
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Didier Mutter
- University Hospital of Strasbourg, IHU Strasbourg, France
- IRCAD, Strasbourg, France
- Paolo Fiorini
- Altair Robotics Lab, Department of Computer Science, University of Verona, Verona, Italy
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
9
A learning robot for cognitive camera control in minimally invasive surgery. Surg Endosc 2021; 35:5365-5374. PMID: 33904989. PMCID: PMC8346448. DOI: 10.1007/s00464-021-08509-8.
Abstract
Background We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. The majority of surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance, but they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons. Methods The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. First, a VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon’s learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR. Results The duration of each operation decreased with the robot’s increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%. Conclusions The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon’s needs.
10
Review of 3D-printing technologies for wearable and implantable bio-integrated sensors. Essays Biochem 2021; 65:491-502. PMID: 33860794. DOI: 10.1042/ebc20200131.
Abstract
Thin-film microfabrication-based bio-integrated sensors are widely used for a broad range of applications that require continuous measurement of biophysical and biochemical signals from the human body. Typically, they are fabricated using standard photolithography and etching techniques. This traditional method is capable of producing a precise, thin, and flexible bio-integrated sensor system. However, it has several drawbacks: it can only fabricate sensors on a planar surface, it is highly complex and requires specialized high-end facilities and equipment, and it mostly allows only 2D features to be fabricated. Therefore, developing bio-integrated sensors via 3D-printing technology has attracted particular interest. 3D-printing technology offers the possibility to develop sensors on nonplanar substrates, which is beneficial for noninvasive bio-signal sensing, and to print directly on complex 3D nonplanar organ structures. Moreover, this technology introduces a highly flexible and precisely controlled printing process to realize patient-specific sensor systems for ultimately personalized medicine, with the potential for rapid prototyping and mass customization. This review summarizes the latest advancements in 3D-printed bio-integrated systems, including 3D-printing methods and the printing materials employed. Furthermore, two widely used 3D-printing techniques are discussed, namely ex-situ and in-situ fabrication, which can be utilized in different types of applications, including wearable and smart-implantable biosensor systems.
11
Optimization-Based Constrained Trajectory Generation for Robot-Assisted Stitching in Endonasal Surgery. ROBOTICS 2021. [DOI: 10.3390/robotics10010027] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
The reduced workspace in endonasal endoscopic surgery (EES) hinders the execution of complex surgical tasks such as suturing. Typically, surgeons need to manipulate non-dexterous long surgical instruments under an endoscopic view that makes it difficult to estimate the distances and angles required for precise suturing motion. Recently, robot-assisted surgical systems have been used in laparoscopic surgery with promising results. Although robotic systems can provide enhanced dexterity, robot-assisted suturing is still highly challenging. In this paper, we propose a robot-assisted stitching method based on online optimization-based trajectory generation for curved needle stitching and a constrained motion planning framework to ensure safe surgical instrument motion. The needle trajectory is generated online using a sequential convex optimization algorithm subject to stitching kinematic constraints. The constrained motion planner is designed to reduce damage to the surrounding nasal cavity by setting a remote center of motion over the nostril. A dual concurrent inverse kinematics (IK) solver is proposed to achieve convergence of the solution and optimal execution time, in which two constrained IK methods are performed simultaneously: a task-priority-based IK and a nonlinear optimization-based IK. We evaluate the performance of the proposed method in a stitching experiment with our surgical robotic system in a robot-assisted mode and an autonomous mode, in comparison to the use of a conventional surgical tool. Our results demonstrate a noticeable improvement in the stitching success ratio in the robot-assisted mode and the shortest completion time for the autonomous mode. In addition, the force interaction with the tissue was greatly reduced when using the robotic system.
12
Mariani A, Colaci G, Da Col T, Sanna N, Vendrame E, Menciassi A, De Momi E. An Experimental Comparison Towards Autonomous Camera Navigation to Optimize Training in Robot Assisted Surgery. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2965067] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
13
Eslamian S, Reisner LA, Pandya AK. Development and evaluation of an autonomous camera control algorithm on the da Vinci Surgical System. Int J Med Robot 2019; 16:e2036. [PMID: 31490615 DOI: 10.1002/rcs.2036] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 08/20/2019] [Accepted: 09/02/2019] [Indexed: 12/16/2022]
Abstract
BACKGROUND Manual control of the camera arm in telerobotic surgical systems requires the surgeon to repeatedly interrupt the flow of the surgery. During surgery, there are instances when one or even both tools can drift out of the field of view. These issues may lead to increased workload and potential errors. METHODS We performed a 20-participant subject study (including four surgeons) to compare different methods of camera control on a customized da Vinci Surgical System. We tested (a) an autonomous camera algorithm, (b) standard clutched control, and (c) an experienced camera operator using a joystick. RESULTS The automated algorithm surpassed the traditional method of clutched camera control in measures of user-perceived workload, efficiency, and progress. Additionally, it was consistently able to generate more centered and appropriately zoomed viewpoints than the other methods while keeping both tools safely inside the camera's field of view. CONCLUSIONS Clinical systems of the future should consider automating the camera control aspects of robotic surgery.
Affiliation(s)
- Shahab Eslamian
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan
- Luke A Reisner
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan
- Abhilash K Pandya
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan
14
Kawashima K, Kanno T, Tadano K. Robots in laparoscopic surgery: current and future status. BMC Biomed Eng 2019; 1:12. [PMID: 32903302 PMCID: PMC7422514 DOI: 10.1186/s42490-019-0012-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2019] [Accepted: 04/25/2019] [Indexed: 02/07/2023] Open
Abstract
In this paper, we focus on robots used for laparoscopic surgery, which is one of the most active areas for research and development of surgical robots. We introduce research and development of laparoscope-holder robots, master-slave robots, and hand-held robotic forceps. Then, we discuss future directions for surgical robots. For robot hardware, snake-like flexible mechanisms for single-port access surgery (SPA) and NOTES (natural orifice transluminal endoscopic surgery) and applications of soft robotics are being actively investigated. On the software side, research such as the automation of surgical procedures using machine learning is one of the hot topics.
15
Franke S, Rockstroh M, Hofer M, Neumuth T. The intelligent OR: design and validation of a context-aware surgical working environment. Int J Comput Assist Radiol Surg 2018; 13:1301-1308. [DOI: 10.1007/s11548-018-1791-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Accepted: 05/09/2018] [Indexed: 11/28/2022]
16
Dynamic Gesture Recognition Using a Smart Glove in Hand-Assisted Laparoscopic Surgery. TECHNOLOGIES 2018. [DOI: 10.3390/technologies6010008] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
17

18
Miehle J, Ostler D, Gerstenlauer N, Minker W. The next step: intelligent digital assistance for clinical operating rooms. Innov Surg Sci 2017; 2:159-161. [PMID: 31579748 PMCID: PMC6754018 DOI: 10.1515/iss-2017-0034] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2017] [Accepted: 08/03/2017] [Indexed: 01/19/2023] Open
Abstract
With the emergence of new technologies, the surgical working environment becomes increasingly complex and comprises many medical devices that have to be taken care of. However, the goal is to reduce the workload of the surgical team to allow them to fully focus on the actual surgical procedure. Therefore, new strategies are needed to keep the working environment manageable. Existing research projects in the field of intelligent medical environments mostly concentrate on workflow modeling or single smart features rather than building up a complete intelligent environment. In this article, we present the concept of intelligent digital assistance for clinical operating rooms (IDACO), providing the surgeon assistance in many different situations before and during an ongoing procedure using natural spoken language. The speech interface enables the surgeon to concentrate on the surgery and control the technical environment at the same time, without having to consider how to interact with the system. Furthermore, the system observes the context of the surgery and controls several devices autonomously at the appropriate time during the procedure.
Affiliation(s)
- Juliana Miehle
- Institute of Communications Engineering, Ulm University, Ulm, Germany
- Daniel Ostler
- Minimally-Invasive Interdisciplinary Therapeutical Intervention (MITI), Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Wolfgang Minker
- Institute of Communications Engineering, Ulm University, Ulm, Germany
19
Abstract
Due to the rapidly evolving medical, technological, and technical possibilities, surgical procedures are becoming more and more complex. On the one hand, this offers an increasing number of advantages for patients, such as enhanced patient safety, minimally invasive interventions, and less medical malpractice. On the other hand, it also heightens pressure on surgeons and other clinical staff and has brought about a new policy in hospitals, which must rely on a great number of economic, social, psychological, qualitative, practical, and technological resources. As a result, medical disciplines, such as surgery, are slowly merging with technical disciplines. However, this synergy is not yet fully matured. The current information and communication technology in hospitals cannot manage the clinical and operational sequence adequately. The consequences are breaches in the surgical workflow, extensions in procedure times, and media disruptions. Furthermore, the data accrued in operating rooms (ORs) by surgeons and systems are not sufficiently utilized. A flood of information, "big data", is available from information systems. It might be deployed in the context of Medicine 4.0 to facilitate surgical treatment; however, it goes unused due to infrastructure breaches or communication errors. Surgical process models (SPMs) alleviate these problems. They can be defined as simplified, formal, or semiformal representations of a network of surgery-related activities, reflecting a predefined subset of interest. They can employ different means of generation, languages, and data acquisition strategies. They can represent surgical interventions with high resolution, offering qualifiable and quantifiable information on the course of the intervention at the level of single, minute surgical work-steps.
The basic idea is to gather information concerning the surgical intervention and its activities, such as performance time, surgical instruments used, trajectories, movements, or intervention phases. These data can be gathered by means of workflow recordings. These recordings are abstracted to represent an individual surgical process as a model and are an essential requirement to enable Medicine 4.0 in the OR. Further abstraction can be achieved by merging individual process models to form generic SPMs, increasing the validity for a larger number of patients. Furthermore, these models can be applied in a wide variety of use-cases. In this regard, the term "modeling" can be used to support one or more of the following tasks: "to describe", "to understand", "to explain", "to optimize", "to learn", "to teach", or "to automate". Possible use-cases are requirements analyses, evaluating surgical assist systems, generating surgeon-specific training recommendations, creating workflow management systems for ORs, and comparing different surgical strategies. The presented chapter will give an introduction to this challenging topic, presenting different methods to generate SPMs from the workflow in the OR, as well as various use-cases and state-of-the-art research in this field. Although many examples in the article are given according to SPMs that were computed based on observations, the same approaches can easily be applied to SPMs that were measured automatically and mined from big data.
Affiliation(s)
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Universität Leipzig, Leipzig, Germany
20
Paschold M, Huber T, Maedge S, Zeissig SR, Lang H, Kneist W. Laparoscopic assistance by operating room nurses: Results of a virtual-reality study. NURSE EDUCATION TODAY 2017; 51:68-72. [PMID: 28131934 DOI: 10.1016/j.nedt.2017.01.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2016] [Revised: 01/03/2017] [Accepted: 01/16/2017] [Indexed: 05/23/2023]
Abstract
BACKGROUND Laparoscopic assistance is often entrusted to a less experienced resident, medical student, or operating room nurse. Data regarding laparoscopic training for operating room nurses are not available. OBJECTIVES The aim of the study was to analyse the initial performance level and learning curves of operating room nurses in basic laparoscopic surgery compared with medical students and surgical residents to determine their ability to assist with this type of procedure. DESIGN The study was designed to compare the initial virtual reality performance level and learning curves of user groups to analyse competence in laparoscopic assistance. PARTICIPANTS The study subjects were operating room nurses, medical students, and first year residents. METHODS Participants performed three validated tasks (camera navigation, peg transfer, fine dissection) on a virtual reality laparoscopic simulator three times in 3 consecutive days. Laparoscopic experts were enrolled as a control group. Participants filled out questionnaires before and after the course. RESULTS Nurses and students were comparable in their initial performance (p>0.05). Residents performed better in camera navigation than students and nurses and reached the expert level for this task. Residents, students, and nurses had comparable bimanual skills throughout the study, while experts performed significantly better in bimanual manoeuvres at all times (p<0.05). CONCLUSION The included user groups had comparable skills for bimanual tasks. Residents with limited experience reached the expert level in camera navigation. With training, nurses, students, and first year residents are equally capable of assisting in basic laparoscopic procedures.
Affiliation(s)
- M Paschold
- Department of General, Visceral and Transplant Surgery, University Medicine of the Johannes Gutenberg-University Mainz, Germany
- T Huber
- Department of General, Visceral and Transplant Surgery, University Medicine of the Johannes Gutenberg-University Mainz, Germany
- S Maedge
- Department of Operating Room Management, University Medicine of the Johannes Gutenberg-University Mainz, Germany
- S R Zeissig
- Institute of Medical Biometry, Epidemiology and Informatics (IMBEI), Johannes Gutenberg-University, Mainz, Germany
- H Lang
- Department of General, Visceral and Transplant Surgery, University Medicine of the Johannes Gutenberg-University Mainz, Germany
- W Kneist
- Department of General, Visceral and Transplant Surgery, University Medicine of the Johannes Gutenberg-University Mainz, Germany
21
Prendergast JM, Rentschler ME. Towards autonomous motion control in minimally invasive robotic surgery. Expert Rev Med Devices 2016; 13:741-8. [PMID: 27376789 DOI: 10.1080/17434440.2016.1205482] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
INTRODUCTION While autonomous surgical robotic systems exist primarily at the research level, recently these systems have made a strong push into clinical settings. The autonomous or semi-autonomous control of surgical robotic platforms may offer significant improvements to a diverse field of surgical procedures, allowing for high-precision, intelligent manipulation of these systems and opening the door to advanced minimally invasive surgical procedures not currently possible. AREAS COVERED This review highlights those experimental systems currently under development, with a focus on in vivo modeling and control strategies designed specifically for the complex and dynamic surgical environment. EXPERT COMMENTARY Novel methods for state estimation, system modeling, and disturbance rejection, as applied to these devices, continue to improve the performance of these important surgical tools. Procedures such as natural orifice transluminal endoscopic surgery and laparo-endoscopic single-site surgery, as well as more conventional procedures such as colonoscopy, stand to benefit tremendously from the development of these automated robotic systems, enabling surgeons to minimize tissue damage and shorten procedure times while avoiding the consequences of laparotomy.
Affiliation(s)
- J Micah Prendergast
- Department of Mechanical Engineering, University of Colorado, Boulder, CO, USA
- Mark E Rentschler
- Department of Mechanical Engineering, University of Colorado, Boulder, CO, USA
22
Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy. Int J Comput Assist Radiol Surg 2016; 11:881-8. [DOI: 10.1007/s11548-016-1379-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Accepted: 03/07/2016] [Indexed: 10/22/2022]
23
Ellis RD, Munaco AJ, Reisner LA, Klein MD, Composto AM, Pandya AK, King BW. Task analysis of laparoscopic camera control schemes. Int J Med Robot 2015; 12:576-584. [PMID: 26648563 DOI: 10.1002/rcs.1716] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2015] [Revised: 09/24/2015] [Accepted: 10/20/2015] [Indexed: 11/07/2022]
Abstract
BACKGROUND Minimally invasive surgeries rely on laparoscopic camera views to guide the procedure. Traditionally, an expert surgical assistant operates the camera. In some cases, a robotic system is used to help position the camera, but the surgeon is required to direct all movements of the system. Some prior research has focused on developing automated robotic camera control systems, but that work has been limited to rudimentary control schemes due to a lack of understanding of how the camera should be moved for different surgical tasks. METHODS This research used task analysis with a sample of eight expert surgeons to discover and document several salient methods of camera control and their related task contexts. RESULTS Desired camera placements and behaviours were established for two common surgical subtasks (suturing and knot tying). CONCLUSION The results can be used to develop better robotic control algorithms that will be more responsive to surgeons' needs.
Affiliation(s)
- R Darin Ellis
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA
- Anthony J Munaco
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- Luke A Reisner
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Michael D Klein
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- Anthony M Composto
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Abhilash K Pandya
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Brady W King
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
24
Toward cognitive pipelines of medical assistance algorithms. Int J Comput Assist Radiol Surg 2015; 11:1743-53. [PMID: 26646415 DOI: 10.1007/s11548-015-1322-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2015] [Accepted: 10/30/2015] [Indexed: 10/22/2022]
Abstract
PURPOSE Assistance algorithms for medical tasks have great potential to support physicians with their daily work. However, medicine is also one of the most demanding domains for computer-based support systems, since medical assistance tasks are complex and the practical experience of the physician is crucial. Recent developments in the area of cognitive computing appear to be well suited to tackle medicine as an application domain. METHODS We propose a system based on the idea of cognitive computing, consisting of auto-configurable medical assistance algorithms and their self-adapting combination. The system enables automatic execution of new algorithms, given that they are made available as Medical Cognitive Apps and are registered in a central semantic repository. Learning components can be added to the system to optimize the results in cases where numerous Medical Cognitive Apps are available for the same task. Our prototypical implementation is applied to the areas of surgical phase recognition based on sensor data and image processing for tumor progression mappings. RESULTS Our results suggest that such assistance algorithms can be automatically configured in execution pipelines, candidate results can be automatically scored and combined, and the system can learn from experience. Furthermore, our evaluation shows that the Medical Cognitive Apps provide the same correct results as they did for local execution and run in a reasonable amount of time. CONCLUSION The proposed solution is applicable to a variety of medical use cases and effectively supports the automated and self-adaptive configuration of cognitive pipelines based on medical interpretation algorithms.
25
Katić D, Julliard C, Wekerle AL, Kenngott H, Müller-Stich BP, Dillmann R, Speidel S, Jannin P, Gibaud B. LapOntoSPM: an ontology for laparoscopic surgeries and its application to surgical phase recognition. Int J Comput Assist Radiol Surg 2015; 10:1427-34. [PMID: 26062794 DOI: 10.1007/s11548-015-1222-1] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2014] [Accepted: 05/01/2015] [Indexed: 10/23/2022]
Abstract
PURPOSE The rise of intraoperative information threatens to outpace our abilities to process it. Context-aware systems, filtering information to automatically adapt to the current needs of the surgeon, are necessary to fully profit from computerized surgery. To attain context awareness, representation of medical knowledge is crucial. However, most existing systems do not represent knowledge in a reusable way, hindering also reuse of data. Our purpose is therefore to make our computational models of medical knowledge sharable, extensible and interoperational with established knowledge representations in the form of the LapOntoSPM ontology. To show its usefulness, we apply it to situation interpretation, i.e., the recognition of surgical phases based on surgical activities. METHODS Considering best practices in ontology engineering and building on our ontology for laparoscopy, we formalized the workflow of laparoscopic adrenalectomies, cholecystectomies and pancreatic resections in the framework of OntoSPM, a new standard for surgical process models. Furthermore, we provide a rule-based situation interpretation algorithm based on SQWRL to recognize surgical phases using the ontology. RESULTS The system was evaluated on ground-truth data from 19 manually annotated surgeries. The aim was to show that the phase recognition capabilities are equal to a specialized solution. The recognition rates of the new system were equal to the specialized one. However, the time needed to interpret a situation rose from 0.5 to 1.8 s on average which is still viable for practical application. CONCLUSION We successfully integrated medical knowledge for laparoscopic surgeries into OntoSPM, facilitating knowledge and data sharing. This is especially important for reproducibility of results and unbiased comparison of recognition algorithms. The associated recognition algorithm was adapted to the new representation without any loss of classification power. 
This work is an important step toward standardized knowledge and data representation in the field of context awareness and thus toward unified benchmark data sets.
Affiliation(s)
- Darko Katić
- Karlsruhe Institute of Technology (KIT), Adenauerring 2, 76131, Karlsruhe, Germany
26
Kenngott HG, Wagner M, Nickel F, Wekerle AL, Preukschas A, Apitz M, Schulte T, Rempel R, Mietkowski P, Wagner F, Termer A, Müller-Stich BP. Computer-assisted abdominal surgery: new technologies. Langenbecks Arch Surg 2015; 400:273-81. [PMID: 25701196 DOI: 10.1007/s00423-015-1289-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2015] [Accepted: 02/09/2015] [Indexed: 12/16/2022]
Abstract
BACKGROUND Computer-assisted surgery is a wide field of technologies with the potential to enable the surgeon to improve efficiency and efficacy of diagnosis, treatment, and clinical management. PURPOSE This review provides an overview of the most important new technologies and their applications. METHODS A MEDLINE database search was performed revealing a total of 1702 references. All references were considered for information on six main topics, namely image guidance and navigation, robot-assisted surgery, human-machine interface, surgical processes and clinical pathways, computer-assisted surgical training, and clinical decision support. Further references were obtained through cross-referencing the bibliography cited in each work. Based on their respective field of expertise, the authors chose 64 publications relevant for the purpose of this review. CONCLUSION Computer-assisted systems are increasingly used not only in experimental studies but also in clinical studies. Although computer-assisted abdominal surgery is still in its infancy, the number of studies is constantly increasing, and clinical studies start showing the benefits of computers used not only as tools of documentation and accounting but also for directly assisting surgeons during diagnosis and treatment of patients. Further developments in the field of clinical decision support even have the potential of causing a paradigm shift in how patients are diagnosed and treated.
Affiliation(s)
- H G Kenngott
- Department of General, Abdominal and Transplant Surgery, Ruprecht-Karls-University, Heidelberg, Germany
27

28
Reliability of sensor-based real-time workflow recognition in laparoscopic cholecystectomy. Int J Comput Assist Radiol Surg 2014; 9:941-8. [DOI: 10.1007/s11548-014-0986-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2013] [Accepted: 02/06/2014] [Indexed: 10/25/2022]
29
Feussner H, Reiser SB, Bauer M, Kranzfelder M, Schirren R, Kleeff J, Wilhelm D. [Further technical and digital development in minimally invasive and conventional surgery]. Chirurg 2014; 85:178, 180-5. [PMID: 24522491 DOI: 10.1007/s00104-013-2596-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Technological innovations have initiated a fundamental change in invasive therapeutic approaches, which has led to a welcome reduction of surgical trauma but was also associated with a declining role of conventional surgery. Active utilization of future technological developments is decisive to promote new therapeutic strategies and to avoid a further loss of importance of surgery. This includes individualized preoperative therapy planning as well as intraoperative diagnostic work-up and navigation and the use of new functional intelligent implants. The working environment "surgical operating room" has to be refurbished into an integrated, cooperating functional system. The impact of new technological developments is particularly obvious in minimally invasive surgery. There is a clear tendency towards further reduction of trauma at the surgical access: incisions will become smaller and the number of ports will be further reduced, with the aim of ultimately using just one port (monoport surgery) or even natural access routes (scarless surgery). Among other advances, improved visualization (including, e.g., autostereoscopy), digital image processing, and intelligent support systems that are able to assist in a cooperative way will enable these goals to be achieved.
Affiliation(s)
- H Feussner
- Klinikum rechts der Isar, Chirurgische Klinik und Poliklinik, Technische Universität München, Ismaninger Str. 22, 81675, Munich, Germany
30
Katić D, Wekerle AL, Gärtner F, Kenngott H, Müller-Stich BP, Dillmann R, Speidel S. Knowledge-Driven Formalization of Laparoscopic Surgeries for Rule-Based Intraoperative Context-Aware Assistance. INFORMATION PROCESSING IN COMPUTER-ASSISTED INTERVENTIONS 2014. [DOI: 10.1007/978-3-319-07521-1_17] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]