1. Pathiraja Rathnayaka Hitige N, Song T, Craig SJ, Davis KJ, Hao X, Cui L, Yu P. An Ontology-Based Approach for Understanding Appendicectomy Processes and Associated Resources. Healthcare (Basel) 2024;13:10. PMID: 39791617; PMCID: PMC11720549; DOI: 10.3390/healthcare13010010.
Abstract
BACKGROUND: Traditional methods for analysing surgical processes often fall short in capturing the intricate interconnectedness between clinical procedures, their execution sequences, and associated resources such as hospital infrastructure, staff, and protocols. AIM: This study addresses this gap by developing an ontology for appendicectomy, a computational model that comprehensively represents appendicectomy processes and their resource dependencies to support informed decision making and optimise appendicectomy healthcare delivery. METHODS: The ontology was developed using the NeOn methodology, drawing knowledge from existing ontologies, scholarly literature, and de-identified patient data from local hospitals. RESULTS: The resulting ontology comprises 108 classes, including 11 top-level classes and 96 subclasses organised across five hierarchical levels. The 11 top-level classes are "clinical procedure", "appendicectomy-related organisational protocols", "disease", "start time", "end time", "duration", "appendicectomy outcomes", "hospital infrastructure", "hospital staff", "patient", and "patient demographics". Additionally, the ontology includes 77 object and data properties to define relationships and attributes. The ontology offers a semantic, computable framework for encoding appendicectomy-specific clinical procedures and their associated resources. CONCLUSION: By systematically representing this knowledge, this study establishes a foundation for enhancing clinical decision making, improving data integration, and ultimately advancing patient care. Future research can leverage this ontology to optimise healthcare workflows and outcomes in appendicectomy management.
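A class-and-property inventory like the one above maps directly onto OWL. A minimal sketch using the owlready2 library; every identifier below is an illustrative stand-in, not a name taken from the published ontology:

```python
from owlready2 import get_ontology, Thing, ObjectProperty, DataProperty

onto = get_ontology("http://example.org/appendicectomy.owl")

with onto:
    class ClinicalProcedure(Thing): pass            # stand-in for a top-level class
    class HospitalStaff(Thing): pass
    class Appendicectomy(ClinicalProcedure): pass   # a subclass, as in the 5-level hierarchy

    class hasPerformer(ObjectProperty):             # stand-in for an object property
        domain = [ClinicalProcedure]
        range = [HospitalStaff]

    class hasDurationMinutes(DataProperty):         # stand-in for a data property
        domain = [ClinicalProcedure]
        range = [int]

onto.save(file="appendicectomy.owl", format="rdfxml")
```

Encoding the model this way is what makes it "computable": a reasoner or SPARQL query can then answer resource-dependency questions over instances.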
Affiliation(s)
- Nadeesha Pathiraja Rathnayaka Hitige: Department of Information and Communication Technology, Faculty of Technology, Rajarata University of Sri Lanka, Mihintale 50300, Sri Lanka; Centre for Digital Transformation, School of Computing and Information Technology, University of Wollongong, Wollongong, NSW 2522, Australia
- Ting Song: Centre for Digital Transformation, School of Computing and Information Technology, University of Wollongong, Wollongong, NSW 2522, Australia
- Steven J. Craig: Department of Surgery, Shoalhaven District Memorial Hospital, Nowra, NSW 2541, Australia; Graduate School of Medicine, Faculty of Science Medicine and Health, University of Wollongong, Wollongong, NSW 2522, Australia
- Kimberley J. Davis: Graduate School of Medicine, Faculty of Science Medicine and Health, University of Wollongong, Wollongong, NSW 2522, Australia; Research Operations, Illawarra Shoalhaven Local Health District, Warrawong, NSW 2502, Australia
- Xubing Hao: McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Licong Cui: McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Ping Yu: Centre for Digital Transformation, School of Computing and Information Technology, University of Wollongong, Wollongong, NSW 2522, Australia
2. Junger D, Just E, Brandenburg JM, Wagner M, Schaumann K, Klenzner T, Burgert O. Toward an interoperable, intraoperative situation recognition system via process modeling, execution, and control using the standards BPMN and CMMN. Int J Comput Assist Radiol Surg 2024;19:69-82. PMID: 37620748; PMCID: PMC10770268; DOI: 10.1007/s11548-023-03004-y.
Abstract
PURPOSE: For the modeling, execution, and control of complex, non-standardized intraoperative processes, a modeling language is needed that reflects the variability of interventions. As the established Business Process Model and Notation (BPMN) reaches its limits in terms of flexibility, the Case Management Model and Notation (CMMN) was considered, as it addresses weakly structured processes. METHODS: To analyze the suitability of the two modeling languages, BPMN and CMMN models of a robot-assisted minimally invasive esophagectomy and a cochlear implantation were derived and integrated into a situation recognition workflow. Test cases were used to contrast the differences and compare the advantages and disadvantages of the models concerning modeling, execution, and control. Furthermore, the impact on transferability was investigated. RESULTS: Compared to BPMN, CMMN allows flexibility for modeling intraoperative processes while remaining understandable. Although more effort and process knowledge are needed for execution and control within a situation recognition system, CMMN enables better transferability of the models and therefore of the system. In conclusion, CMMN should be chosen as a supplement to BPMN for flexible process parts that BPMN covers only insufficiently, or otherwise as a replacement for the entire process. CONCLUSION: CMMN offers the flexibility needed for variable, weakly structured process parts and is thus suitable for surgical interventions. A combination of both notations could allow optimal use of their advantages and support the transferability of the situation recognition system.
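The BPMN/CMMN distinction is essentially fixed control flow versus condition-enabled tasks whose order emerges at run time. A minimal sketch of a CMMN-like case execution (plain Python with hypothetical task names; not the authors' execution engine):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CaseTask:
    name: str
    entry_condition: Callable[[set], bool]  # enabled once this holds over completed tasks
    completed: bool = False

def run_case(tasks: list[CaseTask], choose: Callable[[list[CaseTask]], CaseTask]) -> list[str]:
    # CMMN-like execution: at every step any enabled task may fire, so the order
    # is decided at run time (a BPMN-like model would fix it in advance).
    done: set = set()
    trace: list[str] = []
    while len(done) < len(tasks):
        enabled = [t for t in tasks if not t.completed and t.entry_condition(done)]
        if not enabled:
            break  # no task can start: the case is blocked
        task = choose(enabled)
        task.completed = True
        done.add(task.name)
        trace.append(task.name)
    return trace

# Hypothetical intraoperative fragment: irrigation and haemostasis may happen in
# any order once dissection is finished; closure requires both.
tasks = [
    CaseTask("dissection", lambda d: True),
    CaseTask("irrigation", lambda d: "dissection" in d),
    CaseTask("haemostasis", lambda d: "dissection" in d),
    CaseTask("closure", lambda d: {"irrigation", "haemostasis"} <= d),
]
print(run_case(tasks, choose=lambda enabled: enabled[0]))
```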
Affiliation(s)
- Denise Junger: School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Reutlingen, Germany
- Elisaveta Just: School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Reutlingen, Germany
- Johanna M Brandenburg: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany; National Center for Tumor Diseases Heidelberg, Heidelberg, Germany
- Martin Wagner: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany; National Center for Tumor Diseases Heidelberg, Heidelberg, Germany; Center for the Tactile Internet with Human in the Loop (CeTI), Technical University Dresden, Dresden, Germany
- Katharina Schaumann: Department of Otorhinolaryngology, University Hospital Düsseldorf, Düsseldorf, Germany
- Thomas Klenzner: Department of Otorhinolaryngology, University Hospital Düsseldorf, Düsseldorf, Germany
- Oliver Burgert: School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Reutlingen, Germany
3. Tao R, Zou X, Zheng G. LAST: LAtent Space-Constrained Transformers for Automatic Surgical Phase Recognition and Tool Presence Detection. IEEE Trans Med Imaging 2023;42:3256-3268. PMID: 37227905; DOI: 10.1109/tmi.2023.3279838.
Abstract
When developing context-aware systems, automatic surgical phase recognition and tool presence detection are two essential tasks. Previous attempts to address both tasks exist, but the majority of existing methods use a frame-level loss function (e.g., cross-entropy) that does not fully leverage the underlying semantic structure of a surgery, leading to sub-optimal results. In this paper, we propose multi-task learning-based, LAtent Space-constrained Transformers, referred to as LAST, for automatic surgical phase recognition and tool presence detection. Our design features a two-branch transformer architecture with a novel and generic way to leverage video-level semantic information during network training. This is done by learning a non-linear, compact representation of the underlying semantic structure of surgical videos through a transformer variational autoencoder (VAE) and by encouraging models to follow the learned statistical distributions. In other words, LAST is structure-aware and favors predictions that lie on the extracted low-dimensional data manifold. Validated on two public cholecystectomy datasets, Cholec80 and M2cai16, our method achieves better results than other state-of-the-art methods. Specifically, on the Cholec80 dataset, our method achieves an average accuracy of 93.12±4.71%, an average precision of 89.25±5.49%, an average recall of 90.10±5.45%, and an average Jaccard of 81.11±7.62% for phase recognition, and an average mAP of 95.15±3.87% for tool presence detection. Similarly superior performance is observed when LAST is applied to the M2cai16 dataset.
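One way to read the latent-space constraint: a sequence VAE is fitted to plausible phase timelines, and the recognition network is penalised when its predictions reconstruct poorly through that VAE. A minimal PyTorch sketch of this idea (shapes and the loss form are assumptions for illustration, not the paper's exact formulation):

```python
import torch
import torch.nn as nn

class SeqVAE(nn.Module):
    """Toy sequence VAE over soft phase-label sequences of shape (B, seq_len, n_classes)."""
    def __init__(self, n_classes: int = 7, seq_len: int = 128, latent: int = 32):
        super().__init__()
        d = n_classes * seq_len
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, d))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return self.dec(z).view_as(x), mu, logvar

def latent_constraint_loss(vae: SeqVAE, pred_probs: torch.Tensor) -> torch.Tensor:
    # Reconstruction error of the predictions through the (ideally frozen) VAE:
    # low when the predicted phase sequence lies near the manifold of plausible
    # surgical timelines the VAE has learned.
    recon, _, _ = vae(pred_probs)
    return nn.functional.mse_loss(recon, pred_probs)
```

This term would be added to the usual frame-level cross-entropy, pulling predictions toward the learned manifold.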
4. Cao J, Yip HC, Chen Y, Scheppach M, Luo X, Yang H, Cheng MK, Long Y, Jin Y, Chiu PWY, Yam Y, Meng HML, Dou Q. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun 2023;14:6676. PMID: 37865629; PMCID: PMC10590425; DOI: 10.1038/s41467-023-42451-8.
Abstract
Recent advancements in artificial intelligence have witnessed human-level performance; however, AI-enabled cognitive assistance for therapeutic procedures has not been fully explored nor pre-clinically validated. Here we propose AI-Endo, an intelligent surgical workflow recognition suite for endoscopic submucosal dissection (ESD). AI-Endo is trained on high-quality ESD cases from an expert endoscopist, spanning a decade and consisting of 201,026 labeled frames. The learned model demonstrates outstanding performance on validation data, including cases from relatively junior endoscopists with various skill levels, procedures conducted with different endoscopy systems and therapeutic skills, and cohorts from international multi-centers. Furthermore, we integrate AI-Endo with the Olympus endoscopic system and validate the AI-enabled cognitive assistance system in animal studies during live ESD training sessions. Dedicated data analysis of the surgical phase recognition results is summarized in an automatically generated report for skill assessment.
Affiliation(s)
- Jianfeng Cao: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hon-Chi Yip: Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
- Yueyao Chen: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Markus Scheppach: Internal Medicine III-Gastroenterology, University Hospital of Augsburg, Augsburg, Germany
- Xiaobei Luo: Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Hongzheng Yang: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Ming Kit Cheng: Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yonghao Long: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yueming Jin: Department of Biomedical Engineering, National University of Singapore, Singapore
- Philip Wai-Yan Chiu: Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China
- Yeung Yam: Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China; Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China; Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China
- Helen Mei-Ling Meng: Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
5. Nwoye CI, Yu T, Sharma S, Murali A, Alapatt D, Vardazaryan A, Yuan K, Hajek J, Reiter W, Yamlahi A, Smidt FH, Zou X, Zheng G, Oliveira B, Torres HR, Kondo S, Kasai S, Holm F, Özsoy E, Gui S, Li H, Raviteja S, Sathish R, Poudel P, Bhattarai B, Wang Z, Rui G, Schellenberg M, Vilaça JL, Czempiel T, Wang Z, Sheet D, Thapa SK, Berniker M, Godau P, Morais P, Regmi S, Tran TN, Fonseca J, Nölke JH, Lima E, Vazquez E, Maier-Hein L, Navab N, Mascagni P, Seeliger B, Gonzalez C, Mutter D, Padoy N. CholecTriplet2022: Show me a tool and tell me the triplet - An endoscopic vision challenge for surgical action triplet detection. Med Image Anal 2023;89:102888. PMID: 37451133; DOI: 10.1016/j.media.2023.102888.
Abstract
Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold-standard approach for surgical activity modeling. The benefit is that this formalization yields a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts, including the CholecTriplet challenge introduced in 2021, have put together techniques aimed at recognizing these triplets from surgical footage. Also estimating the spatial locations of the triplets would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding-box localization of every visible surgical instrument (or tool), as the key actor, and the modeling of each tool activity in the form of an ‹instrument, verb, target› triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods and an in-depth analysis of the results across multiple metrics and across visual and procedural challenges, discusses their significance, and identifies useful insights for future research directions and applications in surgery.
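For concreteness, a detection in this formulation pairs a triplet label with an instrument box and a confidence score. A minimal sketch of such a record (illustrative names and fields, not the challenge's data format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TripletDetection:
    instrument: str                              # e.g. "grasper"
    verb: str                                    # e.g. "retract"
    target: str                                  # e.g. "gallbladder"
    box: tuple[float, float, float, float]       # normalised (x, y, w, h) of the instrument
    score: float                                 # detection confidence in [0, 1]

def filter_detections(dets: list[TripletDetection], thr: float = 0.5) -> list[TripletDetection]:
    # Keep confident detections; scoring considers both the triplet label and the
    # localisation of the instrument that anchors it.
    return [d for d in dets if d.score >= thr]
```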
Affiliation(s)
- Tong Yu: ICube, University of Strasbourg, CNRS, France
- Kun Yuan: ICube, University of Strasbourg, CNRS, France; Technical University Munich, Germany
- Amine Yamlahi: Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Finn-Henri Smidt: Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Xiaoyang Zou: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Guoyan Zheng: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Bruno Oliveira: 2Ai School of Technology, IPCA, Barcelos, Portugal; Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Helena R Torres: 2Ai School of Technology, IPCA, Barcelos, Portugal; Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Ege Özsoy: Technical University Munich, Germany
- Han Li: Southern University of Science and Technology, China
- Melanie Schellenberg: Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Zhenkun Wang: Southern University of Science and Technology, China
- Shrawan Kumar Thapa: Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
- Patrick Godau: Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Pedro Morais: 2Ai School of Technology, IPCA, Barcelos, Portugal
- Sudarshan Regmi: Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
- Thuy Nuong Tran: Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jaime Fonseca: Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Jan-Hinrich Nölke: Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Estevão Lima: Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal
- Lena Maier-Hein: Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pietro Mascagni: Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Barbara Seeliger: ICube, University of Strasbourg, CNRS, France; University Hospital of Strasbourg, France; IHU Strasbourg, France
- Didier Mutter: University Hospital of Strasbourg, France; IHU Strasbourg, France
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, France
6. Ramesh S, Dall'Alba D, Gonzalez C, Yu T, Mascagni P, Mutter D, Marescaux J, Fiorini P, Padoy N. Weakly Supervised Temporal Convolutional Networks for Fine-Grained Surgical Activity Recognition. IEEE Trans Med Imaging 2023;42:2592-2602. PMID: 37030859; DOI: 10.1109/tmi.2023.3262847.
Abstract
Automatic recognition of fine-grained surgical activities, called steps, is a challenging but crucial task for intelligent intra-operative computer assistance. The development of current vision-based activity recognition methods relies heavily on a high volume of manually annotated data, which is difficult and time-consuming to generate and requires domain-specific knowledge. In this work, we propose to use coarser and easier-to-annotate activity labels, namely phases, as weak supervision to learn step recognition with fewer step-annotated videos. We introduce a step-phase dependency loss to exploit the weak supervision signal. We then employ a Single-Stage Temporal Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an end-to-end fashion from weakly annotated videos, for temporal activity segmentation and recognition. We extensively evaluate the proposed method and show its effectiveness on a large video dataset consisting of 40 laparoscopic gastric bypass procedures and on the public benchmark CATARACTS, containing 50 cataract surgeries.
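A step-phase dependency loss can be pictured as tying the two prediction heads together through the known step-to-phase hierarchy. A minimal PyTorch sketch of one plausible form of such a loss (an assumption for illustration, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def step_phase_dependency_loss(step_logits: torch.Tensor,
                               phase_logits: torch.Tensor,
                               M: torch.Tensor) -> torch.Tensor:
    # step_logits: (B, n_steps); phase_logits: (B, n_phases)
    # M: (n_steps, n_phases) one-hot mapping of each step to its parent phase.
    step_probs = step_logits.softmax(dim=-1)
    implied_phase = step_probs @ M                     # marginalise steps into phases
    phase_logprobs = F.log_softmax(phase_logits, dim=-1)
    # KL(implied_phase || predicted phase) pulls the two heads into agreement,
    # so phase labels can supervise the step head even without step annotations.
    return F.kl_div(phase_logprobs, implied_phase, reduction="batchmean")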
7. Ramesh S, Dall'Alba D, Gonzalez C, Yu T, Mascagni P, Mutter D, Marescaux J, Fiorini P, Padoy N. TRandAugment: temporal random augmentation strategy for surgical activity recognition from videos. Int J Comput Assist Radiol Surg 2023;18:1665-1672. PMID: 36944845; PMCID: PMC10491694; DOI: 10.1007/s11548-023-02864-8.
Abstract
PURPOSE: Automatic recognition of surgical activities from intraoperative surgical videos is crucial for developing intelligent support systems for computer-assisted interventions. Current state-of-the-art recognition methods are based on deep learning, where data augmentation has shown the potential to improve the generalization of these methods. This has spurred work on automated and simplified augmentation strategies for image classification and object detection on datasets of still images. Extending such augmentation methods to videos is not straightforward, as the temporal dimension needs to be considered. Furthermore, surgical videos pose additional challenges as they are composed of multiple, interconnected, long-duration activities. METHODS: This work proposes a new simplified augmentation method, called TRandAugment, specifically designed for long surgical videos, that treats each video as an assembly of temporal segments and applies consistent but random transformations to each segment. The proposed augmentation method is used to train an end-to-end spatiotemporal model consisting of a CNN (ResNet50) followed by a TCN. RESULTS: The effectiveness of the proposed method is demonstrated on two surgical video datasets, namely Bypass40 and CATARACTS, and two tasks, surgical phase and step recognition. TRandAugment adds a performance boost of 1-6% over previous state-of-the-art methods that use manually designed augmentations. CONCLUSION: This work presents a simplified and automated augmentation method for long surgical videos. The proposed method has been validated on different datasets and tasks, indicating the importance of devising temporal augmentation methods for long surgical videos.
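The core mechanism described in the abstract, one random transform with fixed parameters applied consistently across each temporal segment, can be sketched in a few lines. A minimal version (torchvision-style transforms; the operation set and magnitudes are illustrative assumptions, not the paper's search space):

```python
import random
import torch
import torchvision.transforms.functional as TF

def trand_augment(frames: torch.Tensor, n_segments: int = 4) -> torch.Tensor:
    # frames: (T, C, H, W) video tensor in [0, 1]
    ops = [
        lambda x, m: TF.adjust_brightness(x, 1.0 + m),
        lambda x, m: TF.adjust_contrast(x, 1.0 + m),
        lambda x, m: TF.rotate(x, 30.0 * m),
    ]
    out = frames.clone()
    T = frames.shape[0]
    bounds = [round(s * T / n_segments) for s in range(n_segments + 1)]
    for s in range(n_segments):
        op = random.choice(ops)            # one transform per segment...
        mag = random.uniform(-0.5, 0.5)    # ...with one sampled magnitude,
        for t in range(bounds[s], bounds[s + 1]):
            out[t] = op(out[t], mag)       # ...applied consistently to all its frames
    return out
```

Keeping parameters fixed within a segment preserves short-range temporal coherence, while varying them across segments still diversifies the long video.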
Affiliation(s)
- Sanat Ramesh: Altair Robotics Lab, University of Verona, 37134 Verona, Italy; ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France
- Diego Dall'Alba: Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- Cristians Gonzalez: University Hospital of Strasbourg, 67000 Strasbourg, France; Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
- Tong Yu: ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France
- Pietro Mascagni: Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France; Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
- Didier Mutter: University Hospital of Strasbourg, 67000 Strasbourg, France; IRCAD, 67000 Strasbourg, France; Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
- Paolo Fiorini: Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France; Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
8. Pore A, Li Z, Dall'Alba D, Hernansanz A, De Momi E, Menciassi A, Casals Gelpí A, Dankelman J, Fiorini P, Poorten EV. Autonomous Navigation for Robot-Assisted Intraluminal and Endovascular Procedures: A Systematic Review. IEEE Trans Robot 2023;39:2529-2548. DOI: 10.1109/tro.2023.3269384.
Affiliation(s)
- Ameya Pore: Department of Computer Science, University of Verona, Verona, Italy
- Zhen Li: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Diego Dall'Alba: Department of Computer Science, University of Verona, Verona, Italy
- Albert Hernansanz: Center of Research in Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain
- Elena De Momi: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Alicia Casals Gelpí: Center of Research in Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain
- Jenny Dankelman: Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- Paolo Fiorini: Department of Computer Science, University of Verona, Verona, Italy
9. Nwoye CI, Alapatt D, Yu T, Vardazaryan A, Xia F, Zhao Z, Xia T, Jia F, Yang Y, Wang H, Yu D, Zheng G, Duan X, Getty N, Sanchez-Matilla R, Robu M, Zhang L, Chen H, Wang J, Wang L, Zhang B, Gerats B, Raviteja S, Sathish R, Tao R, Kondo S, Pang W, Ren H, Abbing JR, Sarhan MH, Bodenstedt S, Bhasker N, Oliveira B, Torres HR, Ling L, Gaida F, Czempiel T, Vilaça JL, Morais P, Fonseca J, Egging RM, Wijma IN, Qian C, Bian G, Li Z, Balasubramanian V, Sheet D, Luengo I, Zhu Y, Ding S, Aschenbrenner JA, van der Kar NE, Xu M, Islam M, Seenivasan L, Jenke A, Stoyanov D, Mutter D, Mascagni P, Seeliger B, Gonzalez C, Padoy N. CholecTriplet2021: A benchmark challenge for surgical action triplet recognition. Med Image Anal 2023;86:102803. PMID: 37004378; DOI: 10.1016/j.media.2023.102803.
Abstract
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out the fine-grained interaction details of the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as ‹instrument, verb, target› triplets delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented, recognizing surgical action triplets directly from surgical videos and achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
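Ensembling challenge submissions is often done by late fusion of per-frame class probabilities; a minimal sketch of that generic recipe (an assumption for illustration; the paper's ensemble method may differ):

```python
import numpy as np

def ensemble_mean(prob_maps: list[np.ndarray]) -> np.ndarray:
    # prob_maps: one array per model, each of shape (n_frames, n_triplet_classes),
    # values in [0, 1]. Averaging lets models confident on different triplet
    # classes complement one another before ranking or thresholding.
    return np.stack(prob_maps, axis=0).mean(axis=0)
```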
10. Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023;20:171-182. PMID: 36352158; DOI: 10.1038/s41575-022-00701-y.
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset allows video capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as event, activity, and action logs of the surgical process. This detailed but difficult-to-interpret record of endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems that assist procedures, leading to computer-assisted interventions that enable better navigation, automation of image interpretation, and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
11. Surgical declarative knowledge learning: concept and acceptability study. Comput Assist Surg (Abingdon) 2022;27:74-83. PMID: 35727207; DOI: 10.1080/24699322.2022.2086484.
Abstract
Improving surgical training by means of technology assistance is an important challenge that aims to directly impact surgical quality. Surgical training includes the acquisition of two categories of knowledge: declarative knowledge (i.e., 'knowing what') and procedural knowledge (i.e., 'knowing how'). It is essential to acquire both before performing any particular surgery. There are currently many tools for acquiring procedural knowledge, such as simulators. However, few approaches or tools allow a trainer to formalize and record surgical declarative knowledge, and a trainee to access it easily. In this paper, we propose an approach for structuring surgical declarative knowledge according to procedural knowledge, based on surgical process modeling. A dedicated software application has been implemented. We evaluated the concept and the software usability on two procedures with different medical populations: endoscopic third ventriculostomy, involving 6 neurosurgeons, and preparation of a surgical table for craniotomy, involving 4 scrub nurses. The results of both studies show that surgical process models could be a well-adapted approach for structuring and visualizing surgical declarative knowledge. The software application was perceived by neurosurgeons and scrub nurses as an innovative tool for managing and presenting surgical knowledge. The preliminary results show the feasibility of the proposed approach and the acceptability and usability of the corresponding software. Future experiments will study the impact of such an approach on knowledge acquisition.
12. Neumann J, Uciteli A, Meschke T, Bieck R, Franke S, Herre H, Neumuth T. Ontology-based surgical workflow recognition and prediction. J Biomed Inform 2022;136:104240. DOI: 10.1016/j.jbi.2022.104240.
13. Nwoye CI, Yu T, Gonzalez C, Seeliger B, Mascagni P, Mutter D, Marescaux J, Padoy N. Rendezvous: Attention mechanisms for the recognition of surgical action triplets in endoscopic videos. Med Image Anal 2022;78:102433. PMID: 35398658; DOI: 10.1016/j.media.2022.102433.
Abstract
Out of all existing frameworks for surgical workflow analysis in endoscopic videos, action triplet recognition stands out as the only one aiming to provide truly fine-grained and comprehensive information on surgical activities. This information, presented as 〈instrument, verb, target〉 combinations, is highly challenging to identify accurately. Triplet components can be difficult to recognize individually; the task requires not only recognizing all three triplet components simultaneously but also correctly establishing the data association between them. To achieve this, we introduce a new model, the Rendezvous (RDV), which recognizes triplets directly from surgical videos by leveraging attention at two different levels. We first introduce a new form of spatial attention to capture individual action triplet components in a scene, called the Class Activation Guided Attention Mechanism (CAGAM). This technique focuses on the recognition of verbs and targets using activations resulting from instruments. To solve the association problem, our RDV model adds a new form of semantic attention inspired by Transformer networks, called the Multi-Head of Mixed Attention (MHMA). This technique uses several cross- and self-attentions to effectively capture relationships between instruments, verbs, and targets. We also introduce CholecT50, a dataset of 50 endoscopic videos in which every frame has been annotated with labels from 100 triplet classes. Our proposed RDV model improves the triplet prediction mAP by over 9% compared to state-of-the-art methods on this dataset.
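The guiding intuition, instrument evidence steering verb and target recognition, can be sketched as cross-attention from a verb (or target) stream onto instrument features. A minimal PyTorch sketch (illustrative shapes and names, not the released Rendezvous code):

```python
import torch
import torch.nn as nn

class InstrumentGuidedAttention(nn.Module):
    """Verb/target tokens attend to instrument tokens (cross-attention)."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, verb_feats: torch.Tensor, instr_feats: torch.Tensor) -> torch.Tensor:
        # verb_feats, instr_feats: (B, tokens, dim). Queries come from the verb
        # stream; keys and values from the instrument stream, so "who is acting"
        # guides "what is being done to what".
        attended, _ = self.cross(verb_feats, instr_feats, instr_feats)
        return attended + verb_feats   # residual connection
```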
Affiliation(s)
- Tong Yu: ICube, University of Strasbourg, CNRS, France
- Barbara Seeliger: IHU Strasbourg, France; University Hospital of Strasbourg, France
- Pietro Mascagni: ICube, University of Strasbourg, CNRS, France; Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Didier Mutter: IHU Strasbourg, France; University Hospital of Strasbourg, France
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, France
14. Junger D, Frommer SM, Burgert O. State-of-the-art of situation recognition systems for intraoperative procedures. Med Biol Eng Comput 2022;60:921-939. PMID: 35178622; PMCID: PMC8933302; DOI: 10.1007/s11517-022-02520-4.
Abstract
One of the key challenges for automatic assistance is supporting the actors in the operating room depending on the status of the procedure. To this end, context information collected in the operating room is used to gain knowledge about the current situation. In the literature, solutions already exist for specific use cases, but it is doubtful to what extent these approaches can be transferred to other conditions. We conducted a comprehensive literature survey of existing situation recognition systems for the intraoperative area, covering 274 articles and 95 cross-references published between 2010 and 2019. We contrasted and compared 58 identified approaches based on defined aspects such as the sensor data used or the application area. In addition, we discussed applicability and transferability. Most of the papers focus on video data for recognizing situations within laparoscopic and cataract surgeries. Not all of the approaches can be used online for real-time recognition. Using different methods, good results with recognition accuracies above 90% could be achieved. Overall, transferability is less addressed; the applicability of approaches to other circumstances seems possible only to a limited extent. Future research should place a stronger focus on adaptability. The literature review shows differences within existing approaches for situation recognition and outlines research trends; applicability and transferability to other conditions are less addressed in current work.
Affiliation(s)
- D Junger: School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Alteburgstr. 150, 72762 Reutlingen, Germany
- S M Frommer: School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Alteburgstr. 150, 72762 Reutlingen, Germany
- O Burgert: School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Alteburgstr. 150, 72762 Reutlingen, Germany
15. Carrillo F, Esfandiari H, Müller S, von Atzigen M, Massalimova A, Suter D, Laux CJ, Spirig JM, Farshad M, Fürnstahl P. Surgical Process Modeling for Open Spinal Surgeries. Front Surg 2022;8:776945. PMID: 35145990; PMCID: PMC8821818; DOI: 10.3389/fsurg.2021.776945.
Abstract
Modern operating rooms are becoming increasingly advanced thanks to emerging medical technologies and cutting-edge surgical techniques. Current surgeries are transitioning into complex processes that involve information and actions from multiple resources. When designing context-aware medical technologies for a given intervention, it is of utmost importance to have a deep understanding of the underlying surgical process. This is essential to develop technologies that can correctly address clinical needs and adapt to the existing workflow. Surgical Process Modeling (SPM) is a relatively recent discipline that focuses on achieving a profound understanding of the surgical workflow and providing a model that explains the elements of a given surgery as well as their sequence and hierarchy, in both a quantitative and a qualitative manner. To date, a significant body of work has been dedicated to the development of comprehensive SPMs for minimally invasive laparoscopic and endoscopic surgeries, while such models are missing for open spinal surgeries. In this paper, we provide SPMs for common open spinal interventions in orthopedics. Direct video observations of surgeries conducted in our institution were used to derive temporal and transitional information about the surgical activities. This information was then used to develop detailed SPMs that modeled different primary surgical steps and highlighted the frequency of transitions between the surgical activities within each step. Given the recent emergence of advanced techniques tailored to open spinal surgeries (e.g., artificial intelligence methods for intraoperative guidance and navigation), we believe that the SPMs provided in this study can serve as the basis for further advancement of next-generation algorithms dedicated to open spinal interventions that require a profound understanding of the surgical workflow (e.g., automatic surgical activity recognition and surgical skill evaluation). Furthermore, the models provided in this study can potentially benefit the clinical community through standardization of the surgery, which is essential for surgical training.
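The transitional information such models visualise reduces, at its core, to counting how often one observed activity directly follows another. A minimal sketch (Python 3.10+ for itertools.pairwise; the activity labels are hypothetical):

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def transition_counts(activity_log: list[str]) -> Counter:
    # Counts how often activity a is directly followed by activity b
    # in a time-ordered log of annotated observations.
    return Counter(pairwise(activity_log))

log = ["exposure", "drilling", "screw placement", "drilling", "screw placement", "closure"]
for (a, b), n in transition_counts(log).most_common():
    print(f"{a} -> {b}: {n}")
```

Normalising each row of the resulting count table by its total yields the transition probabilities an SPM diagram typically displays on its edges.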
Affiliation(s)
- Fabio Carrillo: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Hooman Esfandiari (corresponding author): Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Sandro Müller: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Marco von Atzigen: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Laboratory for Orthopaedic Biomechanics, Institute for Biomechanics, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
- Aidana Massalimova: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Daniel Suter: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Christoph J. Laux: Department of Orthopaedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- José M. Spirig: Department of Orthopaedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Mazda Farshad: Department of Orthopaedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Philipp Fürnstahl: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Department of Orthopaedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
16. Uncharted Waters of Machine and Deep Learning for Surgical Phase Recognition in Neurosurgery. World Neurosurg 2022;160:4-12. PMID: 35026457; DOI: 10.1016/j.wneu.2022.01.020.
Abstract
Recent years have witnessed artificial intelligence (AI) make meteoric leaps in both medicine and surgery, bridging the gap between the capabilities of humans and machines. The digitization of operating rooms and the creation of massive quantities of data have paved the way for machine learning and computer vision applications in surgery. Surgical phase recognition (SPR) is a newly emerging technology that uses data derived from operative videos to train machine and deep learning algorithms to identify the phases of surgery. Advancement of this technology will be key to establishing context-aware surgical systems in the future. By automatically recognizing and evaluating the current surgical scenario, these intelligent systems are able to provide intraoperative decision support, improve operating room efficiency, assess surgical skills, and aid in surgical training and education. Still in its infancy, SPR has mainly been studied in laparoscopic surgeries, with a stark lack of research in neurosurgery. Given the high-tech and rapidly advancing nature of neurosurgery, we believe SPR has tremendous untapped potential in this field. Herein, we present an overview of SPR technology, its potential applications in neurosurgery, and the challenges that lie ahead.
17. Review of automated performance metrics to assess surgical technical skills in robot-assisted laparoscopy. Surg Endosc 2021;36:853-870. PMID: 34750700; DOI: 10.1007/s00464-021-08792-5.
Abstract
INTRODUCTION: Robot-assisted laparoscopy is a safe surgical approach, with several studies suggesting correlations between complication rates and the surgeon's technical skills. Surgical skills are usually assessed by questionnaires completed by an expert observer. With the advent of surgical robots, automated surgical performance metrics (APMs), objective measures derived from instrument movements, can be computed. The aim of this systematic review was thus to assess the use of APMs in robot-assisted laparoscopic procedures. The primary outcome was the assessment of surgical skills by APMs; the secondary outcomes were the association between APMs and surgeon parameters and the prediction of clinical outcomes. METHODS: A systematic review following the PRISMA guidelines was conducted. The PubMed and Scopus electronic databases were screened with the query "robot-assisted surgery OR robotic surgery AND performance metrics" between January 2010 and January 2021. The quality of the studies was assessed with the Medical Education Research Study Quality Instrument. The study settings, metrics, and applications were analysed. RESULTS: The initial search yielded 341 citations, of which 16 studies were finally included. The study settings were either simulated virtual reality (VR) (4 studies) or real clinical environments (12 studies). The data used to compute APMs were kinematics (motion tracking) and system and event data (actions from the robot console). APMs were used to differentiate expertise levels (and thus validate VR modules), predict outcomes, and integrate datasets for automatic recognition models. APMs correlated with clinical outcomes in some studies. CONCLUSIONS: APMs constitute an objective approach to assessing technical skills. Evidence of associations between APMs and clinical outcomes remains to be confirmed by further studies, particularly for non-urological procedures. Concurrent validation is also required.
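Kinematics-derived APMs are typically simple functionals of the instrument-tip trajectory. A minimal sketch of two common ones, path length and economy of motion (generic definitions for illustration, not any specific study's implementation):

```python
import numpy as np

def path_length(positions: np.ndarray) -> float:
    # positions: (T, 3) instrument-tip coordinates sampled over time.
    # Sum of Euclidean distances between consecutive samples.
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

def economy_of_motion(positions: np.ndarray) -> float:
    # Ratio of straight-line displacement to travelled path; 1.0 is maximally direct.
    straight = float(np.linalg.norm(positions[-1] - positions[0]))
    total = path_length(positions)
    return straight / total if total > 0 else 0.0
```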
18. Ramesh S, Dall'Alba D, Gonzalez C, Yu T, Mascagni P, Mutter D, Marescaux J, Fiorini P, Padoy N. Multi-task temporal convolutional networks for joint recognition of surgical phases and steps in gastric bypass procedures. Int J Comput Assist Radiol Surg 2021;16:1111-1119. PMID: 34013464; PMCID: PMC8260406; DOI: 10.1007/s11548-021-02388-z.
Abstract
PURPOSE: Automatic segmentation and classification of surgical activity is crucial for providing advanced support in computer-assisted interventions and autonomous functionalities in robot-assisted surgeries. Prior works have focused on recognizing either coarse activities, such as phases, or fine-grained activities, such as gestures. This work aims at jointly recognizing two complementary levels of granularity directly from videos, namely phases and steps. METHODS: We introduce two correlated surgical activities, phases and steps, for the laparoscopic gastric bypass procedure. We propose a multi-task multi-stage temporal convolutional network (MTMS-TCN) along with a multi-task convolutional neural network (CNN) training setup to jointly predict the phases and steps and benefit from their complementarity to better evaluate the execution of the procedure. We evaluate the proposed method on a large video dataset consisting of 40 surgical procedures (Bypass40). RESULTS: We present experimental results from several baseline models for both phase and step recognition on Bypass40. The proposed MTMS-TCN method outperforms single-task methods in both phase and step recognition by 1-2% in accuracy, precision, and recall. Furthermore, for step recognition, MTMS-TCN achieves a 3-6% improvement over LSTM-based models on all metrics. CONCLUSION: In this work, we present a multi-task multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on a gastric bypass dataset with multi-level annotations. The proposed method shows that the joint modeling of phases and steps is beneficial for improving the overall recognition of each type of activity.
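Architecturally, the multi-task idea amounts to a shared temporal backbone with one classification head per granularity, trained with a summed loss. A minimal PyTorch sketch (layer sizes and class counts are illustrative assumptions, and the two-layer backbone stands in for the paper's multi-stage TCN):

```python
import torch
import torch.nn as nn

class MultiTaskTCN(nn.Module):
    def __init__(self, in_dim: int = 2048, hidden: int = 64,
                 n_phases: int = 11, n_steps: int = 44):
        super().__init__()
        # Dilated 1D convolutions over time; padding keeps the sequence length.
        self.backbone = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
        )
        self.phase_head = nn.Conv1d(hidden, n_phases, kernel_size=1)
        self.step_head = nn.Conv1d(hidden, n_steps, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, in_dim, T) per-frame CNN features of a video.
        h = self.backbone(feats)
        return self.phase_head(h), self.step_head(h)

# Joint training simply sums the two per-frame cross-entropy losses:
# loss = ce(phase_logits, phase_gt) + ce(step_logits, step_gt)
```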
Affiliation(s)
- Sanat Ramesh: Altair Robotics Lab, Department of Computer Science, University of Verona, Verona, Italy; ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Diego Dall'Alba: Altair Robotics Lab, Department of Computer Science, University of Verona, Verona, Italy
- Tong Yu: ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Pietro Mascagni: ICube, University of Strasbourg, CNRS, IHU Strasbourg, France; Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Didier Mutter: University Hospital of Strasbourg, IHU Strasbourg, France; IRCAD, Strasbourg, France
- Paolo Fiorini: Altair Robotics Lab, Department of Computer Science, University of Verona, Verona, Italy
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
19. A learning robot for cognitive camera control in minimally invasive surgery. Surg Endosc 2021;35:5365-5374. PMID: 33904989; PMCID: PMC8346448; DOI: 10.1007/s00464-021-08509-8.
Abstract
BACKGROUND: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. The majority of surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance, but they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons. METHODS: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. A ViKY EP system and a KUKA LWR 4 robot were first trained on data from manual camera guidance after completion of the surgeon's learning curve. Second, only data from the ViKY EP were used to train the LWR; finally, data from training with the LWR were used to re-train the LWR. RESULTS: The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s, and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%. CONCLUSIONS: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.
20. Garrow CR, Kowalewski KF, Li L, Wagner M, Schmidt MW, Engelhardt S, Hashimoto DA, Kenngott HG, Bodenstedt S, Speidel S, Müller-Stich BP, Nickel F. Machine Learning for Surgical Phase Recognition: A Systematic Review. Ann Surg 2021;273:684-693. PMID: 33201088; DOI: 10.1097/sla.0000000000004425.
Abstract
OBJECTIVE: To provide an overview of ML models and data streams utilized for automated surgical phase recognition. BACKGROUND: Phase recognition identifies the different steps and phases of an operation. ML is an evolving technology that allows analysis and interpretation of huge data sets. Automation of phase recognition based on data inputs is essential for optimization of workflow, surgical training, intraoperative assistance, patient safety, and efficiency. METHODS: A systematic review was performed according to the Cochrane recommendations and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. PubMed, Web of Science, IEEE Xplore, Google Scholar, and CiteSeerX were searched. Literature describing phase recognition based on ML models and the capture of intraoperative signals during general surgery procedures was included. RESULTS: A total of 2254 titles/abstracts were screened, and 35 full texts were included. The most commonly used ML models were Hidden Markov Models and Artificial Neural Networks, with a trend towards higher complexity over time. The most frequently used data types were features learned from surgical videos and manual annotations of instrument use. Laparoscopic cholecystectomy was studied most commonly, often achieving accuracy rates over 90%, though there was no consistent standardization of defined phases. CONCLUSIONS: ML for surgical phase recognition can be performed with high accuracy, depending on the model, data type, and complexity of the surgery. Different intraoperative data inputs such as video and instrument type can successfully be used. Most ML models still require significant amounts of manual expert annotation for training. ML models may drive surgical workflow towards standardization, efficiency, and objectiveness to improve patient outcomes in the future. REGISTRATION: PROSPERO CRD42018108907.
Affiliation(s)
- Carly R Garrow: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Karl-Friedrich Kowalewski: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany; Department of Urology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Linhong Li: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Martin Wagner: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Mona W Schmidt: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Sandy Engelhardt: Department of Computer Science, Mannheim University of Applied Sciences, Mannheim, Germany
- Daniel A Hashimoto: Department of Surgery, Massachusetts General Hospital, Boston, Massachusetts
- Hannes G Kenngott: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Sebastian Bodenstedt: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
- Beat P Müller-Stich: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Felix Nickel: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
21. Bodenstedt S, Wagner M, Müller-Stich BP, Weitz J, Speidel S. Artificial Intelligence-Assisted Surgery: Potential and Challenges. Visc Med 2020;36:450-455. PMID: 33447600; PMCID: PMC7768095; DOI: 10.1159/000511351.
Abstract
BACKGROUND: Artificial intelligence (AI) has recently achieved considerable success in different domains, including medical applications. Although current advances are expected to impact surgery, AI has so far not been able to leverage its full potential due to several challenges specific to that field. SUMMARY: This review summarizes the data-driven methods and technologies needed as a prerequisite for different AI-based assistance functions in the operating room. Potential effects of AI usage in surgery are highlighted, concluding with the ongoing challenges to enabling AI for surgery. KEY MESSAGES: AI-assisted surgery will enable data-driven decision-making via decision support systems and cognitive robotic assistance. The use of AI for workflow analysis will help provide appropriate assistance in the right context. The requirements for such assistance must be defined by surgeons in close cooperation with computer scientists and engineers. Once the existing challenges have been solved, AI assistance has the potential to improve patient care by supporting the surgeon without replacing him or her.
Collapse
Affiliation(s)
- Sebastian Bodenstedt
- Division of Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Beat Peter Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Jürgen Weitz
- Department for Visceral, Thoracic and Vascular Surgery, University Hospital Carl-Gustav-Carus, TU Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| |
Collapse
|
22
|
Novel evaluation of surgical activity recognition models using task-based efficiency metrics. Int J Comput Assist Radiol Surg 2019; 14:2155-2163. [PMID: 31267333 DOI: 10.1007/s11548-019-02025-w] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2019] [Accepted: 06/26/2019] [Indexed: 01/14/2023]
Abstract
PURPOSE Surgical task-based metrics (rather than whole-procedure metrics) can be used to improve surgeon training and, ultimately, patient care through focused training interventions. Machine learning models that automatically recognize individual tasks or activities are needed to overcome the otherwise manual effort of video review. Traditionally, these models have been evaluated using frame-level accuracy. Here, we propose evaluating surgical activity recognition models by their effect on task-based efficiency metrics: in this way, we can determine when models have achieved adequate performance for providing surgeon feedback via metrics from individual tasks. METHODS We propose a new CNN-LSTM model, RP-Net-V2, to recognize the 12 steps of robot-assisted radical prostatectomy (RARP). We evaluated our model both with conventional methods (e.g., Jaccard Index, task boundary accuracy) and in novel ways, such as the accuracy of efficiency metrics computed from instrument movements and system events. RESULTS Our proposed model achieves a Jaccard Index of 0.85, thereby outperforming previous models on RARP. Additionally, we show that metrics computed from tasks automatically identified by RP-Net-V2 correlate well with metrics from tasks labeled by clinical experts. CONCLUSION We demonstrate that metrics-based evaluation of surgical activity recognition models is a viable approach for determining when models can be used to quantify surgical efficiencies. We believe this approach and our results illustrate the potential for fully automated, postoperative efficiency reports.
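As an illustration of the conventional evaluation mentioned above, the sketch below computes a frame-wise, per-phase Jaccard Index (intersection over union). The toy label arrays stand in for RARP annotations; they are not data from the study.

```python
import numpy as np

def per_phase_jaccard(y_true, y_pred, n_phases):
    """Jaccard index for each phase label over frame-wise annotations."""
    scores = []
    for k in range(n_phases):
        t, p = (y_true == k), (y_pred == k)
        union = np.logical_or(t, p).sum()
        inter = np.logical_and(t, p).sum()
        scores.append(inter / union if union else np.nan)  # nan: phase absent
    return np.array(scores)

y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2])  # expert frame labels (toy)
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 0])  # model predictions (toy)
scores = per_phase_jaccard(y_true, y_pred, n_phases=3)
print(scores, np.nanmean(scores))             # per-phase and mean Jaccard
```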
Collapse
|
23
|
Neumann J, Franke S, Rockstroh M, Kasparick M, Neumuth T. Extending BPMN 2.0 for intraoperative workflow modeling with IEEE 11073 SDC for description and orchestration of interoperable, networked medical devices. Int J Comput Assist Radiol Surg 2019; 14:1403-1413. [PMID: 31055764 DOI: 10.1007/s11548-019-01982-6] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Accepted: 04/16/2019] [Indexed: 11/29/2022]
Abstract
PURPOSE Surgical workflow management in integrated operating rooms (ORs) enables novel computer-aided surgical assistance and new applications in process automation, situation awareness, and decision support. Context-sensitive configuration and orchestration of interoperable, networked medical devices is a prerequisite for effectively reducing the surgeon's workload by providing the right service and the right information at the right time. Information about the surgical situation must be described as surgical process models and distributed to the medical devices and IT systems in the OR, yet available modeling languages cannot describe surgical processes for this application. METHODS In this work, the BPMNSIX modeling language for intraoperative processes is technically enhanced and implemented for workflow build-time and run-time. Particular attention is given to integrating the recently published IEEE 11073 SDC standard family for a service-oriented architecture of networked medical devices. In addition, interaction patterns for context-aware configuration and device orchestration are presented. RESULTS The identified interaction patterns were implemented in BPMNSIX for an ophthalmologic use case, demonstrating process-driven incorporation and control of device services. CONCLUSION Modeling surgical procedures with BPMNSIX allows the implementation of context-sensitive surgical assistance functionalities and enables flexibility in orchestrating dynamically changing device ensembles and integrating unknown devices into the surgical workflow management.
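The following sketch illustrates the orchestration idea in miniature: a workflow engine subscribes to state reports from networked device services and triggers the next configured action when a process step's condition is met. The class and method names are illustrative only; they do not reproduce the IEEE 11073 SDC or BPMNSIX interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceService:
    """A networked device that publishes metric updates to subscribers."""
    name: str
    state: dict = field(default_factory=dict)
    subscribers: list = field(default_factory=list)

    def report(self, metric, value):
        self.state[metric] = value
        for cb in self.subscribers:
            cb(self.name, metric, value)

class WorkflowEngine:
    """Steps through ordered (trigger, action) pairs of a process model."""
    def __init__(self, steps):
        self.steps = steps
        self.current = 0

    def on_event(self, device, metric, value):
        if self.current >= len(self.steps):
            return                       # process model completed
        trigger, action = self.steps[self.current]
        if trigger == (device, metric, value):
            self.current += 1
            action()                     # e.g. configure the next device

light = DeviceService("or_light")
pump = DeviceService("insufflator")
engine = WorkflowEngine(steps=[
    (("insufflator", "pressure_ok", True),
     lambda: light.report("mode", "laparoscopy")),
])
pump.subscribers.append(engine.on_event)
pump.report("pressure_ok", True)  # engine switches the light mode in response
print(light.state)                # {'mode': 'laparoscopy'}
```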
Collapse
Affiliation(s)
- Juliane Neumann
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany.
| | - Stefan Franke
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany
| | - Max Rockstroh
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany
| | - Martin Kasparick
- Institute of Applied Microelectronics and Computer Engineering, University of Rostock, Rostock, Germany
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany
| |
Collapse
|
24
|
Kasparick M, Andersen B, Franke S, Rockstroh M, Golatowski F, Timmermann D, Ingenerf J, Neumuth T. Enabling artificial intelligence in high acuity medical environments. MINIM INVASIV THER 2019; 28:120-126. [PMID: 30950665 DOI: 10.1080/13645706.2019.1599957] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Acute patient treatment can profit heavily from AI-based assistive and decision support systems, both in terms of improved patient outcomes and increased efficiency. Yet only a few applications have been reported, because device data remain poorly accessible owing to the limited adoption of open standards and because the regulatory and approval requirements for AI-based systems are complex. Data are still fragmented in isolated silos, limiting their accessibility for AI in healthcare, and machine learning is further complicated by the loss of semantics during data conversion. We outline a reference model that addresses the requirements of innovative AI-based research systems as well as the clinical reality. The integration of networked medical devices and Clinical Repositories based on open standards, such as IEEE 11073 SDC and HL7 FHIR, will foster novel assistance and decision support. The reference model will make point-of-care device data available for AI-based approaches. Semantic interoperability between Clinical and Research Repositories will allow patient data, device data, and patient outcomes to be correlated; thus, complete workflows in high-acuity environments can be analysed. Open semantic interoperability will enable improvements in patient outcome and efficiency on a large scale and across clinical applications.
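As a hedged illustration of the semantic-interoperability idea, the sketch below lifts a point-of-care device reading into an HL7 FHIR Observation resource. The LOINC code 8867-4 (heart rate) and the UCUM unit are real vocabulary entries; the patient and device references are placeholders.

```python
import json

# A device reading expressed as a FHIR Observation, so that clinical
# and research repositories share the same semantics.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-id"},    # placeholder
    "device": {"reference": "Device/or-monitor-1"},    # placeholder
    "effectiveDateTime": "2019-01-01T10:15:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}
print(json.dumps(observation, indent=2))
```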
Collapse
Affiliation(s)
- Martin Kasparick
- Institute of Applied Microelectronics and Computer Engineering (IMD), University of Rostock, Rostock, Germany
| | - Björn Andersen
- Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
| | - Stefan Franke
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| | - Max Rockstroh
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| | - Frank Golatowski
- Institute of Applied Microelectronics and Computer Engineering (IMD), University of Rostock, Rostock, Germany
| | - Dirk Timmermann
- Institute of Applied Microelectronics and Computer Engineering (IMD), University of Rostock, Rostock, Germany
| | - Josef Ingenerf
- Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| |
Collapse
|
25
|
Gholinejad M, J Loeve A, Dankelman J. Surgical process modelling strategies: which method to choose for determining workflow? MINIM INVASIV THER 2019; 28:91-104. [PMID: 30915885 DOI: 10.1080/13645706.2019.1591457] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
The vital role of surgery in healthcare requires constant attention to improvement. Surgical process modelling is an innovative and relatively recently introduced approach for tackling the issues of today's complex surgeries. The field is challenging and still under development; it is therefore not always clear which modelling strategy best fits which situation. The aim of this study was to provide a guide for matching modelling strategies to the task of determining surgical workflows. This work describes the concepts associated with surgical process modelling, aiming to clarify them and to promote their use in future studies. The relationships between these concepts and the possible combinations of suitable modelling approaches are elaborated, and the criteria for choosing the proper modelling strategy are discussed.
Collapse
Affiliation(s)
- Maryam Gholinejad
- Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
| | - Arjo J Loeve
- Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
| | - Jenny Dankelman
- Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
| |
Collapse
|
26
|
Nakawala H, Bianchi R, Pescatori LE, De Cobelli O, Ferrigno G, De Momi E. “Deep-Onto” network for surgical workflow and context recognition. Int J Comput Assist Radiol Surg 2018; 14:685-696. [DOI: 10.1007/s11548-018-1882-8] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2018] [Accepted: 11/05/2018] [Indexed: 12/31/2022]
|
27
|
Meeuwsen FC, van Luyn F, Blikkendaal MD, Jansen FW, van den Dobbelsteen JJ. Surgical phase modelling in minimal invasive surgery. Surg Endosc 2018; 33:1426-1432. [PMID: 30187202 PMCID: PMC6484813 DOI: 10.1007/s00464-018-6417-4] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Accepted: 08/31/2018] [Indexed: 12/04/2022]
Abstract
Background Surgical Process Modelling (SPM) offers the possibility to automatically gain insight into the surgical workflow, with the potential to improve OR logistics and surgical care. Most studies have focussed on phase recognition modelling of the laparoscopic cholecystectomy because of its standardized and frequent execution. To demonstrate the broad applicability of SPM, more diverse and complex procedures need to be studied. The aim of this study is to investigate the accuracy with which we can recognise and extract surgical phases in laparoscopic hysterectomies (LHs), which have inherent variability in procedure time. To show the applicability of the approach, the model was used to automatically predict surgical end-times. Methods A dataset of 40 video-recorded LHs was manually annotated for instrument use and divided into ten surgical phases. Instrument use provided the feature input for a Random Forest model trained to automatically recognise surgical phases. Tenfold cross-validation was performed to optimise the model for predicting the surgical end-time throughout the procedure. Results Average surgery time was 128 ± 27 min, with large variability within specific phases. Overall, the Random Forest model reaches an accuracy of 77% in recognising the current phase of the procedure. Six of the phases are predicted accurately over 80% of their duration. When predicting the surgical end-time, the model reaches an average error of 16 ± 13 min throughout the procedure. Conclusions This study demonstrates an intra-operative approach to recognising surgical phases in 40 laparoscopic hysterectomy cases based on instrument usage data. The model is capable of automatically detecting surgical phases and generating a solid prediction of the surgical end-time.
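A minimal sketch of the modelling approach described above, assuming binary instrument-usage vectors per video frame as features: a Random Forest predicts the phase label and is evaluated with tenfold cross-validation. The data here are random stand-ins for the annotated hysterectomy recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_frames, n_instruments, n_phases = 1000, 8, 10
X = rng.integers(0, 2, size=(n_frames, n_instruments))  # instrument on/off per frame
y = rng.integers(0, n_phases, size=n_frames)            # annotated phase per frame

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)              # tenfold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")            # ~chance on random stand-ins
```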
Collapse
Affiliation(s)
- F C Meeuwsen
- Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD, Delft, The Netherlands.
| | - F van Luyn
- Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD, Delft, The Netherlands
| | - M D Blikkendaal
- Department of Gynecology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333 ZA, Leiden, The Netherlands
| | - F W Jansen
- Department of Gynecology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333 ZA, Leiden, The Netherlands
| | - J J van den Dobbelsteen
- Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD, Delft, The Netherlands
| |
Collapse
|
28
|
Gibaud B, Forestier G, Feldmann C, Ferrigno G, Gonçalves P, Haidegger T, Julliard C, Katić D, Kenngott H, Maier-Hein L, März K, de Momi E, Nagy DÁ, Nakawala H, Neumann J, Neumuth T, Rojas Balderrama J, Speidel S, Wagner M, Jannin P. Toward a standard ontology of surgical process models. Int J Comput Assist Radiol Surg 2018; 13:1397-1408. [PMID: 30006820 DOI: 10.1007/s11548-018-1824-5] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Accepted: 07/05/2018] [Indexed: 12/15/2022]
Abstract
PURPOSE The development of common ontologies has recently been identified as one of the key challenges in the emerging field of surgical data science (SDS). However, past and existing initiatives in the domain of surgery have mainly focussed on individual groups and failed to achieve widespread international acceptance by the research community. To address this challenge, the authors of this paper launched a European initiative, the OntoSPM Collaborative Action, with the goal of establishing a framework for the joint development of ontologies in the field of SDS. This manuscript summarizes the goals and the current status of the international initiative. METHODS A workshop was organized in 2016, gathering the main European research groups with experience in developing and using ontologies in this domain. It led to the conclusion that a common ontology for surgical process models (SPM) was absolutely needed, and that the existing OntoSPM ontology could provide a good starting point toward the collaborative design and promotion of common, standard ontologies on SPM. RESULTS The workshop led to the OntoSPM Collaborative Action, launched in mid-2016, with the objective to develop, maintain and promote the use of common ontologies of SPM relevant to the whole domain of SDS. The fundamental concept, the architecture, and the management and curation of the common ontology have been established, making it ready for wider public use. CONCLUSION The OntoSPM Collaborative Action has been in operation for 24 months, with a growing dedicated membership. Its main result is a modular ontology, undergoing constant updates and extensions based on the experts' suggestions. It remains an open collaborative action which always welcomes new contributors and applications.
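For readers who want to inspect an OWL ontology such as OntoSPM programmatically, a sketch with the owlready2 library follows. The file path is a placeholder; substitute the published OntoSPM location or a local copy of the OWL file.

```python
from owlready2 import get_ontology

# Load an OWL ontology from a local file (placeholder path).
onto = get_ontology("file:///tmp/OntoSPM.owl").load()

# List some classes and, for each, its direct named superclasses.
for cls in list(onto.classes())[:10]:
    supers = [c.name for c in cls.is_a if hasattr(c, "name")]  # skip restrictions
    print(cls.name, "->", supers)
```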
Collapse
Affiliation(s)
| | | | - Carolin Feldmann
- Division of Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Paulo Gonçalves
- Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
| | - Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, Óbuda University, Budapest, Hungary
- Austrian Center for Medical Innovation and Technology (ACMIT), Wiener Neustadt, Austria
| | - Chantal Julliard
- Inserm, LTSI - UMR_S 1099, Univ Rennes, Rennes, France
- LIRMM, Université de Montpellier, Montpellier, France
- Stryker GmbH, Freiburg, Germany
| | - Darko Katić
- Karlsruhe Institute of Technology, Institute for Anthropomatics and Robotics, Karlsruhe, Germany
- ArtiMinds Robotics GmbH, Karlsruhe, Germany
| | - Hannes Kenngott
- Department of General, Abdominal and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
| | - Lena Maier-Hein
- Division of Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Keno März
- Division of Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Dénes Ákos Nagy
- Antal Bejczy Center for Intelligent Robotics, Óbuda University, Budapest, Hungary
- Austrian Center for Medical Innovation and Technology (ACMIT), Wiener Neustadt, Austria
| | | | - Juliane Neumann
- Innovation Center Computer Assisted Surgery, Leipzig University, Leipzig, Germany
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery, Leipzig University, Leipzig, Germany
| | | | | | - Martin Wagner
- Department of General, Abdominal and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
| | - Pierre Jannin
- Inserm, LTSI - UMR_S 1099, Univ Rennes, Rennes, France
| |
Collapse
|
29
|
Franke S, Rockstroh M, Hofer M, Neumuth T. The intelligent OR: design and validation of a context-aware surgical working environment. Int J Comput Assist Radiol Surg 2018; 13:1301-1308. [DOI: 10.1007/s11548-018-1791-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Accepted: 05/09/2018] [Indexed: 11/28/2022]
|
30
|
Speidel S, Bodenstedt S, Maier-Hein L, Kenngott H. [Cognitive surgery/Surgery 4.0]. COLOPROCTOLOGY 2018. [DOI: 10.1007/s00053-018-0236-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
31
|
Kenngott HG, Wagner M, Preukschas AA, Müller-Stich BP. [Intelligent operating room suite: From passive medical devices to the self-thinking cognitive surgical assistant]. Chirurg 2018; 87:1033-1038. [PMID: 27778059 DOI: 10.1007/s00104-016-0308-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Modern operating room (OR) suites are mostly digitally connected, but until now the primary focus has been on the presentation, transfer and distribution of images; device information and processes within the operating theater are barely considered. Cognitive assistance systems have triggered a fundamental rethinking in the automotive industry as well as in logistics. In principle, tasks in the OR, some of which are highly repetitive, also have great potential to be supported by automated cognitive assistance from a self-thinking system. This includes coordinating the entire workflow of the perioperative process, both in the operating theater and across the whole hospital. With the corresponding data from hospital information systems, medical devices and appropriate models of the surgical process, intelligent systems could optimize the workflow in the operating theater in the near future and support the surgeon. Preliminary results on the use of device information and automatically controlled OR suites are already available; such systems include, for example, the guidance of laparoscopic camera systems. Nevertheless, cognitive assistance systems that use knowledge about patients, processes and other information to improve surgical treatment are not yet available in the clinical routine, but they are urgently needed to automatically assist the surgeon in situation-related activities and thus substantially improve patient care.
Collapse
Affiliation(s)
- H G Kenngott
- Abteilung für Allgemein-, Viszeral- und Transplantationschirurgie, Klinikum der Universität Heidelberg, Chirurgische Universitätsklinik, Universität Heidelberg, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
| | - M Wagner
- Abteilung für Allgemein-, Viszeral- und Transplantationschirurgie, Klinikum der Universität Heidelberg, Chirurgische Universitätsklinik, Universität Heidelberg, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
| | - A A Preukschas
- Abteilung für Allgemein-, Viszeral- und Transplantationschirurgie, Klinikum der Universität Heidelberg, Chirurgische Universitätsklinik, Universität Heidelberg, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
| | - B P Müller-Stich
- Abteilung für Allgemein-, Viszeral- und Transplantationschirurgie, Klinikum der Universität Heidelberg, Chirurgische Universitätsklinik, Universität Heidelberg, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany.
| |
Collapse
|
32
|
Nakawala H, Ferrigno G, De Momi E. Development of an intelligent surgical training system for Thoracentesis. Artif Intell Med 2018; 84:50-63. [DOI: 10.1016/j.artmed.2017.10.004] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Revised: 06/19/2017] [Accepted: 10/31/2017] [Indexed: 11/24/2022]
|
33
|
Kenngott HG, Apitz M, Wagner M, Preukschas AA, Speidel S, Müller-Stich BP. Paradigm shift: cognitive surgery. Innov Surg Sci 2017; 2:139-143. [PMID: 31579745 PMCID: PMC6754016 DOI: 10.1515/iss-2017-0012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2017] [Accepted: 05/04/2017] [Indexed: 11/15/2022] Open
Abstract
In the last hundred years, surgery has experienced a dramatic increase in scientific knowledge and innovation. The need to consider the best available evidence and to apply technical innovations, such as minimally invasive approaches, challenges the surgeon both intellectually and manually. To overcome this challenge, computer scientists and surgeons within the interdisciplinary field of "cognitive surgery" explore and innovate new ways of data processing and management. This article gives a general overview of the topic and outlines selected pre-, intra- and postoperative applications. It explores the possibilities of new intelligent devices and software across the entire treatment process, culminating in the vision of an "Intelligent Hospital" or "Hospital 4.0", in which the borders between IT infrastructures, medical devices, medical personnel and patients are bridged by technology. The "Hospital 4.0" is thus an intelligent system that provides the right information at the right time and the right place to each stakeholder, thereby helping to decrease complications and improve clinical processes as well as patient outcome.
Collapse
Affiliation(s)
- Hannes G Kenngott
- Department of General, Visceral and Transplantation Surgery, University of Heidelberg, 69120 Heidelberg, Germany
| | - Martin Apitz
- Department of General, Visceral and Transplantation Surgery, University of Heidelberg, 69120 Heidelberg, Germany
| | - Martin Wagner
- Department of General, Visceral and Transplantation Surgery, University of Heidelberg, 69120 Heidelberg, Germany
| | - Anas A Preukschas
- Department of General, Visceral and Transplantation Surgery, University of Heidelberg, 69120 Heidelberg, Germany
| | - Stefanie Speidel
- Karlsruhe Institute of Technology, Humanoids and Intelligence Systems Lab, 76131 Karlsruhe, Germany
| | - Beat Peter Müller-Stich
- Department of General, Visceral and Transplant Surgery, University of Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg, Germany,
| |
Collapse
|
34
|
Zia A, Zhang C, Xiong X, Jarc AM. Temporal clustering of surgical activities in robot-assisted surgery. Int J Comput Assist Radiol Surg 2017; 12:1171-1178. [PMID: 28477279 PMCID: PMC5509863 DOI: 10.1007/s11548-017-1600-y] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 04/24/2017] [Indexed: 12/03/2022]
Abstract
Purpose Most evaluations of surgical workflow or surgeon skill use simple, descriptive statistics (e.g., time) across whole procedures, thereby deemphasizing critical steps and potentially obscuring critical inefficiencies or skill deficiencies. In this work, we examine off-line, temporal clustering methods that chunk training procedures into clinically relevant surgical tasks or steps during robot-assisted surgery. Methods We collected system kinematics and events data from nine surgeons performing five different surgical tasks on a porcine model using the da Vinci Si surgical system. The five tasks were treated as one 'pseudo-procedure.' We compared four different temporal clustering algorithms using multiple feature sets: hierarchical aligned cluster analysis (HACA), aligned cluster analysis (ACA), spectral clustering (SC), and Gaussian mixture model (GMM). Results HACA outperformed the other methods, reaching an average segmentation accuracy of 88.0% when using all system kinematics and events data as features. SC and ACA reached moderate performance with 84.1% and 82.9% average segmentation accuracy, respectively. GMM consistently performed the poorest. Conclusions Unsupervised temporal segmentation of surgical procedures into clinically relevant steps achieves good accuracy using just system data. Such methods will enable surgeons to receive directed feedback on individual surgical tasks rather than whole procedures in order to improve workflow, assessment, and training.
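As a rough illustration of the simplest baseline compared above, the following sketch segments a synthetic kinematics stream with a plain Gaussian mixture model (not the aligned-cluster-analysis variants). The data are synthetic: three tasks with different velocity statistics, and arbitrary channel counts and means.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
segments = [rng.normal(loc=m, scale=0.3, size=(200, 6))  # 6 kinematic channels
            for m in (0.0, 1.5, 3.0)]                    # one mean per task
X = np.vstack(segments)                                  # 600 frames in sequence

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)                                  # cluster label per frame

# Report where the predicted segment boundaries fall (true boundaries: 200, 400).
boundaries = np.flatnonzero(np.diff(labels)) + 1
print(boundaries)
```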
Collapse
Affiliation(s)
- Aneeq Zia
- College of Computing, Georgia Institute of Technology, North Ave NW, Atlanta, GA, 30332, USA
| | - Chi Zhang
- Electrical Engineering and Computer Science, University of Tennessee, 1520 Middle Dr, Knoxville, TN, 37996, USA
| | - Xiaobin Xiong
- Robotics, Georgia Institute of Technology, North Ave NW, Atlanta, GA, 30332, USA
| | - Anthony M Jarc
- Medical Research, Intuitive Surgical, Inc., 5655 Spalding Drive, Norcross, GA, 30092, USA.
| |
Collapse
|
35
|
Nakawala H, Ferrigno G, De Momi E. Toward a Knowledge-Driven Context-Aware System for Surgical Assistance. J Med Robot Res 2017. [DOI: 10.1142/s2424905x17400074] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Complications of complex surgeries are increasing, making efficient surgical assistance a real need. In this work, an ontology-based context-aware system was developed for surgical training/assistance during thoracentesis using image processing and semantic technologies. We evaluated the thoracentesis ontology and implemented a paradigmatic test scenario to check the efficacy of the system by recognizing contextual information, e.g. the presence of surgical instruments on the table. The framework was able to retrieve contextual information about the current surgical activity along with information on the need for, or presence of, a surgical instrument.
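A toy sketch of the context-recognition step: map the set of instruments detected on the table (e.g. by an image-processing pipeline) to candidate surgical activities, as an ontology-backed system might. The instrument and activity vocabulary below is invented for illustration and is not taken from the thoracentesis ontology itself.

```python
# Activities and the instruments each one requires (invented vocabulary).
ACTIVITY_REQUIRES = {
    "site_preparation": {"antiseptic_swab"},
    "anaesthesia": {"syringe", "needle"},
    "fluid_aspiration": {"catheter", "syringe"},
}

def recognise_activity(detected):
    """Return activities whose required instruments are all present on the table."""
    return [a for a, req in ACTIVITY_REQUIRES.items() if req <= detected]

print(recognise_activity({"syringe", "needle", "gauze"}))  # ['anaesthesia']
```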
Collapse
Affiliation(s)
- Hirenkumar Nakawala
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci, Milan 20133, Italy
| | - Giancarlo Ferrigno
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci, Milan 20133, Italy
| | - Elena De Momi
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci, Milan 20133, Italy
| |
Collapse
|
36
|
Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy. Int J Comput Assist Radiol Surg 2016; 11:881-8. [DOI: 10.1007/s11548-016-1379-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Accepted: 03/07/2016] [Indexed: 10/22/2022]
|