1. Kolbinger FR, Bodenstedt S, Carstens M, Leger S, Krell S, Rinner FM, Nielen TP, Kirchberg J, Fritzmann J, Weitz J, Distler M, Speidel S. Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: An exploratory feasibility study. Eur J Surg Oncol 2024; 50:106996. PMID: 37591704. DOI: 10.1016/j.ejso.2023.106996.
Abstract
INTRODUCTION Complex oncological procedures pose various surgical challenges including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and autonomous nerve damage resulting in incontinence and sexual dysfunction. This work explores the feasibility of phase recognition and target structure segmentation in robot-assisted rectal resection (RARR) using machine learning. MATERIALS AND METHODS A total of 57 RARR were recorded and subsets of these were annotated with respect to surgical phases and exact locations of target structures (anatomical structures, tissue types, static structures, and dissection areas). For surgical phase recognition, three machine learning models were trained: LSTM, MSTCN, and Trans-SVNet. Based on pixel-wise annotations of target structures in 9037 images, individual segmentation models based on DeepLabv3 were trained. Model performance was evaluated using F1 score, Intersection-over-Union (IoU), accuracy, precision, recall, and specificity. RESULTS The best results for phase recognition were achieved with the MSTCN model (F1 score: 0.82 ± 0.01, accuracy: 0.84 ± 0.03). Mean IoUs for target structure segmentation ranged from 0.14 ± 0.22 to 0.80 ± 0.14 for organs and tissue types and from 0.11 ± 0.11 to 0.44 ± 0.30 for dissection areas. Image quality, distorting factors (i.e. blood, smoke), and technical challenges (i.e. lack of depth perception) considerably impacted segmentation performance. CONCLUSION Machine learning-based phase recognition and segmentation of selected target structures are feasible in RARR. In the future, such functionalities could be integrated into a context-aware surgical guidance system for rectal surgery.
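The segmentation setup described above (DeepLabv3-based models evaluated per class with Intersection-over-Union) can be sketched roughly as follows. This is a generic illustration rather than the study's code: the number of target-structure classes, the image size, and the dummy tensors are assumptions.

```python
# Rough sketch of DeepLabv3-style target-structure segmentation with per-class IoU.
# NUM_CLASSES, the image size, and the dummy frame/mask are assumptions for illustration.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 8  # hypothetical number of annotated target structures
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES).eval()

def per_class_iou(pred, target, num_classes):
    """Intersection-over-Union per class for two integer label maps."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = (p | t).sum().item()
        ious.append(float("nan") if union == 0 else (p & t).sum().item() / union)
    return ious

frame = torch.rand(1, 3, 512, 512)                     # stand-in endoscopic image
with torch.no_grad():
    pred_mask = model(frame)["out"].argmax(dim=1)[0]   # predicted label map
gt_mask = torch.randint(0, NUM_CLASSES, (512, 512))    # stand-in annotation
print(per_class_iou(pred_mask, gt_mask, NUM_CLASSES))
```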
Affiliation(s)
- Fiona R Kolbinger
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany; Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany.
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Matthias Carstens
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- Stefan Leger
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Stefanie Krell
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Franziska M Rinner
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- Thomas P Nielen
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- Johanna Kirchberg
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
- Johannes Fritzmann
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
- Jürgen Weitz
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany; Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Marius Distler
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
- Stefanie Speidel
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany.
2. Peng H, Lin S, King D, Su YH, Abuzeid WM, Bly RA, Moe KS, Hannaford B. Reducing annotating load: Active learning with synthetic images in surgical instrument segmentation. Med Image Anal 2024; 97:103246. PMID: 38943835. DOI: 10.1016/j.media.2024.103246.
Abstract
Accurate instrument segmentation in the endoscopic vision of minimally invasive surgery is challenging due to complex instruments and environments. Deep learning techniques have shown competitive performance in recent years. However, deep learning usually requires a large amount of labeled data to achieve accurate prediction, which poses a significant workload. To alleviate this workload, we propose an active learning-based framework to generate synthetic images for efficient neural network training. In each active learning iteration, a small number of informative unlabeled images are first queried by active learning and manually labeled. Next, synthetic images are generated based on these selected images. The instruments and backgrounds are cropped out and randomly combined with blending and fusion near the boundary. The proposed method leverages the advantage of both active learning and synthetic images. The effectiveness of the proposed method is validated on two sinus surgery datasets and one intraabdominal surgery dataset. The results indicate a considerable performance improvement, especially when the size of the annotated dataset is small. All the code is open-sourced at: https://github.com/HaonanPeng/active_syn_generator.
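The synthesis step summarized above, cropping instruments out of queried frames and recombining them with backgrounds while blending near the boundary, is illustrated by the sketch below. It is a generic alpha-blending composite with assumed file names and a binary instrument mask, not the released pipeline (see the linked repository for that).

```python
# Rough sketch of instrument/background compositing with soft blending near the
# mask boundary; file names, mask format, and shift range are assumptions.
import cv2
import numpy as np

background = cv2.imread("background_frame.png").astype(np.float32)   # H x W x 3
instrument = cv2.imread("instrument_frame.png").astype(np.float32)   # H x W x 3
mask = cv2.imread("instrument_mask.png", cv2.IMREAD_GRAYSCALE)       # 0/255 instrument mask

h, w = background.shape[:2]
alpha = cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (21, 21), 0)  # soften the boundary

# Paste the instrument at a random offset before blending it into the background.
dx, dy = np.random.randint(-40, 41, size=2)
shift = np.float32([[1, 0, dx], [0, 1, dy]])
instrument = cv2.warpAffine(instrument, shift, (w, h))
alpha = cv2.warpAffine(alpha, shift, (w, h))[..., None]

synthetic = alpha * instrument + (1.0 - alpha) * background
cv2.imwrite("synthetic_frame.png", synthetic.astype(np.uint8))
```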
Affiliation(s)
- Haonan Peng
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA.
- Shan Lin
- University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Daniel King
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Yun-Hsuan Su
- Mount Holyoke College, 50 College St, South Hadley, MA 01075, USA
- Waleed M Abuzeid
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Randall A Bly
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Kris S Moe
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Blake Hannaford
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
3. Özsoy E, Czempiel T, Örnek EP, Eck U, Tombari F, Navab N. Holistic OR domain modeling: a semantic scene graph approach. Int J Comput Assist Radiol Surg 2024; 19:791-799. PMID: 37823976. PMCID: PMC11098880. DOI: 10.1007/s11548-023-03022-w.
Abstract
PURPOSE Surgical procedures take place in highly complex operating rooms (OR), involving medical staff, patients, devices and their interactions. Until now, only medical professionals have been capable of comprehending these intricate links and interactions. This work advances the field toward automated, comprehensive and semantic understanding and modeling of the OR domain by introducing semantic scene graphs (SSG) as a novel approach to describing and summarizing surgical environments in a structured and semantically rich manner. METHODS We create the first open-source 4D SSG dataset. 4D-OR includes simulated total knee replacement surgeries captured by RGB-D sensors in a realistic OR simulation center. It includes annotations for SSGs, human and object pose, clinical roles and surgical phase labels. We introduce a neural network-based SSG generation pipeline for semantic reasoning in the OR and apply our approach to two downstream tasks: clinical role prediction and surgical phase recognition. RESULTS We show that our pipeline can successfully reason within the OR domain. The capabilities of our scene graphs are further highlighted by their successful application to clinical role prediction and surgical phase recognition tasks. CONCLUSION This work paves the way for multimodal holistic operating room modeling, with the potential to significantly enhance the state of the art in surgical data analysis, such as enabling more efficient and precise decision-making during surgical procedures, and ultimately improving patient safety and surgical outcomes. We release our code and dataset at github.com/egeozsoy/4D-OR.
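To make the scene-graph representation above concrete, the following is a hypothetical minimal encoding of one OR time point as typed nodes plus (subject, predicate, object) triplets. The entity and relation names are illustrative only and do not reproduce the 4D-OR annotation schema.

```python
# Hypothetical minimal scene-graph structure for one time point in the OR;
# node categories and predicates are illustrative, not the 4D-OR label set.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    category: str                                   # e.g. "head_surgeon", "patient"

@dataclass
class SceneGraph:
    timestamp: float
    nodes: list = field(default_factory=list)
    relations: list = field(default_factory=list)   # (subject_id, predicate, object_id)

    def add_relation(self, subj, predicate, obj):
        self.relations.append((subj.node_id, predicate, obj.node_id))

surgeon, patient, table = Node(0, "head_surgeon"), Node(1, "patient"), Node(2, "operating_table")
graph = SceneGraph(timestamp=12.4, nodes=[surgeon, patient, table])
graph.add_relation(surgeon, "drilling", patient)
graph.add_relation(patient, "lying_on", table)
print(graph.relations)                              # [(0, 'drilling', 1), (1, 'lying_on', 2)]
```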
Affiliation(s)
- Ege Özsoy
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany.
- Tobias Czempiel
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Evin Pınar Örnek
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Ulrich Eck
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Federico Tombari
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Google, Zurich, Switzerland
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
4. Rivoir D, Funke I, Speidel S. On the pitfalls of Batch Normalization for end-to-end video learning: A study on surgical workflow analysis. Med Image Anal 2024; 94:103126. PMID: 38452578. DOI: 10.1016/j.media.2024.103126.
Abstract
Batch Normalization's (BN) unique property of depending on other samples in a batch is known to cause problems in several tasks, including sequence modeling. Yet, BN-related issues are hardly studied for long video understanding, despite the ubiquitous use of BN in CNNs (Convolutional Neural Networks) for feature extraction. Especially in surgical workflow analysis, where the lack of pretrained feature extractors has led to complex, multi-stage training pipelines, limited awareness of BN issues may have hidden the benefits of training CNNs and temporal models end to end. In this paper, we analyze pitfalls of BN in video learning, including issues specific to online tasks such as a 'cheating' effect in anticipation. We observe that BN's properties create major obstacles for end-to-end learning. However, using BN-free backbones, even simple CNN-LSTMs beat the state of the art on three surgical workflow benchmarks by utilizing adequate end-to-end training strategies which maximize temporal context. We conclude that awareness of BN's pitfalls is crucial for effective end-to-end learning in surgical tasks. By reproducing results on natural-video datasets, we hope our insights will benefit other areas of video learning as well. Code is available at: https://gitlab.com/nct_tso_public/pitfalls_bn.
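The practical takeaway above, replacing BatchNorm in the frame encoder so that even a simple CNN-LSTM can be trained end to end over long temporal context, can be sketched as follows. The snippet swaps BatchNorm for GroupNorm in a torchvision ResNet-18; the phase count, hidden size, and clip shape are assumptions, and the authors' actual code lives at the linked GitLab repository.

```python
# Rough sketch: BN-free frame encoder (GroupNorm ResNet-18) feeding an LSTM,
# trainable end to end on video clips; sizes and class count are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnLstm(nn.Module):
    def __init__(self, num_phases=7, hidden=256):
        super().__init__()
        # GroupNorm removes the cross-sample batch statistics behind the BN pitfalls.
        backbone = resnet18(weights=None, norm_layer=lambda ch: nn.GroupNorm(8, ch))
        backbone.fc = nn.Identity()                      # 512-d per-frame features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, clip):                             # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                            # per-frame phase logits

logits = CnnLstm()(torch.rand(2, 16, 3, 224, 224))
print(logits.shape)                                      # torch.Size([2, 16, 7])
```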
Affiliation(s)
- Dominik Rivoir
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), Fetscherstraße 74, 01307 Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany.
- Isabel Funke
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), Fetscherstraße 74, 01307 Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), Fetscherstraße 74, 01307 Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
5. Liu P, Steuer S, Golde J, Morgenstern J, Hu Y, Schieffer C, Ossmann S, Kirsten L, Bodenstedt S, Pfeiffer M, Speidel S, Koch E, Neudert M. The Dresden in vivo OCT dataset for automatic middle ear segmentation. Sci Data 2024; 11:242. PMID: 38409278. DOI: 10.1038/s41597-024-03000-0.
Abstract
Endoscopic optical coherence tomography (OCT) offers a non-invasive approach to perform the morphological and functional assessment of the middle ear in vivo. However, interpreting such OCT images is challenging and time-consuming due to the shadowing of preceding structures. Deep neural networks have emerged as a promising tool to enhance this process in multiple aspects, including segmentation, classification, and registration. Nevertheless, the scarcity of annotated datasets of OCT middle ear images poses a significant hurdle to the performance of neural networks. We introduce the Dresden in vivo OCT Dataset of the Middle Ear (DIOME) featuring 43 OCT volumes from both healthy and pathological middle ears of 29 subjects. DIOME provides semantic segmentations of five crucial anatomical structures (tympanic membrane, malleus, incus, stapes and promontory), and sparse landmarks delineating the salient features of the structures. The availability of these data facilitates the training and evaluation of algorithms regarding various analysis tasks with middle ear OCT images, e.g. diagnostics.
Affiliation(s)
- Peng Liu
- Department of Otorhinolaryngology Head and Neck Surgery, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Faculty of Medicine, 01307, Dresden, Germany.
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), German Cancer Research Center (DKFZ), Helmholtz-Zentrum Dresden-Rossendorf (HZDR), 01307, Dresden, Germany.
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany.
- Svea Steuer
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany
- Clinical Sensoring and Monitoring, TUD Dresden University of Technology, 01307, Dresden, Germany
- Jonas Golde
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany
- Clinical Sensoring and Monitoring, TUD Dresden University of Technology, 01307, Dresden, Germany
- Medical Physics and Biomedical Engineering, TUD Dresden University of Technology, 01307, Dresden, Germany
- Fraunhofer Institute for Material and Beam Technology IWS, 01277, Dresden, Germany
- Joseph Morgenstern
- Department of Otorhinolaryngology Head and Neck Surgery, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Faculty of Medicine, 01307, Dresden, Germany
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany
- Ear Research Center Dresden, TUD Dresden University of Technology, 01307, Dresden, Germany
- Yujia Hu
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), German Cancer Research Center (DKFZ), Helmholtz-Zentrum Dresden-Rossendorf (HZDR), 01307, Dresden, Germany
- Catherina Schieffer
- Ear Research Center Dresden, TUD Dresden University of Technology, 01307, Dresden, Germany
- Steffen Ossmann
- Ear Research Center Dresden, TUD Dresden University of Technology, 01307, Dresden, Germany
- Lars Kirsten
- Clinical Sensoring and Monitoring, TUD Dresden University of Technology, 01307, Dresden, Germany
- Medical Physics and Biomedical Engineering, TUD Dresden University of Technology, 01307, Dresden, Germany
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), German Cancer Research Center (DKFZ), Helmholtz-Zentrum Dresden-Rossendorf (HZDR), 01307, Dresden, Germany
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany
- Micha Pfeiffer
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), German Cancer Research Center (DKFZ), Helmholtz-Zentrum Dresden-Rossendorf (HZDR), 01307, Dresden, Germany
- Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC Dresden), German Cancer Research Center (DKFZ), Helmholtz-Zentrum Dresden-Rossendorf (HZDR), 01307, Dresden, Germany
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany
- Edmund Koch
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany
- Clinical Sensoring and Monitoring, TUD Dresden University of Technology, 01307, Dresden, Germany
- Marcus Neudert
- Department of Otorhinolaryngology Head and Neck Surgery, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Faculty of Medicine, 01307, Dresden, Germany.
- Else Kröner Fresenius Center, TUD Dresden University of Technology, 01307, Dresden, Germany.
- Ear Research Center Dresden, TUD Dresden University of Technology, 01307, Dresden, Germany.
6. Abid R, Hussein AA, Guru KA. Artificial Intelligence in Urology: Current Status and Future Perspectives. Urol Clin North Am 2024; 51:117-130. PMID: 37945097. DOI: 10.1016/j.ucl.2023.06.005.
Abstract
Surgical fields, especially urology, have shifted increasingly toward the use of artificial intelligence (AI). Advancements in AI have created massive improvements in diagnostics, outcome predictions, and robotic surgery. For robotic surgery to progress from assisting surgeons to eventually reaching autonomous procedures, there must be advancements in machine learning, natural language processing, and computer vision. Moreover, barriers such as data availability, interpretability of autonomous decision-making, Internet connection and security, and ethical concerns must be overcome.
Affiliation(s)
- Rayyan Abid
- Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
- Ahmed A Hussein
- Department of Urology, Roswell Park Comprehensive Cancer Center
- Khurshid A Guru
- Department of Urology, Roswell Park Comprehensive Cancer Center.
7. Demir KC, Schieber H, Weise T, Roth D, May M, Maier A, Yang SH. Deep Learning in Surgical Workflow Analysis: A Review of Phase and Step Recognition. IEEE J Biomed Health Inform 2023; 27:5405-5417. PMID: 37665700. DOI: 10.1109/jbhi.2023.3311628.
Abstract
OBJECTIVE In the last two decades, there has been a growing interest in exploring surgical procedures with statistical models to analyze operations at different semantic levels. This information is necessary for developing context-aware intelligent systems, which can assist the physicians during operations, evaluate procedures afterward or help the management team to effectively utilize the operating room. The objective is to extract reliable patterns from surgical data for the robust estimation of surgical activities performed during operations. The purpose of this article is to review the state-of-the-art deep learning methods that have been published after 2018 for analyzing surgical workflows, with a focus on phase and step recognition. METHODS Three databases, IEEE Xplore, Scopus, and PubMed, were searched, and additional studies were added through a manual search. After the database search, 343 studies were screened and a total of 44 studies were selected for this review. CONCLUSION The use of temporal information is essential for identifying the next surgical action. Contemporary methods used mainly RNNs, hierarchical CNNs, and Transformers to preserve long-distance temporal relations. The lack of large publicly available datasets for various procedures is a great challenge for the development of new and robust models. While supervised learning strategies are used to show proof-of-concept, self-supervised, semi-supervised, or active learning methods are used to mitigate dependency on annotated data. SIGNIFICANCE The present study provides a comprehensive review of recent methods in surgical workflow analysis, summarizes commonly used architectures and datasets, and discusses challenges.
8. Brandenburg JM, Jenke AC, Stern A, Daum MTJ, Schulze A, Younis R, Petrynowski P, Davitashvili T, Vanat V, Bhasker N, Schneider S, Mündermann L, Reinke A, Kolbinger FR, Jörns V, Fritz-Kebede F, Dugas M, Maier-Hein L, Klotz R, Distler M, Weitz J, Müller-Stich BP, Speidel S, Bodenstedt S, Wagner M. Active learning for extracting surgomic features in robot-assisted minimally invasive esophagectomy: a prospective annotation study. Surg Endosc 2023; 37:8577-8593. PMID: 37833509. PMCID: PMC10615926. DOI: 10.1007/s00464-023-10447-6.
Abstract
BACKGROUND With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine-learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial, but still a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS To establish a process for development of surgomic features, ten video-based features related to bleeding, as a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset, consisting of 22 videos from two centers. RESULTS In total, 14,004 frames were tag annotated. A mean F1-score of 0.75 ± 0.16 was achieved for all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater-agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments with a significant difference in the McNemar test comparing correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION We presented ten surgomic features relevant for bleeding events in esophageal surgery automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
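As a rough illustration of the comparison above, uncertainty-driven active learning versus equidistant sampling (EQS) for choosing which frames to annotate, the sketch below scores a pool of frame features by Monte-Carlo-dropout predictive entropy and selects the most uncertain ones. The pool size, the tiny classifier, and the annotation budget are assumptions, not the published pipeline.

```python
# Rough sketch: uncertainty-based frame selection (MC dropout) vs. equidistant
# sampling; the feature pool, head, and budget are assumed for illustration.
import torch
import torch.nn as nn

pool = torch.rand(500, 512)                               # stand-in per-frame features
head = nn.Sequential(nn.Dropout(0.5), nn.Linear(512, 2))  # tiny frame-level tag classifier

def mc_dropout_entropy(model, x, passes=20):
    model.train()                                          # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(passes)]).mean(0)
    return -(probs * probs.clamp_min(1e-8).log()).sum(-1)  # predictive entropy per frame

budget = 50
al_selection = mc_dropout_entropy(head, pool).topk(budget).indices          # most uncertain frames
eqs_selection = torch.arange(0, pool.shape[0], pool.shape[0] // budget)[:budget]
print(al_selection[:5], eqs_selection[:5])
```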
Affiliation(s)
- Johanna M Brandenburg
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Alexander C Jenke
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Antonia Stern
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Marie T J Daum
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- André Schulze
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Rayan Younis
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Philipp Petrynowski
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Tornike Davitashvili
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Vincent Vanat
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Nithya Bhasker
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Sophia Schneider
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Lars Mündermann
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Annika Reinke
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fiona R Kolbinger
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Else Kröner-Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Vanessa Jörns
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Fleur Fritz-Kebede
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Martin Dugas
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Lena Maier-Hein
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Rosa Klotz
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- The Study Center of the German Surgical Society (SDGC), Heidelberg University Hospital, Heidelberg, Germany
- Marius Distler
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Jürgen Weitz
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- Beat P Müller-Stich
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- University Center for Gastrointestinal and Liver Diseases, St. Clara Hospital and University Hospital Basel, Basel, Switzerland
- Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), Heidelberg, Germany.
- German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany.
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany.
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany.
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany.
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany.
9. Boehringer AS, Sanaat A, Arabi H, Zaidi H. An active learning approach to train a deep learning algorithm for tumor segmentation from brain MR images. Insights Imaging 2023; 14:141. PMID: 37620554. PMCID: PMC10449747. DOI: 10.1186/s13244-023-01487-6.
Abstract
PURPOSE This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. METHODS The publicly available training dataset provided for the 2021 RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric MR images. Post-contrast T1, T2, and T2 FLAIR images as well as ground truth manual segmentation were used as input for the model. The data were split into a training set of 1151 cases and a testing set of 100 cases, with the testing set remaining constant throughout. Deep convolutional neural network segmentation models were trained using the NiftyNet platform. To test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases followed by two additional models using only 575 cases and 100 cases. The resulting predicted segmentations of these two additional models on the remaining training cases were then appended to the training dataset for additional training. RESULTS It was demonstrated that an active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas (0.906 reference Dice score vs 0.868 active learning Dice score) while only requiring manual annotation for 28.6% of the data. CONCLUSION The active learning approach when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data. CRITICAL RELEVANCE STATEMENT Active learning concepts were applied to a deep learning-assisted segmentation of brain gliomas from MR images to assess their viability in reducing the required amount of manually annotated ground truth data in model training. KEY POINTS • This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. • The active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas. • Active learning when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.
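The training strategy described above — fit a model on a small annotated subset, predict segmentations for the remaining cases, and fold those predictions back into the training pool — can be outlined as below. The callables and dataset splits are placeholders for whatever segmentation framework is used; they are not NiftyNet API calls.

```python
# Outline of the iterative strategy described in the abstract; train_model and
# predict are placeholder callables, not NiftyNet functions.
def active_learning_round(labeled_cases, unlabeled_cases, train_model, predict):
    model = train_model(labeled_cases)                         # e.g. 100 manually annotated cases
    pseudo_labeled = [(case, predict(model, case)) for case in unlabeled_cases]
    return train_model(list(labeled_cases) + pseudo_labeled)   # retrain on the enlarged set

# Usage sketch (fit_unet / segment would be supplied by the surrounding framework):
# final_model = active_learning_round(train_cases[:100], train_cases[100:],
#                                     train_model=fit_unet, predict=segment)
```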
Affiliation(s)
- Andrew S Boehringer
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland.
- Geneva University Neurocenter, University of Geneva, CH-1211, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
10. Nyangoh Timoh K, Huaulme A, Cleary K, Zaheer MA, Lavoué V, Donoho D, Jannin P. A systematic review of annotation for surgical process model analysis in minimally invasive surgery based on video. Surg Endosc 2023. PMID: 37157035. DOI: 10.1007/s00464-023-10041-w.
Abstract
BACKGROUND Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language used in the field of surgical data science. The aim of this study is to review the process of annotation and semantics used in the creation of surgical process models (SPM) for minimally invasive surgery videos. METHODS For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing on instrument detection or recognition of anatomical areas only. The risk of bias was evaluated with the Newcastle Ottawa Quality assessment tool. Data from the studies were visually presented in a table using the SPIDER tool. RESULTS Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery only, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information in the datasets was lacking for studies using available public datasets. The process of annotation for surgical process models was lacking and poorly described, and description of the surgical procedures was highly variable between studies. CONCLUSION Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.
Affiliation(s)
- Krystel Nyangoh Timoh
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France.
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France.
- Laboratoire d'Anatomie et d'Organogenèse, Faculté de Médecine, Centre Hospitalier Universitaire de Rennes, 2 Avenue du Professeur Léon Bernard, 35043, Rennes Cedex, France.
- Department of Obstetrics and Gynecology, Rennes Hospital, Rennes, France.
- Arnaud Huaulme
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, 20010, USA
- Myra A Zaheer
- George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Vincent Lavoué
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France
- Dan Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, DC, 20010, USA
- Pierre Jannin
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
11. Schulze A, Tran D, Daum MTJ, Kisilenko A, Maier-Hein L, Speidel S, Distler M, Weitz J, Müller-Stich BP, Bodenstedt S, Wagner M. Ensuring privacy protection in the era of big laparoscopic video data: development and validation of an inside outside discrimination algorithm (IODA). Surg Endosc 2023. PMID: 37145173. PMCID: PMC10338566. DOI: 10.1007/s00464-023-10078-x.
Abstract
BACKGROUND Laparoscopic videos are increasingly being used for surgical artificial intelligence (AI) and big data analysis. The purpose of this study was to ensure data privacy in video recordings of laparoscopic surgery by censoring extraabdominal parts. An inside-outside-discrimination algorithm (IODA) was developed to ensure privacy protection while maximizing the remaining video data. METHODS IODA's neural network architecture was based on a pretrained AlexNet augmented with a long-short-term-memory. The data set for algorithm training and testing contained a total of 100 laparoscopic surgery videos of 23 different operations with a total video length of 207 h (124 min ± 100 min per video) resulting in 18,507,217 frames (185,965 ± 149,718 frames per video). Each video frame was tagged either as abdominal cavity, trocar, operation site, outside for cleaning, or translucent trocar. For algorithm testing, a stratified fivefold cross-validation was used. RESULTS The distribution of annotated classes was abdominal cavity 81.39%, trocar 1.39%, outside operation site 16.07%, outside for cleaning 1.08%, and translucent trocar 0.07%. Algorithm training on binary or all five classes showed similar excellent results for classifying outside frames with a mean F1-score of 0.96 ± 0.01 and 0.97 ± 0.01, sensitivity of 0.97 ± 0.02 and 0.97 ± 0.01, and a false positive rate of 0.99 ± 0.01 and 0.99 ± 0.01, respectively. CONCLUSION IODA is able to discriminate between inside and outside with a high certainty. In particular, only a few outside frames are misclassified as inside and therefore at risk for privacy breach. The anonymized videos can be used for multi-centric development of surgical AI, quality management or educational purposes. In contrast to expensive commercial solutions, IODA is made open source and can be improved by the scientific community.
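Testing above relies on stratified fivefold cross-validation over the recorded videos, so that all frames of one video stay in the same fold. A minimal sketch of such a split with scikit-learn is shown below; the video count and the per-video stratification label are placeholders.

```python
# Minimal sketch of a stratified fivefold split at the video level; the labels
# used for stratification are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold

video_ids = np.arange(100)                        # 100 recorded operations
strata = np.random.randint(0, 2, size=100)        # e.g. a per-video class indicator

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(video_ids, strata)):
    print(f"fold {fold}: {len(train_idx)} training videos, {len(test_idx)} test videos")
```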
Affiliation(s)
- A Schulze
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases, Heidelberg, Germany
- D Tran
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases, Heidelberg, Germany
- M T J Daum
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases, Heidelberg, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases, Heidelberg, Germany
- L Maier-Hein
- Division of Intelligent Medical Systems, German Cancer Research Center (Dkfz), Heidelberg, Germany
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Center for the Tactile Internet With Human in the Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- J Weitz
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- B P Müller-Stich
- Clarunis, University Center for Gastrointestinal and Liver Disease, Basel, Switzerland
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Center for the Tactile Internet With Human in the Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany.
- National Center for Tumor Diseases, Heidelberg, Germany.
- Center for the Tactile Internet With Human in the Loop (CeTI), Technische Universität Dresden, Dresden, Germany.
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany.
12. Jalal NA, Abdulbaki Alshirbaji T, Laufer B, Docherty PD, Neumuth T, Moeller K. Analysing multi-perspective patient-related data during laparoscopic gynaecology procedures. Sci Rep 2023; 13:1604. PMID: 36709360. PMCID: PMC9884204. DOI: 10.1038/s41598-023-28652-7.
Abstract
Fusing data from different medical perspectives inside the operating room (OR) sets the stage for developing intelligent context-aware systems. These systems aim to promote better awareness inside the OR by keeping every medical team well informed about the work of other teams and thus mitigate conflicts resulting from different targets. In this research, a descriptive analysis of data collected from anaesthesiology and surgery was performed to investigate the relationships between the intra-abdominal pressure (IAP) and lung mechanics for patients during laparoscopic procedures. Data of nineteen patients who underwent laparoscopic gynaecology were included. Statistical analysis of all subjects showed a strong relationship between the IAP and dynamic lung compliance (r = 0.91). Additionally, the peak airway pressure was also strongly correlated to the IAP in volume-controlled ventilated patients (r = 0.928). Statistical results obtained by this study demonstrate the importance of analysing the relationship between surgical actions and physiological responses. Moreover, these results form the basis for developing medical decision support models, e.g., automatic compensation of IAP effects on lung function.
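The reported associations (e.g. r = 0.928 between IAP and peak airway pressure under volume-controlled ventilation) are Pearson correlation coefficients; the short sketch below shows how such a coefficient is computed on paired measurements. The numbers are made-up stand-ins, not the study data.

```python
# Pearson correlation between intra-abdominal pressure and peak airway pressure;
# the values are illustrative stand-ins, not measurements from the study.
import numpy as np
from scipy.stats import pearsonr

iap_mmhg = np.array([8.0, 10.0, 12.0, 12.5, 14.0, 15.0])
peak_airway_cmh2o = np.array([16.0, 18.0, 21.0, 21.5, 24.0, 26.0])

r, p_value = pearsonr(iap_mmhg, peak_airway_cmh2o)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```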
Affiliation(s)
- Nour Aldeen Jalal
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany.
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany.
- Tamer Abdulbaki Alshirbaji
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
- Bernhard Laufer
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
- Paul D Docherty
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
- Department of Mechanical Engineering, University of Canterbury, Christchurch, 8041, New Zealand
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
- Knut Moeller
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
13. Wagner M, Brandenburg JM, Bodenstedt S, Schulze A, Jenke AC, Stern A, Daum MTJ, Mündermann L, Kolbinger FR, Bhasker N, Schneider G, Krause-Jüttler G, Alwanni H, Fritz-Kebede F, Burgert O, Wilhelm D, Fallert J, Nickel F, Maier-Hein L, Dugas M, Distler M, Weitz J, Müller-Stich BP, Speidel S. Surgomics: personalized prediction of morbidity, mortality and long-term outcome in surgery using machine learning on multimodal data. Surg Endosc 2022; 36:8568-8591. PMID: 36171451. PMCID: PMC9613751. DOI: 10.1007/s00464-022-09611-1.
Abstract
BACKGROUND Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integration of intraoperative surgical data and their analysis with machine learning methods to leverage the potential of this data in analogy to Radiomics and Genomics. METHODS We defined Surgomics as the entirety of surgomic features that are process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources like endoscopic videos, vital sign monitoring, medical devices and instruments and respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features' clinical relevance and technical feasibility. RESULTS In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance" for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Affiliation(s)
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), Heidelberg, Germany.
- Johanna M Brandenburg
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- André Schulze
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Alexander C Jenke
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Antonia Stern
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Marie T J Daum
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Lars Mündermann
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Fiona R Kolbinger
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Else Kröner Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
- Nithya Bhasker
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Gerd Schneider
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Grit Krause-Jüttler
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Hisham Alwanni
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Fleur Fritz-Kebede
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Reutlingen, Germany
- Dirk Wilhelm
- Department of Surgery, Faculty of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Johannes Fallert
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Felix Nickel
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Lena Maier-Hein
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Dugas
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Marius Distler
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Jürgen Weitz
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
| | - Beat-Peter Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
| |
Collapse
|
14
|
Senk S, Ulbricht M, Tsokalo I, Rischke J, Li SC, Speidel S, Nguyen GT, Seeling P, Fitzek FHP. Healing Hands: The Tactile Internet in Future Tele-Healthcare. SENSORS 2022; 22:s22041404. [PMID: 35214306 PMCID: PMC8963047 DOI: 10.3390/s22041404] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 01/31/2022] [Accepted: 02/01/2022] [Indexed: 02/01/2023]
Abstract
In the early 2020s, the coronavirus pandemic brought the notion of remotely connected care to the general population across the globe. Oftentimes, the timely provisioning of access to and the implementation of affordable care are drivers behind tele-healthcare initiatives. Tele-healthcare has already garnered significant momentum in research and implementations in the years preceding the worldwide challenge of 2020, supported by the emerging capabilities of communication networks. The Tactile Internet (TI) with human-in-the-loop is one of those developments, leading to the democratization of skills and expertise that will significantly impact the long-term developments of the provisioning of care. However, significant challenges remain that require today’s communication networks to adapt to support the ultra-low latency required. The resulting latency challenge necessitates trans-disciplinary research efforts combining psychophysiological as well as technological solutions to achieve one millisecond and below round-trip times. The objective of this paper is to provide an overview of the benefits enabled by solving this network latency reduction challenge by employing state-of-the-art Time-Sensitive Networking (TSN) devices in a testbed, realizing the service differentiation required for the multi-modal human-machine interface. With completely new types of services and use cases resulting from the TI, we describe the potential impacts on remote surgery and remote rehabilitation as examples, with a focus on the future of tele-healthcare in rural settings.
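As a purely illustrative aside on the latency challenge discussed above, the following Python sketch probes the round-trip time of a UDP echo path and compares the median against a one-millisecond target. The echo host, port, and budget are assumptions for illustration only; the paper's Time-Sensitive Networking testbed is not reproduced here.

```python
# Hedged sketch: probe round-trip latency of a UDP echo path and compare the
# median against a Tactile-Internet-style 1 ms budget. Host, port, and budget
# are illustrative assumptions, not the paper's TSN testbed.
import socket
import statistics
import time

ECHO_HOST, ECHO_PORT = "192.0.2.10", 9000   # hypothetical echo server
BUDGET_MS = 1.0                              # assumed round-trip budget


def measure_rtt_ms(n_probes: int = 100) -> list:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    rtts = []
    for i in range(n_probes):
        t0 = time.perf_counter()
        sock.sendto(i.to_bytes(4, "big"), (ECHO_HOST, ECHO_PORT))
        try:
            sock.recvfrom(64)                # wait for the echoed packet
        except socket.timeout:
            continue                         # lost probes are simply skipped
        rtts.append((time.perf_counter() - t0) * 1000.0)
    sock.close()
    return rtts


if __name__ == "__main__":
    samples = measure_rtt_ms()
    if samples:
        median = statistics.median(samples)
        print(f"median RTT: {median:.3f} ms, within budget: {median <= BUDGET_MS}")
    else:
        print("no echoes received; check the echo server address")
```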
Collapse
Affiliation(s)
- Stefan Senk
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Deutsche Telekom Chair of Communication Network, 01062 Dresden, Germany; (S.S.); (M.U.); (J.R.); (F.H.P.F.)
| | - Marian Ulbricht
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Deutsche Telekom Chair of Communication Network, 01062 Dresden, Germany; (S.S.); (M.U.); (J.R.); (F.H.P.F.)
| | | | - Justus Rischke
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Deutsche Telekom Chair of Communication Network, 01062 Dresden, Germany; (S.S.); (M.U.); (J.R.); (F.H.P.F.)
| | - Shu-Chen Li
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Faculty of Psychology, Technische Universität Dresden, 01062 Dresden, Germany;
| | - Stefanie Speidel
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), National Center for Tumor Diseases, Technische Universität Dresden, 01062 Dresden, Germany;
| | - Giang T. Nguyen
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Chair of Haptic Communication Systems, 01062 Dresden, Germany;
| | - Patrick Seeling
- Department of Computer Science, Central Michigan University, Mount Pleasant, MI 48859, USA
- Correspondence:
| | - Frank H. P. Fitzek
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Deutsche Telekom Chair of Communication Network, 01062 Dresden, Germany; (S.S.); (M.U.); (J.R.); (F.H.P.F.)
| |
Collapse
|
15
|
Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287 PMCID: PMC9135051 DOI: 10.1016/j.media.2021.102306] [Citation(s) in RCA: 101] [Impact Index Per Article: 33.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Revised: 11/03/2021] [Accepted: 11/08/2021] [Indexed: 02/06/2023]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Collapse
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany.
| | - Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Duygu Sarikaya
- Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
| | - Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Anand Malpani
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
| | | | - Hubertus Feussner
- Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
| | - Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
| | | | - Adrian Park
- Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| | - Carla Pugh
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
| | - Swaroop S Vedula
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Kevin Cleary
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
| | | | - Germain Forestier
- L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
| | - Bernard Gibaud
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
| | - Teodor Grantcharov
- University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
| | - Makoto Hashizume
- Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
| | - Doreen Heckmann-Nötzel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
| | | | - Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Tobias Roß
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
| | - Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
| | - Russell H Taylor
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Minu D Tizabi
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
| | - Justin Collins
- Division of Surgery and Interventional Science, University College London, London, United Kingdom
| | - Ines Gockel
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
| | - Jan Goedeke
- Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
| | - Daniel A Hashimoto
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
| | - Luc Joyeux
- My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
| | - Kyle Lam
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Daniel R Leff
- Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
| | - Amin Madani
- Department of Surgery, University Health Network, Toronto, Ontario, Canada
| | - Hani J Marcus
- National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
| | - Ozanan Meireles
- Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
| | - Alexander Seitel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Dogu Teber
- Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
| | - Frank Ückert
- Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
| | - Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
| | - Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| |
Collapse
|
16
|
Harnessing Artificial Intelligence in Maxillofacial Surgery. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_322] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
17
|
Cai L, Wang L, Fu X, Zeng X. Active Semisupervised Model for Improving the Identification of Anticancer Peptides. ACS OMEGA 2021; 6:23998-24008. [PMID: 34568678 PMCID: PMC8459422 DOI: 10.1021/acsomega.1c03132] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Indexed: 06/13/2023]
Abstract
Cancer is one of the most dangerous threats to human health. Accurate identification of anticancer peptides (ACPs) is valuable for the development and design of new anticancer agents. However, most machine-learning algorithms have limited ability to identify ACPs, and their accuracy is sensitive to the amount of label data. In this paper, we construct a new technology that combines active learning (AL) and label propagation (LP) algorithm to solve this problem, called (ACP-ALPM). First, we develop an efficient feature representation method based on various descriptor information and coding information of the peptide sequence. Then, an AL strategy is used to filter out the most informative data for model training, and a more powerful LP classifier is cast through continuous iterations. Finally, we evaluate the performance of ACP-ALPM and compare it with that of some of the state-of-the-art and classic methods; experimental results show that our method is significantly superior to them. In addition, through the experimental comparison of random selection and AL on three public data sets, it is proved that the AL strategy is more effective. Notably, a visualization experiment further verified that AL can utilize unlabeled data to improve the performance of the model. We hope that our method can be extended to other types of peptides and provide more inspiration for other similar work.
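The combination of an active-learning query step with label propagation described above can be illustrated with a minimal scikit-learn sketch. This is not the authors' ACP-ALPM implementation: the synthetic features, entropy-based query rule, and all parameter values are assumptions standing in for the paper's peptide descriptors and selection strategy.

```python
# Hedged sketch: active learning on top of scikit-learn's LabelPropagation.
# Synthetic features replace the peptide descriptors; -1 marks unlabeled rows.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelPropagation

X, y_true = make_classification(n_samples=600, n_features=40, random_state=0)
y = np.full(y_true.shape, -1)
rng = np.random.default_rng(0)
seed_idx = rng.choice(len(y), size=30, replace=False)
y[seed_idx] = y_true[seed_idx]               # small initial labeled pool

for _ in range(5):                           # five illustrative query rounds
    model = LabelPropagation(kernel="rbf", gamma=0.25).fit(X, y)
    proba = model.predict_proba(X)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    entropy[y != -1] = -np.inf               # only rank still-unlabeled samples
    query = np.argsort(entropy)[-10:]        # 10 most uncertain samples
    y[query] = y_true[query]                 # simulated expert annotation

final = LabelPropagation(kernel="rbf", gamma=0.25).fit(X, y)
print("transductive accuracy:", (final.transduction_ == y_true).mean())
```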
Collapse
Affiliation(s)
- Lijun Cai
- Department of Information Science and Technology, Hunan University, Changsha, Hunan 410000, China
| | - Li Wang
- Department of Information Science and Technology, Hunan University, Changsha, Hunan 410000, China
| | - Xiangzheng Fu
- Department of Information Science and Technology, Hunan University, Changsha, Hunan 410000, China
| | - Xiangxiang Zeng
- Department of Information Science and Technology, Hunan University, Changsha, Hunan 410000, China
| |
Collapse
|
18
|
Aspart F, Bolmgren JL, Lavanchy JL, Beldi G, Woods MS, Padoy N, Hosgor E. ClipAssistNet: bringing real-time safety feedback to operating rooms. Int J Comput Assist Radiol Surg 2021; 17:5-13. [PMID: 34297269 PMCID: PMC8739308 DOI: 10.1007/s11548-021-02441-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Accepted: 06/17/2021] [Indexed: 12/18/2022]
Abstract
Purpose Cholecystectomy is one of the most common laparoscopic procedures. A critical phase of laparoscopic cholecystectomy consists in clipping the cystic duct and artery before cutting them. Surgeons can improve the clipping safety by ensuring full visibility of the clipper, while enclosing the artery or the duct with the clip applier jaws. This can prevent unintentional interaction with neighboring tissues or clip misplacement. In this article, we present a novel real-time feedback to ensure safe visibility of the instrument during this critical phase. This feedback incites surgeons to keep the tip of their clip applier visible while operating. Methods We present a new dataset of 300 laparoscopic cholecystectomy videos with frame-wise annotation of clipper tip visibility. We further present ClipAssistNet, a neural network-based image classifier which detects the clipper tip visibility in single frames. ClipAssistNet ensembles predictions from 5 neural networks trained on different subsets of the dataset. Results Our model learns to classify the clipper tip visibility by detecting its presence in the image. Measured on a separate test set, ClipAssistNet classifies the clipper tip visibility with an AUROC of 0.9107, and 66.15% specificity at 95% sensitivity. Additionally, it can perform real-time inference (16 FPS) on an embedded computing board; this enables its deployment in operating room settings. Conclusion This work presents a new application of computer-assisted surgery for laparoscopic cholecystectomy, namely real-time feedback on adequate visibility of the clip applier. We believe this feedback can increase surgeons’ attentiveness when departing from safe visibility during the critical clipping of the cystic duct and artery. Supplementary Information The online version contains supplementary material available at 10.1007/s11548-021-02441-x.
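The ensembling step described above, averaging the outputs of five independently trained frame classifiers, can be sketched in a few lines of PyTorch. The backbone choice, input size, and single-logit head below are assumptions for illustration and do not reproduce ClipAssistNet itself.

```python
# Hedged sketch: average the sigmoid outputs of five frame classifiers, in the
# spirit of the paper's five-network ensemble. Backbone and head are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def make_member() -> nn.Module:
    net = resnet18(weights=None)                       # one ensemble member
    net.fc = nn.Linear(net.fc.in_features, 1)          # single "tip visible" logit
    return net


ensemble = [make_member().eval() for _ in range(5)]    # untrained stand-ins here


@torch.no_grad()
def tip_visibility_probability(frame: torch.Tensor) -> float:
    """frame: (3, 224, 224) RGB tensor, already normalized for the backbone."""
    batch = frame.unsqueeze(0)
    probs = [torch.sigmoid(member(batch)).item() for member in ensemble]
    return sum(probs) / len(probs)                     # mean of member probabilities


print("P(tip visible) =", tip_visibility_probability(torch.randn(3, 224, 224)))
```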
Collapse
Affiliation(s)
- Florian Aspart
- Caresyntax GmbH, Komturstraße 18A, 12099, Berlin, Germany.
| | - Jon L Bolmgren
- Caresyntax GmbH, Komturstraße 18A, 12099, Berlin, Germany
| | - Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
| | - Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
| | | | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
| | - Enes Hosgor
- Caresyntax GmbH, Komturstraße 18A, 12099, Berlin, Germany
| |
Collapse
|
19
|
Shi X, Jin Y, Dou Q, Heng PA. Semi-supervised learning with progressive unlabeled data excavation for label-efficient surgical workflow recognition. Med Image Anal 2021; 73:102158. [PMID: 34325149 DOI: 10.1016/j.media.2021.102158] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 06/04/2021] [Accepted: 06/29/2021] [Indexed: 11/16/2022]
Abstract
Surgical workflow recognition is a fundamental task in computer-assisted surgery and a key component of various applications in operating rooms. Existing deep learning models have achieved promising results for surgical workflow recognition, heavily relying on a large amount of annotated videos. However, obtaining annotation is time-consuming and requires the domain knowledge of surgeons. In this paper, we propose a novel two-stage Semi-Supervised Learning method for label-efficient Surgical workflow recognition, named as SurgSSL. Our proposed SurgSSL progressively leverages the inherent knowledge held in the unlabeled data to a larger extent: from implicit unlabeled data excavation via motion knowledge excavation, to explicit unlabeled data excavation via pre-knowledge pseudo labeling. Specifically, we first propose a novel intra-sequence Visual and Temporal Dynamic Consistency (VTDC) scheme for implicit excavation. It enforces prediction consistency of the same data under perturbations in both spatial and temporal spaces, encouraging model to capture rich motion knowledge. We further perform explicit excavation by optimizing the model towards our pre-knowledge pseudo label. It is naturally generated by the VTDC regularized model with prior knowledge of unlabeled data encoded, and demonstrates superior reliability for model supervision compared with the label generated by existing methods. We extensively evaluate our method on two public surgical datasets of Cholec80 and M2CAI challenge dataset. Our method surpasses the state-of-the-art semi-supervised methods by a large margin, e.g., improving 10.5% Accuracy under the severest annotation regime of M2CAI dataset. Using only 50% labeled videos on Cholec80, our approach achieves competitive performance compared with full-data training method.
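A minimal sketch of the consistency idea behind the VTDC scheme, written as a generic PyTorch training step: two stochastic perturbations of the same unlabeled clips are pushed to agree while labeled clips contribute an ordinary supervised loss. The model, augmentation, and weighting below are placeholders, not the SurgSSL code.

```python
# Hedged sketch: one semi-supervised training step with a consistency penalty
# between two perturbed views of unlabeled clips, loosely in the spirit of VTDC.
import torch
import torch.nn.functional as F


def semi_supervised_step(model, labeled_clips, labels, unlabeled_clips,
                         augment, lambda_consistency=1.0):
    # Supervised term on the small labeled set.
    loss_sup = F.cross_entropy(model(labeled_clips), labels)

    # Consistency term: two stochastic perturbations of the same clips
    # (e.g. spatial jitter, temporal frame dropping) should agree.
    p_a = F.softmax(model(augment(unlabeled_clips)), dim=1)
    p_b = F.softmax(model(augment(unlabeled_clips)), dim=1)
    loss_cons = F.mse_loss(p_a, p_b)

    return loss_sup + lambda_consistency * loss_cons


if __name__ == "__main__":
    model = torch.nn.Linear(128, 7)                    # stand-in clip encoder, 7 phases
    jitter = lambda x: x + 0.01 * torch.randn_like(x)  # toy perturbation
    loss = semi_supervised_step(model,
                                torch.randn(4, 128), torch.randint(0, 7, (4,)),
                                torch.randn(8, 128), jitter)
    print("combined loss:", loss.item())
```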
Collapse
Affiliation(s)
- Xueying Shi
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
| | - Yueming Jin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong.
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong; T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong; T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
| |
Collapse
|
20
|
Meining A. Endoneering: A new perspective for basic research in gastrointestinal endoscopy. United European Gastroenterol J 2021; 8:241-245. [PMID: 32310738 DOI: 10.1177/2050640620913433] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Affiliation(s)
- Alexander Meining
- Department of Gastroenterology, University Hospital of Würzburg, Würzburg, Germany; Dedicated to Professor Meinhard Classen, who sadly passed away on 6 October 2019
| |
Collapse
|
21
|
Garrow CR, Kowalewski KF, Li L, Wagner M, Schmidt MW, Engelhardt S, Hashimoto DA, Kenngott HG, Bodenstedt S, Speidel S, Müller-Stich BP, Nickel F. Machine Learning for Surgical Phase Recognition: A Systematic Review. Ann Surg 2021; 273:684-693. [PMID: 33201088 DOI: 10.1097/sla.0000000000004425] [Citation(s) in RCA: 134] [Impact Index Per Article: 33.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
OBJECTIVE To provide an overview of ML models and data streams utilized for automated surgical phase recognition. BACKGROUND Phase recognition identifies different steps and phases of an operation. ML is an evolving technology that allows analysis and interpretation of huge data sets. Automation of phase recognition based on data inputs is essential for optimization of workflow, surgical training, intraoperative assistance, patient safety, and efficiency. METHODS A systematic review was performed according to the Cochrane recommendations and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. PubMed, Web of Science, IEEExplore, GoogleScholar, and CiteSeerX were searched. Literature describing phase recognition based on ML models and the capture of intraoperative signals during general surgery procedures was included. RESULTS A total of 2254 titles/abstracts were screened, and 35 full-texts were included. Most commonly used ML models were Hidden Markov Models and Artificial Neural Networks with a trend towards higher complexity over time. Most frequently used data types were feature learning from surgical videos and manual annotation of instrument use. Laparoscopic cholecystectomy was used most commonly, often achieving accuracy rates over 90%, though there was no consistent standardization of defined phases. CONCLUSIONS ML for surgical phase recognition can be performed with high accuracy, depending on the model, data type, and complexity of surgery. Different intraoperative data inputs such as video and instrument type can successfully be used. Most ML models still require significant amounts of manual expert annotations for training. The ML models may drive surgical workflow towards standardization, efficiency, and objectiveness to improve patient outcome in the future. REGISTRATION PROSPERO CRD42018108907.
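To make the reviewed model family concrete, the sketch below applies HMM-style temporal smoothing (Viterbi decoding with a sticky transition matrix) to noisy per-frame phase probabilities. The three phases, transition values, and random emissions are purely illustrative and are not taken from any study included in the review.

```python
# Hedged sketch: HMM-style temporal smoothing of noisy per-frame phase
# probabilities via Viterbi decoding. Phases, transitions, and emissions are
# invented for illustration only.
import numpy as np


def viterbi(frame_log_probs: np.ndarray, log_trans: np.ndarray) -> np.ndarray:
    """frame_log_probs: (T, K) log P(phase | frame); log_trans: (K, K)."""
    T, K = frame_log_probs.shape
    score = frame_log_probs[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans            # rows: previous phase, cols: next
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + frame_log_probs[t]
    path = np.empty(T, dtype=int)
    path[-1] = score.argmax()
    for t in range(T - 2, -1, -1):                   # backtrack the best path
        path[t] = back[t + 1, path[t + 1]]
    return path


K = 3                                                # three illustrative phases
trans = np.full((K, K), 0.01) + np.eye(K) * 0.98     # phases rarely switch
trans /= trans.sum(axis=1, keepdims=True)
noisy = np.log(np.random.default_rng(0).dirichlet(np.ones(K), size=200))
print(viterbi(noisy, np.log(trans)))
```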
Collapse
Affiliation(s)
- Carly R Garrow
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Karl-Friedrich Kowalewski
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Department of Urology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Linhong Li
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Martin Wagner
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Mona W Schmidt
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Sandy Engelhardt
- Department of Computer Science, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Daniel A Hashimoto
- Department of Surgery, Massachusetts General Hospital, Boston, Massachusetts
| | - Hannes G Kenngott
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Sebastian Bodenstedt
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Beat P Müller-Stich
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Felix Nickel
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| |
Collapse
|
22
|
Sharma H, Drukker L, Chatelain P, Droste R, Papageorghiou AT, Noble JA. Knowledge representation and learning of operator clinical workflow from full-length routine fetal ultrasound scan videos. Med Image Anal 2021; 69:101973. [PMID: 33550004 DOI: 10.1016/j.media.2021.101973] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2020] [Revised: 11/18/2020] [Accepted: 01/11/2021] [Indexed: 12/25/2022]
Abstract
Ultrasound is a widely used imaging modality, yet it is well-known that scanning can be highly operator-dependent and difficult to perform, which limits its wider use in clinical practice. The literature on understanding what makes clinical sonography hard to learn and how sonography varies in the field is sparse, restricted to small-scale studies on the effectiveness of ultrasound training schemes, the role of ultrasound simulation in training, and the effect of introducing scanning guidelines and standards on diagnostic image quality. The Big Data era, and the recent and rapid emergence of machine learning as a more mainstream large-scale data analysis technique, presents a fresh opportunity to study sonography in the field at scale for the first time. Large-scale analysis of video recordings of full-length routine fetal ultrasound scans offers the potential to characterise differences between the scanning proficiency of experts and trainees that would be tedious and time-consuming to do manually due to the vast amounts of data. Such research would be informative to better understand operator clinical workflow when conducting ultrasound scans to support skills training, optimise scan times, and inform building better user-machine interfaces. This paper is to our knowledge the first to address sonography data science, which we consider in the context of second-trimester fetal sonography screening. Specifically, we present a fully-automatic framework to analyse operator clinical workflow solely from full-length routine second-trimester fetal ultrasound scan videos. An ultrasound video dataset containing more than 200 hours of scan recordings was generated for this study. We developed an original deep learning method to temporally segment the ultrasound video into semantically meaningful segments (the video description). The resulting semantic annotation was then used to depict operator clinical workflow (the knowledge representation). Machine learning was applied to the knowledge representation to characterise operator skills and assess operator variability. For video description, our best-performing deep spatio-temporal network shows favourable results in cross-validation (accuracy: 91.7%), statistical analysis (correlation: 0.98, p < 0.05) and retrospective manual validation (accuracy: 76.4%). For knowledge representation of operator clinical workflow, a three-level abstraction scheme consisting of a Subject-specific Timeline Model (STM), Summary of Timeline Features (STF), and an Operator Graph Model (OGM), was introduced that led to a significant decrease in dimensionality and computational complexity compared to raw video data. The workflow representations were learnt to discriminate between operator skills, where a proposed convolutional neural network-based model showed most promising performance (cross-validation accuracy: 98.5%, accuracy on unseen operators: 76.9%). These were further used to derive operator-specific scanning signatures and operator variability in terms of type, order and time distribution of constituent tasks.
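One of the paper's abstractions, the Summary of Timeline Features, can be imagined as condensing a per-frame task-label sequence into a small vector of per-task time fractions plus a switch rate. The sketch below is an assumption about what such a summary might look like, with an invented task set; it is not the authors' implementation.

```python
# Hedged sketch: condense a per-frame task-label sequence from one scan into
# per-task time fractions plus a switch rate, as an assumed stand-in for the
# paper's Summary of Timeline Features. The task set is invented.
import numpy as np

TASKS = ["acquisition", "freeze", "measurement", "annotation"]


def summarize_timeline(frame_labels: np.ndarray) -> np.ndarray:
    fractions = np.array([(frame_labels == k).mean() for k in range(len(TASKS))])
    switch_rate = (np.diff(frame_labels) != 0).mean()   # how often the task changes
    return np.append(fractions, switch_rate)


rng = np.random.default_rng(1)
labels = rng.integers(0, len(TASKS), size=3000)          # stand-in per-frame labels
summary = summarize_timeline(labels)
print(dict(zip(TASKS + ["switch_rate"], summary.round(3))))
```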
Collapse
Affiliation(s)
- Harshita Sharma
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom.
| | - Lior Drukker
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, United Kingdom
| | - Pierre Chatelain
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
| | - Richard Droste
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
| | - Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, United Kingdom
| | - J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
23
|
Pereira KR. Harnessing Artificial Intelligence in Maxillofacial Surgery. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_322-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
24
|
Bodenstedt S, Wagner M, Müller-Stich BP, Weitz J, Speidel S. Artificial Intelligence-Assisted Surgery: Potential and Challenges. Visc Med 2020; 36:450-455. [PMID: 33447600 PMCID: PMC7768095 DOI: 10.1159/000511351] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 09/03/2020] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND Artificial intelligence (AI) has recently achieved considerable success in different domains including medical applications. Although current advances are expected to impact surgery, up until now AI has not been able to leverage its full potential due to several challenges that are specific to that field. SUMMARY This review summarizes data-driven methods and technologies needed as a prerequisite for different AI-based assistance functions in the operating room. Potential effects of AI usage in surgery will be highlighted, concluding with ongoing challenges to enabling AI for surgery. KEY MESSAGES AI-assisted surgery will enable data-driven decision-making via decision support systems and cognitive robotic assistance. The use of AI for workflow analysis will help provide appropriate assistance in the right context. The requirements for such assistance must be defined by surgeons in close cooperation with computer scientists and engineers. Once the existing challenges will have been solved, AI assistance has the potential to improve patient care by supporting the surgeon without replacing him or her.
Collapse
Affiliation(s)
- Sebastian Bodenstedt
- Division of Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Beat Peter Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Jürgen Weitz
- Department for Visceral, Thoracic and Vascular Surgery, University Hospital Carl-Gustav-Carus, TU Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| |
Collapse
|
25
|
Ward TM, Mascagni P, Ban Y, Rosman G, Padoy N, Meireles O, Hashimoto DA. Computer vision in surgery. Surgery 2020; 169:1253-1256. [PMID: 33272610 DOI: 10.1016/j.surg.2020.10.039] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Revised: 10/09/2020] [Accepted: 10/10/2020] [Indexed: 12/17/2022]
Abstract
The fields of computer vision (CV) and artificial intelligence (AI) have undergone rapid advancements in the past decade, many of which have been applied to the analysis of intraoperative video. These advances are driven by wide-spread application of deep learning, which leverages multiple layers of neural networks to teach computers complex tasks. Prior to these advances, applications of AI in the operating room were limited by our relative inability to train computers to accurately understand images with traditional machine learning (ML) techniques. The development and refining of deep neural networks that can now accurately identify objects in images and remember past surgical events has sparked a surge in the applications of CV to analyze intraoperative video and has allowed for the accurate identification of surgical phases (steps) and instruments across a variety of procedures. In some cases, CV can even identify operative phases with accuracy similar to surgeons. Future research will likely expand on this foundation of surgical knowledge using larger video datasets and improved algorithms with greater accuracy and interpretability to create clinically useful AI models that gain widespread adoption and augment the surgeon's ability to provide safer care for patients everywhere.
Collapse
Affiliation(s)
- Thomas M Ward
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA
| | - Pietro Mascagni
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Yutong Ban
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA
| | - Guy Rosman
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
| | - Ozanan Meireles
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA
| | - Daniel A Hashimoto
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA.
| |
Collapse
|
26
|
Chadebecq F, Vasconcelos F, Mazomenos E, Stoyanov D. Computer Vision in the Surgical Operating Room. Visc Med 2020; 36:456-462. [PMID: 33447601 DOI: 10.1159/000511934] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 09/30/2020] [Indexed: 12/20/2022] Open
Abstract
Background Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that is used by surgeons to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to provide support in identifying instruments, structures, or activities both in real-time during procedures and postoperatively for analytics and understanding of surgical processes. Summary In this paper, we provide a succinct perspective on the use of AI and especially computer vision to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome to efficiently make use of vision-based approaches into the clinic.
Collapse
Affiliation(s)
- François Chadebecq
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| | - Francisco Vasconcelos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| | - Evangelos Mazomenos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| | - Danail Stoyanov
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| |
Collapse
|
27
|
LRTD: long-range temporal dependency based active learning for surgical workflow recognition. Int J Comput Assist Radiol Surg 2020; 15:1573-1584. [PMID: 32588246 DOI: 10.1007/s11548-020-02198-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2019] [Accepted: 05/18/2020] [Indexed: 10/24/2022]
Abstract
PURPOSE Automatic surgical workflow recognition in video is an essentially fundamental yet challenging problem for developing computer-assisted and robotic-assisted surgery. Existing approaches with deep learning have achieved remarkable performance on analysis of surgical videos, however, heavily relying on large-scale labelled datasets. Unfortunately, the annotation is not often available in abundance, because it requires the domain knowledge of surgeons. Even for experts, it is very tedious and time-consuming to do a sufficient amount of annotations. METHODS In this paper, we propose a novel active learning method for cost-effective surgical video analysis. Specifically, we propose a non-local recurrent convolutional network, which introduces non-local block to capture the long-range temporal dependency (LRTD) among continuous frames. We then formulate an intra-clip dependency score to represent the overall dependency within this clip. By ranking scores among clips in unlabelled data pool, we select the clips with weak dependencies to annotate, which indicates the most informative ones to better benefit network training. RESULTS We validate our approach on a large surgical video dataset (Cholec80) by performing surgical workflow recognition task. By using our LRTD based selection strategy, we can outperform other state-of-the-art active learning methods who only consider neighbor-frame information. Using only up to 50% of samples, our approach can exceed the performance of full-data training. CONCLUSION By modeling the intra-clip dependency, our LRTD based strategy shows stronger capability to select informative video clips for annotation compared with other active learning methods, through the evaluation on a popular public surgical dataset. The results also show the promising potential of our framework for reducing annotation workload in the clinical practice.
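The intra-clip dependency score described above can be approximated, for illustration, by averaging the off-diagonal entries of a frame-to-frame similarity matrix and then querying the clips with the weakest internal dependency. The cosine-similarity affinity and selection size below are assumptions, not the paper's non-local formulation.

```python
# Hedged sketch: approximate an intra-clip dependency score by the mean
# off-diagonal cosine similarity between frame embeddings, then query the
# clips with the weakest dependency for annotation.
import numpy as np


def intra_clip_dependency(clip_features: np.ndarray) -> float:
    """clip_features: (T, D) per-frame embeddings from any backbone."""
    norms = np.linalg.norm(clip_features, axis=1, keepdims=True) + 1e-8
    affinity = (clip_features / norms) @ (clip_features / norms).T
    off_diag = affinity[~np.eye(len(affinity), dtype=bool)]
    return float(off_diag.mean())


rng = np.random.default_rng(0)
clips = [rng.normal(size=(16, 256)) for _ in range(100)]   # 100 unlabeled clips
scores = np.array([intra_clip_dependency(c) for c in clips])
to_annotate = np.argsort(scores)[:10]                      # weakest dependency first
print("query these clip indices for annotation:", to_annotate)
```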
Collapse
|
28
|
Baxter RD, Fann JI, DiMaio JM, Lobdell K. Digital Health Primer for Cardiothoracic Surgeons. Ann Thorac Surg 2020; 110:364-372. [PMID: 32268139 DOI: 10.1016/j.athoracsur.2020.02.072] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 01/03/2020] [Accepted: 02/23/2020] [Indexed: 12/12/2022]
Abstract
The burgeoning demands for quality, safety, and value in cardiothoracic surgery, in combination with the advancement and acceleration of digital health solutions and information technology, provide a unique opportunity to improve efficiency and effectiveness simultaneously in cardiothoracic surgery. This primer on digital health explores and reviews data integration, data processing, complex modeling, telehealth with remote monitoring, and cybersecurity as they shape the future of cardiothoracic surgery.
Collapse
Affiliation(s)
- Ronald D Baxter
- Department of Cardiothoracic Surgery, Baylor Scott and White, The Heart Hospital, Plano, Texas
| | - James I Fann
- Department of Cardiothoracic Surgery, Stanford University Medical Center, Stanford, California
| | - J Michael DiMaio
- Department of Cardiothoracic Surgery, Baylor Scott and White, The Heart Hospital, Plano, Texas
| | - Kevin Lobdell
- Sanger Heart and Vascular Institute, Atrium Health, Charlotte, North Carolina.
| |
Collapse
|
29
|
Vercauteren T, Unberath M, Padoy N, Navab N. CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:198-214. [PMID: 31920208 PMCID: PMC6952279 DOI: 10.1109/jproc.2019.2946993] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 09/12/2019] [Accepted: 10/04/2019] [Indexed: 05/10/2023]
Abstract
Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy and speed which is already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making ultimately producing more precise and reliable interventions.
Collapse
Affiliation(s)
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London WC2R 2LS, U.K.
| | - Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Nicolas Padoy
- ICube institute, CNRS, IHU Strasbourg, University of Strasbourg, 67081 Strasbourg, France
| | - Nassir Navab
- Fakultät für Informatik, Technische Universität München, 80333 Munich, Germany
| |
Collapse
|