1
Butler RM, Schouten AM, van der Eijk AC, van der Elst M, Hendriks BHW, van den Dobbelsteen JJ. Towards automatic quantification of operating table interaction in operating rooms. Int J Comput Assist Radiol Surg 2025. PMID: 40319437. DOI: 10.1007/s11548-025-03363-8.
Abstract
PURPOSE Perioperative staff shortages are a problem in hospitals worldwide, and keeping staff content and motivated is a challenge in today's busy hospital setting. New operating room technologies aim to increase safety and efficiency, causing a shift from interaction with patients to interaction with technology. Objectively measuring this shift could aid the design of supportive technological products or the optimal planning of high-tech procedures. METHODS Thirty-five gynaecological procedures at three different technology levels were recorded: open (OS), minimally invasive (MIS) and robot-assisted (RAS) surgery. We annotated interaction between staff and the patient, and we propose an algorithm that detects interaction with the operating table from staff posture and movement. Interaction is expressed as a percentage of total working time. RESULTS The proposed algorithm measures operating table interaction of 70.4%, 70.3% and 30.1% during OS, MIS and RAS, respectively. Annotations yield patient interaction percentages of 37.6%, 38.3% and 24.6%. Algorithm measurements over time show peaks in operating table and patient interaction at anomalous events or workflow phase transitions. CONCLUSIONS The annotations show less operating table and patient interaction during RAS than during OS and MIS. Annotated patient interaction and measured operating table interaction show similar differences between procedures and workflow phases. The visual complexity of operating rooms complicates pose tracking, deteriorating the quality of the algorithm's input. The proposed algorithm shows promise as a component in context-aware event or workflow phase detection.
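The core quantity above, interaction time as a share of total working time, can be sketched from per-frame detections. This is a minimal illustration with invented frame flags; the paper's actual algorithm infers interaction from tracked staff posture and movement, which is not reproduced here:

```python
import numpy as np

def interaction_percentage(near_table: np.ndarray, walking: np.ndarray) -> float:
    """Share of frames (in %) in which a staff member counts as interacting
    with the operating table: close to the table and not merely walking past."""
    interacting = near_table & ~walking
    return 100.0 * float(interacting.mean())

# Toy trace of 10 video frames for one staff member (hypothetical values).
near = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
walk = np.array([0, 0, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
print(interaction_percentage(near, walk))  # 40.0
```

Aggregating such percentages per workflow phase would yield curves of the kind the authors inspect for peaks at phase transitions.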
Affiliation(s)
- Rick M Butler
- Delft University of Technology, Delft, the Netherlands.
- Anne M Schouten
- Delft University of Technology, Delft, the Netherlands
- Leiden University Medical Center, Leiden, the Netherlands
- Anne C van der Eijk
- Delft University of Technology, Delft, the Netherlands
- Leiden University Medical Center, Leiden, the Netherlands
- Maarten van der Elst
- Delft University of Technology, Delft, the Netherlands
- Reinier de Graaf Gasthuis, Delft, the Netherlands
- Benno H W Hendriks
- Delft University of Technology, Delft, the Netherlands
- Philips Healthcare, Best, the Netherlands
2
Leivaditis V, Maniatopoulos AA, Lausberg H, Mulita F, Papatriantafyllou A, Liolis E, Beltsios E, Adamou A, Kontodimopoulos N, Dahm M. Artificial Intelligence in Thoracic Surgery: A Review Bridging Innovation and Clinical Practice for the Next Generation of Surgical Care. J Clin Med 2025; 14:2729. PMID: 40283559. PMCID: PMC12027631. DOI: 10.3390/jcm14082729.
Abstract
Background: Artificial intelligence (AI) is rapidly transforming thoracic surgery by enhancing diagnostic accuracy, surgical precision, intraoperative guidance, and postoperative management. AI-driven technologies, including machine learning (ML), deep learning, computer vision, and robotic-assisted surgery, have the potential to optimize clinical workflows and improve patient outcomes. However, challenges such as data integration, ethical concerns, and regulatory barriers must be addressed to ensure AI's safe and effective implementation. This review aims to analyze the current applications, benefits, limitations, and future directions of AI in thoracic surgery. Methods: This review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A comprehensive literature search was performed using PubMed, Scopus, Web of Science, and Cochrane Library for studies published up to January 2025. Relevant articles were selected based on predefined inclusion and exclusion criteria, focusing on AI applications in thoracic surgery, including diagnostics, robotic-assisted surgery, intraoperative guidance, and postoperative care. A risk of bias assessment was conducted using the Cochrane Risk of Bias Tool and ROBINS-I for non-randomized studies. Results: Out of 279 identified studies, 36 met the inclusion criteria for qualitative synthesis, highlighting AI's growing role in diagnostic accuracy, surgical precision, intraoperative guidance, and postoperative care in thoracic surgery. AI-driven imaging analysis and radiomics have improved pulmonary nodule detection, lung cancer classification, and lymph node metastasis prediction, while robotic-assisted thoracic surgery (RATS) has enhanced surgical accuracy, reduced operative times, and improved recovery rates. Intraoperatively, AI-powered image-guided navigation, augmented reality (AR), and real-time decision-support systems have optimized surgical planning and safety. 
Postoperatively, AI-driven predictive models and wearable monitoring devices have enabled early complication detection and improved patient follow-up. However, challenges remain, including algorithmic biases, a lack of multicenter validation, high implementation costs, and ethical concerns regarding data security and clinical accountability. Despite these limitations, AI has shown significant potential to enhance surgical outcomes, requiring further research and standardized validation for widespread adoption. Conclusions: AI is poised to revolutionize thoracic surgery by enhancing decision-making, improving patient outcomes, and optimizing surgical workflows. However, widespread adoption requires addressing key limitations through multicenter validation studies, standardized AI frameworks, and ethical AI governance. Future research should focus on digital twin technology, federated learning, and explainable AI (XAI) to improve AI interpretability, reliability, and accessibility. With continued advancements and responsible integration, AI will play a pivotal role in shaping the next generation of precision thoracic surgery.
Affiliation(s)
- Vasileios Leivaditis
- Department of Cardiothoracic and Vascular Surgery, Westpfalz Klinikum, 67655 Kaiserslautern, Germany
- Henning Lausberg
- Department of Cardiothoracic and Vascular Surgery, Westpfalz Klinikum, 67655 Kaiserslautern, Germany
- Francesk Mulita
- Department of General Surgery, General Hospital of Eastern Achaia—Unit of Aigio, 25100 Aigio, Greece
- Athanasios Papatriantafyllou
- Department of Cardiothoracic and Vascular Surgery, Westpfalz Klinikum, 67655 Kaiserslautern, Germany
- Elias Liolis
- Department of Oncology, General University Hospital of Patras, 26504 Patras, Greece
- Eleftherios Beltsios
- Department of Anesthesiology and Intensive Care, Hannover Medical School, 30625 Hannover, Germany
- Antonis Adamou
- Institute of Diagnostic and Interventional Neuroradiology, Hannover Medical School, 30625 Hannover, Germany
- Nikolaos Kontodimopoulos
- Department of Economics and Sustainable Development, Harokopio University, 17676 Athens, Greece
- Manfred Dahm
- Department of Cardiothoracic and Vascular Surgery, Westpfalz Klinikum, 67655 Kaiserslautern, Germany
3
Sugiyama T, Sugimori H, Tang M, Fujimura M. Artificial Intelligence for Patient Safety and Surgical Education in Neurosurgery. JMA J 2025; 8:76-85. PMID: 39926071. PMCID: PMC11799567. DOI: 10.31662/jmaj.2024-0141.
Abstract
Neurosurgery has evolved alongside technological innovations; however, these advances have also introduced greater complexity into clinical practice. Neurosurgery remains a demanding and high-risk field that requires a broad range of skills. Artificial intelligence (AI) has immense potential in neurosurgery given its ability to rapidly analyze large volumes of clinical data generated in modern clinical environments. An expanding body of literature has demonstrated that AI enhances various aspects of neurosurgery, including diagnostics, prognostication, decision-making, data management, education, and clinical studies. AI applications are expected to reduce medical errors and costs, broaden healthcare accessibility, and ultimately boost patient safety and surgical education. Nevertheless, AI application in neurosurgery remains practically limited because of several challenges, such as the diversity and volume of clinical training data collection, concerns regarding data quality, algorithmic bias, transparency (explainability and interpretability), ethical issues, and regulatory implications. To comprehensively discuss the potential benefits, future directions, and limitations of AI in neurosurgery, this review examined recent studies on AI technology and its applications in this field, focusing on intraoperative decision support and surgical education.
Affiliation(s)
- Taku Sugiyama
- Department of Neurosurgery, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Medical AI Research and Development Center, Hokkaido University Hospital, Sapporo, Japan
- Hiroyuki Sugimori
- Medical AI Research and Development Center, Hokkaido University Hospital, Sapporo, Japan
- Department of Biomedical Science and Engineering, Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
- Minghui Tang
- Medical AI Research and Development Center, Hokkaido University Hospital, Sapporo, Japan
- Department of Diagnostic Imaging, Hokkaido University Faculty of Medicine and Graduate School of Medicine, Sapporo, Japan
- Miki Fujimura
- Department of Neurosurgery, Hokkaido University Graduate School of Medicine, Sapporo, Japan
4
Leivaditis V, Beltsios E, Papatriantafyllou A, Grapatsas K, Mulita F, Kontodimopoulos N, Baikoussis NG, Tchabashvili L, Tasios K, Maroulis I, Dahm M, Koletsis E. Artificial Intelligence in Cardiac Surgery: Transforming Outcomes and Shaping the Future. Clin Pract 2025; 15:17. PMID: 39851800. PMCID: PMC11763739. DOI: 10.3390/clinpract15010017.
Abstract
Background: Artificial intelligence (AI) has emerged as a transformative technology in healthcare, with its integration into cardiac surgery offering significant advancements in precision, efficiency, and patient outcomes. However, a comprehensive understanding of AI's applications, benefits, challenges, and future directions in cardiac surgery is needed to inform its safe and effective implementation. Methods: A systematic review was conducted following PRISMA guidelines. Literature searches were performed in PubMed, Scopus, Cochrane Library, Google Scholar, and Web of Science, covering publications from January 2000 to November 2024. Studies focusing on AI applications in cardiac surgery, including risk stratification, surgical planning, intraoperative guidance, and postoperative management, were included. Data extraction and quality assessment were conducted using standardized tools, and findings were synthesized narratively. Results: A total of 121 studies were included in this review. AI demonstrated superior predictive capabilities in risk stratification, with machine learning models outperforming traditional scoring systems in mortality and complication prediction. Robotic-assisted systems enhanced surgical precision and minimized trauma, while computer vision and augmented cognition improved intraoperative guidance. Postoperative AI applications showed potential in predicting complications, supporting patient monitoring, and reducing healthcare costs. However, challenges such as data quality, validation, ethical considerations, and integration into clinical workflows remain significant barriers to widespread adoption. Conclusions: AI has the potential to revolutionize cardiac surgery by enhancing decision making, surgical accuracy, and patient outcomes. Addressing limitations related to data quality, bias, validation, and regulatory frameworks is essential for its safe and effective implementation. 
Future research should focus on interdisciplinary collaboration, robust testing, and the development of ethical and transparent AI systems to ensure equitable and sustainable advancements in cardiac surgery.
Affiliation(s)
- Vasileios Leivaditis
- Department of Cardiothoracic and Vascular Surgery, WestpfalzKlinikum, 67655 Kaiserslautern, Germany
- Eleftherios Beltsios
- Department of Anesthesiology and Intensive Care, Hannover Medical School, 30625 Hannover, Germany
- Athanasios Papatriantafyllou
- Department of Cardiothoracic and Vascular Surgery, WestpfalzKlinikum, 67655 Kaiserslautern, Germany
- Konstantinos Grapatsas
- Department of Thoracic Surgery and Thoracic Endoscopy, Ruhrlandklinik, West German Lung Center, University Hospital Essen, University Duisburg-Essen, 45141 Essen, Germany
- Francesk Mulita
- Department of General Surgery, General University Hospital of Patras, 26504 Patras, Greece
- Nikolaos Kontodimopoulos
- Department of Economics and Sustainable Development, Harokopio University, 17778 Athens, Greece
- Nikolaos G. Baikoussis
- Department of Cardiac Surgery, Ippokrateio General Hospital of Athens, 11527 Athens, Greece
- Levan Tchabashvili
- Department of General Surgery, General University Hospital of Patras, 26504 Patras, Greece
- Konstantinos Tasios
- Department of General Surgery, General University Hospital of Patras, 26504 Patras, Greece
- Ioannis Maroulis
- Department of General Surgery, General University Hospital of Patras, 26504 Patras, Greece
- Manfred Dahm
- Department of Cardiothoracic and Vascular Surgery, WestpfalzKlinikum, 67655 Kaiserslautern, Germany
- Efstratios Koletsis
- Department of Cardiothoracic Surgery, General University Hospital of Patras, 26504 Patras, Greece
5
Gerats BGA, Wolterink JM, Broeders IAMJ. NeRF-OR: neural radiance fields for operating room scene reconstruction from sparse-view RGB-D videos. Int J Comput Assist Radiol Surg 2025; 20:147-156. PMID: 39271573. PMCID: PMC11758168. DOI: 10.1007/s11548-024-03261-5.
Abstract
PURPOSE RGB-D cameras in the operating room (OR) provide synchronized views of complex surgical scenes. Assimilation of this multi-view data into a unified representation allows for downstream tasks such as object detection and tracking, pose estimation, and action recognition. Neural radiance fields (NeRFs) can provide continuous representations of complex scenes with a limited memory footprint. However, existing NeRF methods perform poorly in real-world OR settings, where a small set of cameras capture the room from entirely different vantage points. In this work, we propose NeRF-OR, a method for 3D reconstruction of dynamic surgical scenes in the OR. METHODS Where other methods for sparse-view datasets use either time-of-flight sensor depth or dense depth estimated from color images, NeRF-OR uses a combination of both. The depth estimations mitigate the missing values that occur in sensor depth images due to reflective materials and object boundaries. We propose to supervise with surface normals calculated from the estimated depths, because these are largely scale invariant. RESULTS We fit NeRF-OR to static surgical scenes in the 4D-OR dataset and show that its representations are geometrically accurate, where the state of the art collapses to sub-optimal solutions. Compared to earlier work, NeRF-OR grasps fine scene details while training 30× faster. Additionally, NeRF-OR can capture whole-surgery videos while synthesizing views at intermediate time values with an average PSNR of 24.86 dB. Lastly, we find that our approach has merit in sparse-view settings beyond those in the OR, by benchmarking on the NVS-RGBD dataset, which contains as few as three training views. NeRF-OR synthesizes images with a PSNR of 26.72 dB, a 1.7% improvement over the state of the art. CONCLUSION Our results show that NeRF-OR allows for novel view synthesis from videos captured by a small number of cameras with entirely different vantage points, which is the typical camera setting in the OR. Code is available at github.com/Beerend/NeRF-OR.
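The PSNR values reported above compare synthesized and reference views pixel-wise. A minimal sketch of the metric on synthetic images (this is the standard definition, not the authors' evaluation code; image shapes and noise level are invented):

```python
import numpy as np

def psnr(reference: np.ndarray, rendered: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images valued in [0, max_val]."""
    mse = float(np.mean((reference - rendered) ** 2))
    return 10.0 * float(np.log10(max_val ** 2 / mse))

# Synthetic "ground-truth" view and a lightly corrupted "render" of it.
rng = np.random.default_rng(0)
reference = rng.random((64, 64, 3))
rendered = np.clip(reference + rng.normal(scale=0.01, size=reference.shape), 0.0, 1.0)
print(psnr(reference, rendered))  # around 40 dB for this small noise level
```

Higher is better: the paper's 24.86 dB over whole-surgery videos corresponds to a visibly noisier reconstruction than this toy example.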
Affiliation(s)
- Beerend G A Gerats
- AI & Data Science Center, Meander Medical Center, Amersfoort, The Netherlands.
- Department of Robotics and Mechatronics, University of Twente, Enschede, The Netherlands.
- Jelmer M Wolterink
- Department of Applied Mathematics and Technical Medical Center, University of Twente, Enschede, The Netherlands
- Ivo A M J Broeders
- AI & Data Science Center, Meander Medical Center, Amersfoort, The Netherlands
- Department of Robotics and Mechatronics, University of Twente, Enschede, The Netherlands
6
Pei J, Guo D, Zhang J, Lin M, Jin Y, Heng PA. S²Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR. IEEE Trans Med Imaging 2025; 44:361-372. PMID: 39146166. DOI: 10.1109/tmi.2024.3444279.
Abstract
Scene graph generation (SGG) of surgical procedures is crucial in enhancing holistic cognitive intelligence in the operating room (OR). However, previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes with pose estimation and object detection. This pipeline may potentially compromise the flexibility of learning multimodal representations, consequently constraining the overall effectiveness. In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR, which complementarily leverages multi-view 2D scenes and 3D point clouds for SGG in an end-to-end manner. Concretely, our model embraces a View-Sync Transfusion scheme to encourage multi-view visual information interaction. Concurrently, a Geometry-Visual Cohesion operation is designed to integrate the synergic 2D semantic features into 3D point cloud features. Moreover, based on the augmented feature, we propose a novel relation-sensitive transformer decoder that embeds dynamic entity-pair queries and relational trait priors, which enables the direct prediction of entity-pair relations for graph generation without intermediate steps. Extensive experiments have validated the superior SGG performance and lower computational cost of S2Former-OR on the 4D-OR benchmark, compared with current OR-SGG methods, e.g., a 3-percentage-point increase in precision and a 24.2M reduction in model parameters. We further compared our method with generic single-stage SGG methods using broader metrics for a comprehensive evaluation, achieving consistently better performance. Our source code is available at: https://github.com/PJLallen/S2Former-OR.
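The precision figure quoted above counts predicted (subject, predicate, object) triplets that match the annotated scene graph. A toy sketch of that evaluation idea (the triplets are invented; the 4D-OR benchmark's exact matching protocol is not reproduced here):

```python
def triplet_precision(predicted, ground_truth):
    """Fraction of predicted (subject, predicate, object) triplets that
    also appear in the annotated scene graph."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    return len(predicted & ground_truth) / len(predicted) if predicted else 0.0

annotated = {("head_surgeon", "sawing", "patient"),
             ("assistant", "holding", "instrument")}
predicted = {("head_surgeon", "sawing", "patient"),
             ("assistant", "touching", "instrument")}
print(triplet_precision(predicted, annotated))  # 0.5
```

A percentage-point gain in this metric means proportionally more of the model's emitted relations agree with the annotations.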
7
Özsoy E, Czempiel T, Örnek EP, Eck U, Tombari F, Navab N. Holistic OR domain modeling: a semantic scene graph approach. Int J Comput Assist Radiol Surg 2024; 19:791-799. PMID: 37823976. PMCID: PMC11098880. DOI: 10.1007/s11548-023-03022-w.
Abstract
PURPOSE Surgical procedures take place in highly complex operating rooms (OR), involving medical staff, patients, devices and their interactions. Until now, only medical professionals have been capable of comprehending these intricate links and interactions. This work advances the field toward automated, comprehensive and semantic understanding and modeling of the OR domain by introducing semantic scene graphs (SSG) as a novel approach to describing and summarizing surgical environments in a structured and semantically rich manner. METHODS We create the first open-source 4D SSG dataset. 4D-OR includes simulated total knee replacement surgeries captured by RGB-D sensors in a realistic OR simulation center. It includes annotations for SSGs, human and object pose, clinical roles and surgical phase labels. We introduce a neural network-based SSG generation pipeline for semantic reasoning in the OR and apply our approach to two downstream tasks: clinical role prediction and surgical phase recognition. RESULTS We show that our pipeline can successfully reason within the OR domain. The capabilities of our scene graphs are further highlighted by their successful application to clinical role prediction and surgical phase recognition tasks. CONCLUSION This work paves the way for multimodal holistic operating room modeling, with the potential to significantly enhance the state of the art in surgical data analysis, such as enabling more efficient and precise decision-making during surgical procedures, and ultimately improving patient safety and surgical outcomes. We release our code and dataset at github.com/egeozsoy/4D-OR.
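At its simplest, a semantic scene graph of this kind is a set of (subject, predicate, object) relations over OR entities that downstream tasks such as role prediction can query. An illustrative sketch, with entity and predicate names made up in the style of 4D-OR annotations rather than taken from the dataset:

```python
from typing import NamedTuple

class Relation(NamedTuple):
    subject: str
    predicate: str
    obj: str

# Hypothetical scene graph for a single frame of a simulated knee replacement.
frame_graph = {
    Relation("patient", "lying_on", "operating_table"),
    Relation("head_surgeon", "cutting", "patient"),
    Relation("assistant", "holding", "drill"),
}

def subjects_acting_on(graph, entity: str) -> list:
    """Entities that appear as the subject of a relation targeting `entity`."""
    return sorted(r.subject for r in graph if r.obj == entity)

print(subjects_acting_on(frame_graph, "patient"))  # ['head_surgeon']
```

A role-prediction task, for example, can exploit the fact that the entity doing the cutting is very likely the head surgeon.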
Affiliation(s)
- Ege Özsoy
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany.
- Tobias Czempiel
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Evin Pınar Örnek
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Ulrich Eck
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Federico Tombari
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
- Google, Zurich, Switzerland
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Garching, Germany
8
Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. PMID: 38667979. PMCID: PMC11050909. DOI: 10.3390/jimaging10040081.
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or a sequence of images to recognize content, has been used extensively across industries in recent years. In the healthcare industry, however, its applications are limited by factors like privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiency while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review the development of CV in hospital, outpatient, and community settings. The recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of workload in the hospital, and monitoring for patient events outside the hospital are highlighted. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for its expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Roshini Raghu
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Charles Busch
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco
- Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
9
Keller S, Jelsma JGM, Tschan F, Sevdalis N, Löllgen RM, Creutzfeldt J, Kennedy-Metz LR, Eppich W, Semmer NK, Van Herzeele I, Härenstam KP, de Bruijne MC. Behavioral sciences applied to acute care teams: a research agenda for the years ahead by a European research network. BMC Health Serv Res 2024; 24:71. PMID: 38218788. PMCID: PMC10788034. DOI: 10.1186/s12913-024-10555-6.
Abstract
BACKGROUND Multi-disciplinary behavioral research on acute care teams has focused on understanding how teams work and on identifying behaviors characteristic of efficient and effective team performance. We aimed to define important knowledge gaps and to establish a research agenda of prioritized research questions for the years ahead in this field of applied health research. METHODS In the first step, high-priority research questions were generated by a small, highly specialized group of 29 experts in the field, recruited from the multinational and multidisciplinary "Behavioral Sciences applied to Acute care teams and Surgery (BSAS)" research network - a cross-European, interdisciplinary network of researchers from the social sciences as well as from the medical field committed to understanding the role of behavioral sciences in the context of acute care teams. A consolidated list of 59 research questions was established. In the second step, 19 experts attending the 2020 BSAS annual conference quantitatively rated the importance of each research question based on four criteria - usefulness, answerability, effectiveness, and translation into practice. In the third step, during half a day of the BSAS conference, the same group of 19 experts discussed the prioritization of the research questions in three online focus group meetings and established recommendations. RESULTS Research priorities identified were categorized into six topics: (1) interventions to improve team process; (2) dealing with and implementing new technologies; (3) understanding and measuring team processes; (4) organizational aspects impacting teamwork; (5) training and health professions education; and (6) organizational and patient safety culture in the healthcare domain. Experts rated the first three topics as particularly relevant in terms of research priorities; the focus groups identified specific research needs within each topic.
CONCLUSIONS Based on research priorities within the BSAS community and the broader field of applied health sciences identified through this work, we advocate for the prioritization for funding in these areas.
Affiliation(s)
- Sandra Keller
- Department of Visceral Surgery and Medicine, Bern University Hospital, Bern, Switzerland.
- Department for BioMedical Research (DBMR), Bern University, Bern, Switzerland.
- Judith G M Jelsma
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Franziska Tschan
- Institute for Work and Organizational Psychology, University of Neuchâtel, Neuchâtel, Switzerland
- Nick Sevdalis
- Centre for Implementation Science, Health Service and Population Research Department, KCL, London, UK
- Ruth M Löllgen
- Pediatric Emergency Department, Astrid Lindgrens Children's Hospital, Karolinska University Hospital, Stockholm, Sweden
- Department of Women's and Children's Health, Karolinska Institute, Stockholm, Sweden
- Johan Creutzfeldt
- Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
- Center for Advanced Medical Simulation and Training (CAMST), Karolinska University Hospital and Karolinska Institutet, Stockholm, Sweden
- Lauren R Kennedy-Metz
- Department of Surgery, Harvard Medical School, Boston, MA, USA
- Division of Cardiac Surgery, VA Boston Healthcare System, Boston, MA, USA
- Psychology Department, Roanoke College, Salem, VA, USA
- Walter Eppich
- Department of Medical Education & Collaborative Practice Centre, University of Melbourne, Melbourne, Australia
- Norbert K Semmer
- Department of Work Psychology, University of Bern, Bern, Switzerland
- Isabelle Van Herzeele
- Department of Thoracic and Vascular Surgery, Ghent University Hospital, Ghent, Belgium
- Karin Pukk Härenstam
- Pediatric Emergency Department, Astrid Lindgrens Children's Hospital, Karolinska University Hospital, Stockholm, Sweden
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden
- Martine C de Bruijne
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| |
10
Hongxia W, Juanjuan G, Han W, Wenlong L, Yasir M, Xiaojing L. An integration of hybrid MCDA framework to the statistical analysis of computer-based health monitoring applications. Front Public Health 2024; 11:1341871. [PMID: 38259786 PMCID: PMC10800702 DOI: 10.3389/fpubh.2023.1341871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2023] [Accepted: 12/12/2023] [Indexed: 01/24/2024] Open
Abstract
The surge in computer-based health surveillance applications, leveraging technologies like big data analytics, artificial intelligence, and the Internet of Things, aims to provide personalized and streamlined medical services. These applications encompass diverse functionalities, from portable health trackers to remote patient monitoring systems, covering aspects such as heart rate tracking, task monitoring, glucose level checking, medication reminders, and sleep pattern assessment. Despite the anticipated benefits, concerns about performance, security, and alignment with healthcare professionals' needs arise with their widespread deployment. This study introduces a Hybrid Multi-Criteria Decision Analysis (MCDA) paradigm, combining the strengths of Additive Ratio Assessment (ARAS) and Analytic Hierarchy Process (AHP), to address the intricate nature of decision-making processes. The method involves selecting and structuring criteria hierarchically, providing a detailed evaluation of application efficacy. Professional stakeholders quantify the relative importance of each criterion through pairwise comparisons, generating criteria weights using AHP. The ARAS methodology then ranks applications based on their performance concerning the weighted criteria. This approach delivers a comprehensive assessment, considering factors like real-time capabilities, surgical services, and other crucial aspects. The research results provide valuable insights for healthcare practitioners, legislators, and technologists, aiding in deciding the adoption and integration of computer-based health monitoring applications, ultimately enhancing medical services and healthcare outcomes.
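The AHP-then-ARAS pipeline described in this abstract can be sketched in a few lines: AHP derives criteria weights from an expert pairwise-comparison matrix, and ARAS ranks alternatives by their weighted utility relative to an ideal alternative. This is a minimal illustration under stated assumptions, not the authors' implementation; the pairwise matrix, criteria, and application ratings below are invented for demonstration.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP criteria weights: the principal right eigenvector of the
    pairwise-comparison matrix, normalised to sum to one."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(vals.real)])
    return np.abs(w) / np.abs(w).sum()

def aras_utility(ratings, weights):
    """ARAS utility degrees for benefit criteria: prepend the ideal
    alternative (column-wise best), apply additive normalisation,
    and compare each weighted row sum against the ideal's."""
    X = np.vstack([ratings.max(axis=0), ratings]).astype(float)
    X = X / X.sum(axis=0)              # additive (sum-to-one) normalisation
    S = (X * weights).sum(axis=1)      # weighted optimality scores
    return S[1:] / S[0]                # utility degree relative to ideal

# Hypothetical example: three monitoring apps scored on three
# benefit criteria (e.g. real-time capability, security, usability).
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 3.0],
                     [1/5, 1/3, 1.0]])   # expert pairwise judgements
w = ahp_weights(pairwise)                # criterion weights from AHP
apps = np.array([[8, 7, 9],
                 [6, 9, 7],
                 [9, 5, 6]])             # rows: apps, cols: criteria
K = aras_utility(apps, w)                # one utility degree per app
ranking = np.argsort(-K)                 # best-ranked app first
```

In practice AHP also checks a consistency ratio on the pairwise matrix, and ARAS distinguishes benefit from cost criteria (inverting the latter before normalisation); both steps are omitted here for brevity.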
Affiliation(s)
- Wang Hongxia
  - Qingdao Municipal Center for Disease Control and Prevention, Qingdao, China
- Guo Juanjuan
  - Affiliated Qingdao Third People's Hospital, Qingdao University, Qingdao, China
- Wang Han
  - Qingdao Municipal Center for Disease Control and Prevention, Qingdao, China
- Lan Wenlong
  - Qingdao Municipal Center for Disease Control and Prevention, Qingdao, China
- Muhammad Yasir
  - College of Oceanography and Space Informatics, China University of Petroleum, Qingdao, China
- Li Xiaojing
  - Qingdao Municipal Center for Disease Control and Prevention, Qingdao, China
11
Eckhoff JA, Rosman G, Altieri MS, Speidel S, Stoyanov D, Anvari M, Maier-Hein L, März K, Jannin P, Pugh C, Wagner M, Witkowski E, Shaw P, Madani A, Ban Y, Ward T, Filicori F, Padoy N, Talamini M, Meireles OR. SAGES consensus recommendations on surgical video data use, structure, and exploration (for research in artificial intelligence, clinical quality improvement, and surgical education). Surg Endosc 2023; 37:8690-8707. [PMID: 37516693 PMCID: PMC10616217 DOI: 10.1007/s00464-023-10288-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Accepted: 07/05/2023] [Indexed: 07/31/2023]
Abstract
BACKGROUND Surgery generates a vast amount of data from each procedure. Particularly video data provides significant value for surgical research, clinical outcome assessment, quality control, and education. The data lifecycle is influenced by various factors, including data structure, acquisition, storage, and sharing; data use and exploration, and finally data governance, which encompasses all ethical and legal regulations associated with the data. There is a universal need among stakeholders in surgical data science to establish standardized frameworks that address all aspects of this lifecycle to ensure data quality and purpose. METHODS Working groups were formed, among 48 representatives from academia and industry, including clinicians, computer scientists and industry representatives. These working groups focused on: Data Use, Data Structure, Data Exploration, and Data Governance. After working group and panel discussions, a modified Delphi process was conducted. RESULTS The resulting Delphi consensus provides conceptualized and structured recommendations for each domain related to surgical video data. We identified the key stakeholders within the data lifecycle and formulated comprehensive, easily understandable, and widely applicable guidelines for data utilization. Standardization of data structure should encompass format and quality, data sources, documentation, metadata, and account for biases within the data. To foster scientific data exploration, datasets should reflect diversity and remain adaptable to future applications. Data governance must be transparent to all stakeholders, addressing legal and ethical considerations surrounding the data. CONCLUSION This consensus presents essential recommendations around the generation of standardized and diverse surgical video databanks, accounting for multiple stakeholders involved in data generation and use throughout its lifecycle. Following the SAGES annotation framework, we lay the foundation for standardization of data use, structure, and exploration. A detailed exploration of requirements for adequate data governance will follow.
Affiliation(s)
- Jennifer A Eckhoff
  - Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
  - Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
- Guy Rosman
  - Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
  - Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Maria S Altieri
  - Stony Brook University Hospital, Washington University in St. Louis, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Stefanie Speidel
  - National Center for Tumor Diseases (NCT), Fiedlerstraße 23, 01307, Dresden, Germany
- Danail Stoyanov
  - University College London, 43-45 Foley Street, London, W1W 7TY, UK
- Mehran Anvari
  - Center for Surgical Invention and Innovation, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Lena Maier-Hein
  - German Cancer Research Center, Deutsches Krebsforschungszentrum (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Keno März
  - German Cancer Research Center, Deutsches Krebsforschungszentrum (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Pierre Jannin
  - MediCIS, University of Rennes - Campus Beaulieu, 2 Av. du Professeur Léon Bernard, 35043, Rennes, France
- Carla Pugh
  - Department of Surgery, Stanford School of Medicine, 291 Campus Drive, Stanford, CA, 94305, USA
- Martin Wagner
  - Department of Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Elan Witkowski
  - Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Paresh Shaw
  - New York University Langone, 530 1st Ave, Floor 12, New York, NY, 10016, USA
- Amin Madani
  - Surgical Artificial Intelligence Research Academy, Department of Surgery, University Health Network, Toronto, ON, Canada
- Yutong Ban
  - Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
  - Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Thomas Ward
  - Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Filippo Filicori
  - Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, New York, NY, USA
- Nicolas Padoy
  - IHU Strasbourg - Institute of Image-Guided Surgery, 1 Pl. de L'Hôpital, 67000, Strasbourg, France
- Mark Talamini
  - Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Ozanan R Meireles
  - Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
12
Pore A, Li Z, Dall'Alba D, Hernansanz A, De Momi E, Menciassi A, Casals Gelpí A, Dankelman J, Fiorini P, Poorten EV. Autonomous Navigation for Robot-Assisted Intraluminal and Endovascular Procedures: A Systematic Review. IEEE T ROBOT 2023; 39:2529-2548. [DOI: 10.1109/tro.2023.3269384] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
Affiliation(s)
- Ameya Pore
  - Department of Computer Science, University of Verona, Verona, Italy
- Zhen Li
  - Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Diego Dall'Alba
  - Department of Computer Science, University of Verona, Verona, Italy
- Albert Hernansanz
  - Center of Research in Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain
- Elena De Momi
  - Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Alicia Casals Gelpí
  - Center of Research in Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain
- Jenny Dankelman
  - Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- Paolo Fiorini
  - Department of Computer Science, University of Verona, Verona, Italy
13
Iqbal J, Jahangir K, Mashkoor Y, Sultana N, Mehmood D, Ashraf M, Iqbal A, Hafeez MH. The future of artificial intelligence in neurosurgery: A narrative review. Surg Neurol Int 2022; 13:536. [DOI: 10.25259/sni_877_2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Accepted: 10/27/2022] [Indexed: 11/19/2022] Open
Abstract
Background:
Artificial intelligence (AI) and machine learning (ML) algorithms are increasingly being incorporated into the field of neurosurgery. AI and ML algorithms differ from other technological advances in that they give computers the capacity to learn, reason, and solve problems much as a human does. This review summarizes the current use of AI in neurosurgery, the challenges that need to be addressed, and what the future holds.
Methods:
A literature review was carried out with a focus on the use of AI in the field of neurosurgery and its future implication in neurosurgical research.
Results:
The literature on the use of AI in neurosurgery covers a diverse range of topics concerning its current and future implications. The main areas being studied are diagnostic, outcome, and treatment models.
Conclusion:
The promise of AI in medicine and neurosurgery holds true, yet many challenges must be addressed before its impact can be realized in neurosurgery, from patient privacy to access to high-quality data and surgeons' overreliance on AI. The future of AI in neurosurgery points toward a patient-centric approach, managing clinical tasks, and helping to diagnose and preoperatively assess patients.
Affiliation(s)
- Javed Iqbal
  - School of Medicine, King Edward Medical University, Lahore, Punjab, Pakistan
- Kainat Jahangir
  - School of Medicine, Dow University of Health Sciences, Karachi, Sindh, Pakistan
- Yusra Mashkoor
  - Department of Internal Medicine, Dow University of Health Sciences, Karachi, Sindh, Pakistan
- Nazia Sultana
  - School of Medicine, Government Medical College, Siddipet, Telangana, India
- Dalia Mehmood
  - Department of Community Medicine, Fatima Jinnah Medical University, Lahore, Punjab, Pakistan
- Mohammad Ashraf
  - Wolfson School of Medicine, University of Glasgow, Scotland, United Kingdom
- Ather Iqbal
  - House Officer, Holy Family Hospital, Rawalpindi, Punjab, Pakistan
14
Irani CSS, Chu CH. Evolving with technology: Machine learning as an opportunity for operating room nurses to improve surgical care-A commentary. J Nurs Manag 2022; 30:3802-3805. [PMID: 35816560 DOI: 10.1111/jonm.13736] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 07/03/2022] [Indexed: 12/30/2022]
Abstract
AIMS To describe machine learning applications in an operating room setting, raise awareness of the lack of nursing inclusion in machine learning algorithm development, and show how operating room nurses can co-create this new technology. BACKGROUND Operating room nurses and managers perform anticipatory work on a daily basis to manage intrinsic and extrinsic factors that can cause surgical delays. EVALUATION Recent literature on machine learning and its potential use in operating room settings was reviewed, along with literature on the role of the nurse in co-creating novel technology. KEY ISSUE Machine learning technology is rapidly evolving and being created for the operating room environment to improve patient safety and flow. Operating room nurses and managers are not being included in the development of machine learning algorithms, meaning products may be created that are not usable for all members of the surgical team. CONCLUSION This commentary highlights the ways machine learning can effectively assist nurses and nursing managers, suggesting a pathway forward for surgical nursing as co-creators and implementers. IMPLICATIONS FOR NURSING MANAGEMENT Nursing managers will be exposed to machine learning programmes in the near future and need to understand the benefits these hold for patient safety and patient flow.
Affiliation(s)
- Cameron S S Irani
  - Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, Toronto, Ontario, Canada
- Charlene H Chu
  - Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, Toronto, Ontario, Canada
  - KITE, Toronto Rehab Institute, University Health Network, Toronto, Ontario, Canada
15
Hartwig R, Berlet M, Czempiel T, Fuchtmann J, Rückert T, Feussner H, Wilhelm D. [Image-based supportive measures for future application in surgery]. CHIRURGIE (HEIDELBERG, GERMANY) 2022; 93:956-965. [PMID: 35737019 DOI: 10.1007/s00104-022-01668-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/02/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND The development of assistive technologies will become increasingly important in the coming years, and not only in surgery. Comprehensive perception of the current situation is the basis of every autonomous action. Different sensor systems can be used for this purpose, of which video-based systems hold particular potential. METHOD Based on the available literature and on our own research projects, central aspects of image-based support systems for surgery are presented. In this context, not only the potential but also the limitations of the methods are explained. RESULTS An established application is phase detection in surgical interventions, in which surgical videos are analyzed using neural networks. Through temporal, transformer-based analysis, prediction results have recently been improved significantly. Robotic camera guidance systems will also use image data to autonomously navigate laparoscopes in the near future. The reliability of these systems must be raised to the high requirements of surgery by means of additional information. A comparable multimodal approach has already been implemented for navigation and localization during laparoscopic procedures: video data are analyzed using various methods and fused with other sensor modalities. DISCUSSION Image-based support methods are already available for various tasks and will become an important aspect of the surgery of the future; however, to be reliably deployed for autonomous functions, they must be embedded in multimodal approaches that provide the necessary safety.
Affiliation(s)
- R Hartwig
  - Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- M Berlet
  - Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
  - Fakultät für Medizin, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- T Czempiel
  - Computer Aided Medical Procedures, Technische Universität München, München, Deutschland
- J Fuchtmann
  - Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- T Rückert
  - Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Deutschland
- H Feussner
  - Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- D Wilhelm
  - Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
  - Fakultät für Medizin, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
16
Gendia A. Cloud Based AI-Driven Video Analytics (CAVs) in Laparoscopic Surgery: A Step Closer to a Virtual Portfolio. Cureus 2022; 14:e29087. [PMID: 36259009 PMCID: PMC9559410 DOI: 10.7759/cureus.29087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/12/2022] [Indexed: 11/17/2022] Open
Abstract
Aims: To outline the use of cloud-based artificial intelligence (AI)-driven video analytics (CAVs) in minimally invasive surgery and to propose their potential as a virtual portfolio for trainee and established surgeons. Methods: An independent online demonstration was requested from three platforms, namely Theator (Palo Alto, California, USA), Touch Surgery™ (Medtronic, London, England, UK), and C-SATS® (Seattle, Washington, USA). The assessed domains were online and app-based accessibility, the ability for timely trainee feedback, and AI integration for operation-specific steps and critical views. Results: CAVs enable users to record surgeries with the advantage of limitless video storage in the cloud and smart integration into theatre settings. Recorded surgeries can be viewed and trainee videos reviewed through built-in communication and sharing features that allow feedback to be given. Theator and C-SATS® provide their users with customizable surgical skills scoring systems that can be used to give trainees structured feedback. Additionally, AI plays an important role in all three platforms by providing time-based analysis of steps and highlighting critical milestones. Conclusion: Cloud-based AI-driven video analytics is an emerging technology that enables users to store, analyze, and review videos. This technology has the potential to improve training, governance, and standardization procedures. Moreover, with future adoption of the technology, CAVs could be integrated into trainees' portfolios as part of their virtual curriculum. This would enable a structured assessment of a surgeon's progression and degree of experience throughout their surgical career.
17
Quero G, Mascagni P, Kolbinger FR, Fiorillo C, De Sio D, Longo F, Schena CA, Laterza V, Rosa F, Menghi R, Papa V, Tondolo V, Cina C, Distler M, Weitz J, Speidel S, Padoy N, Alfieri S. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers (Basel) 2022; 14:3803. [PMID: 35954466 PMCID: PMC9367568 DOI: 10.3390/cancers14153803] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 07/29/2022] [Accepted: 08/03/2022] [Indexed: 02/05/2023] Open
Abstract
Artificial intelligence (AI) and computer vision (CV) are beginning to impact medicine. While evidence on the clinical value of AI-based solutions for the screening and staging of colorectal cancer (CRC) is mounting, CV and AI applications to enhance the surgical treatment of CRC are still in their early stage. This manuscript introduces key AI concepts to a surgical audience, illustrates fundamental steps to develop CV for surgical applications, and provides a comprehensive overview on the state-of-the-art of AI applications for the treatment of CRC. Notably, studies show that AI can be trained to automatically recognize surgical phases and actions with high accuracy even in complex colorectal procedures such as transanal total mesorectal excision (TaTME). In addition, AI models were trained to interpret fluorescent signals and recognize correct dissection planes during total mesorectal excision (TME), suggesting CV as a potentially valuable tool for intraoperative decision-making and guidance. Finally, AI could have a role in surgical training, providing automatic surgical skills assessment in the operating room. While promising, these proofs of concept require further development, validation in multi-institutional data, and clinical studies to confirm AI as a valuable tool to enhance CRC treatment.
Affiliation(s)
- Giuseppe Quero
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Pietro Mascagni
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
  - Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
- Fiona R. Kolbinger
  - Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Claudio Fiorillo
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Davide De Sio
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Fabio Longo
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Carlo Alberto Schena
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vito Laterza
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Fausto Rosa
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Roberta Menghi
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Valerio Papa
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vincenzo Tondolo
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Caterina Cina
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Marius Distler
  - Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Juergen Weitz
  - Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Stefanie Speidel
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, 01307 Dresden, Germany
- Nicolas Padoy
  - Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
  - ICube, Centre National de la Recherche Scientifique (CNRS), University of Strasbourg, 67000 Strasbourg, France
- Sergio Alfieri
  - Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
  - Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
18
Mumtaz H, Saqib M, Ansar F, Zargar D, Hameed M, Hasan M, Muskan P. The future of Cardiothoracic surgery in Artificial intelligence. Ann Med Surg (Lond) 2022; 80:104251. [PMID: 36045824 PMCID: PMC9422274 DOI: 10.1016/j.amsu.2022.104251] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 07/19/2022] [Accepted: 07/20/2022] [Indexed: 12/23/2022] Open
Abstract
The great and rapid technological breakthroughs of the previous decade have undoubtedly influenced how surgical procedures are executed in the operating room. AI is becoming incredibly influential in surgical decision-making, helping surgeons make better projections about the implications of surgical operations by considering different sources of data such as patient health conditions, disease natural history, patient values, and finance. Although the application of artificial intelligence in healthcare settings is rapidly increasing, its mainstream application in clinical practice remains limited. The use of machine learning algorithms in thoracic surgery is extensive, spanning different clinical stages. By leveraging techniques such as machine learning, computer vision, and robotics, AI may play a key role in diagnostic augmentation, operative management, pre- and post-surgical patient management, and upholding safety standards. AI, particularly in complex procedures such as cardiothoracic surgery, may significantly help surgeons execute more intricate operations with greater success and fewer complications while ensuring patient safety, and it also provides resources for robust research and better dissemination of knowledge. In this paper, we present an overview of AI applications in thoracic surgery and its related components, including contemporary projects and technology that use AI in cardiothoracic surgery and general care. We also discuss the future of AI, how high-tech operating rooms will use human-machine collaboration to improve performance and patient safety, and its future directions and limitations. It is vital for surgeons to stay acquainted with the latest technological advances in AI in order to grasp this technology and integrate it easily into clinical practice when it becomes accessible.
19
Dias RD, Kennedy-Metz LR, Yule SJ, Gombolay M, Zenati MA. Assessing Team Situational Awareness in the Operating Room via Computer Vision. IEEE CONFERENCE ON COGNITIVE AND COMPUTATIONAL ASPECTS OF SITUATION MANAGEMENT (COGSIMA) 2022; 2022:94-96. [PMID: 35994041 PMCID: PMC9386571 DOI: 10.1109/cogsima54611.2022.9830664] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Situational awareness (SA) at both individual and team levels, plays a critical role in the operating room (OR). During the pre-incision time-out, the entire OR team comes together to deploy the surgical safety checklist (SSC). Worldwide, the implementation of the SSC has been shown to reduce intraoperative complications and mortality among surgical patients. In this study, we investigated the feasibility of applying computer vision analysis on surgical videos to extract team motion metrics that could differentiate teams with good SA from those with poor SA during the pre-incision time-out. We used a validated observation-based tool to assess SA, and a computer vision software to measure body position and motion patterns in the OR. Our findings showed that it is feasible to extract surgical team motion metrics captured via off-the-shelf OR cameras. Entropy as a measure of the level of team organization was able to distinguish surgical teams with good and poor SA. These findings corroborate existing studies showing that computer vision-based motion metrics have the potential to integrate traditional observation-based performance assessments in the OR.
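As a rough illustration of the entropy idea in this abstract, tracked staff positions can be discretised onto a floor grid and Shannon entropy computed over cell occupancy; lower entropy then corresponds to a more clustered, organised team. This is a toy sketch under stated assumptions, not the authors' metric: the grid size and the coordinates below are invented for demonstration.

```python
import math
from collections import Counter

def team_entropy(positions, cell=50):
    """Shannon entropy (bits) of body positions discretised onto a
    square grid of `cell`-sized bins; lower values mean the team
    occupies fewer distinct cells, i.e. is more clustered."""
    bins = Counter((int(x // cell), int(y // cell)) for x, y in positions)
    n = sum(bins.values())
    return -sum(c / n * math.log2(c / n) for c in bins.values())

# Hypothetical single-frame examples: a tight pre-incision time-out
# huddle vs. staff scattered across the room (pixel coordinates).
huddle = [(100, 100), (110, 105), (95, 98), (105, 112)]
scattered = [(10, 10), (300, 40), (150, 400), (480, 260)]
```

In practice such a metric would be computed per video frame from pose-tracking output and aggregated over the time-out window; the cell size trades off spatial resolution against tracking noise.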
Affiliation(s)
- Roger D Dias
  - Department of Emergency Medicine, Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Lauren R Kennedy-Metz
  - Department of Surgery, Harvard Medical School, VA Boston Healthcare System, West Roxbury, MA, USA
- Steven J Yule
  - Department of Clinical Surgery, University of Edinburgh, Edinburgh, Scotland
- Matthew Gombolay
  - College of Computing, Georgia Institute of Technology, Atlanta, GA, USA
- Marco A Zenati
  - Department of Surgery, Harvard Medical School, VA Boston Healthcare System, West Roxbury, MA, USA
20
Loftus TJ, Vlaar APJ, Hung AJ, Bihorac A, Dennis BM, Juillard C, Hashimoto DA, Kaafarani HMA, Tighe PJ, Kuo PC, Miyashita S, Wexner SD, Behrns KE. Executive summary of the artificial intelligence in surgery series. Surgery 2022; 171:1435-1439. [PMID: 34815097 PMCID: PMC9379376 DOI: 10.1016/j.surg.2021.10.047] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 10/19/2021] [Accepted: 10/22/2021] [Indexed: 12/17/2022]
Abstract
As opportunities for artificial intelligence to augment surgical care expand, the accompanying surge in published literature has generated both substantial enthusiasm and grave concern regarding the safety and efficacy of artificial intelligence in surgery. For surgeons and surgical data scientists, it is increasingly important to understand the state-of-the-art, recognize knowledge and technology gaps, and critically evaluate the deluge of literature accordingly. This article summarizes the experiences and perspectives of a global, multi-disciplinary group of experts who have faced development and implementation challenges, overcome them, and produced incipient evidence thereof. Collectively, evidence suggests that artificial intelligence has the potential to augment surgeons via decision-support, technical skill assessment, and the semi-autonomous performance of tasks ranging from resource allocation to patching foregut defects. Most applications remain in preclinical phases. As technologies and their implementations improve and positive evidence accumulates, surgeons will face professional imperatives to lead the safe, effective clinical implementation of artificial intelligence in surgery. Substantial challenges remain; recent progress in using artificial intelligence to achieve performance advantages in surgery suggests that remaining challenges can and will be overcome.
Affiliation(s)
- Tyler J Loftus, Department of Surgery, University of Florida Health, Gainesville, FL
- Alexander P J Vlaar, Amsterdam UMC, location AMC, University of Amsterdam, Department of Intensive Care, Amsterdam, Netherlands
- Andrew J Hung, Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA
- Azra Bihorac, Department of Medicine, University of Florida Health, Gainesville, FL
- Bradley M Dennis, Division of Trauma, Surgical Critical Care and Emergency General Surgery, Department of Surgery, Vanderbilt University Medical Center, Nashville, TN
- Catherine Juillard, Department of Surgery, University of California, Los Angeles, Los Angeles, CA
- Daniel A Hashimoto, Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Haytham M A Kaafarani, Division of Trauma, Emergency Surgery & Surgical Critical Care, Department of Surgery, Massachusetts General Hospital, Boston, MA
- Patrick J Tighe, Department of Anesthesiology, University of Florida College of Medicine, Gainesville, FL
- Paul C Kuo, Department of General Surgery, University of South Florida Morsani College of Medicine, Tampa, FL
- Shuhei Miyashita, Department of Automatic Control and Systems Engineering, University of Sheffield, UK
21
Gumbs AA, Frigerio I, Spolverato G, Croner R, Illanes A, Chouillard E, Elyan E. Artificial Intelligence Surgery: How Do We Get to Autonomous Actions in Surgery? SENSORS (BASEL, SWITZERLAND) 2021; 21:5526. [PMID: 34450976 PMCID: PMC8400539 DOI: 10.3390/s21165526] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 08/03/2021] [Accepted: 08/11/2021] [Indexed: 12/30/2022]
Abstract
Most surgeons are skeptical as to the feasibility of autonomous actions in surgery. Interestingly, many examples of autonomous actions already exist and have been around for years. Since the beginning of this millennium, the field of artificial intelligence (AI) has grown exponentially with the development of machine learning (ML), deep learning (DL), computer vision (CV) and natural language processing (NLP). All of these facets of AI will be fundamental to the development of more autonomous actions in surgery; unfortunately, only a limited number of surgeons have or seek expertise in this rapidly evolving field. As opposed to AI in medicine, AI surgery (AIS) involves autonomous movements. Fortuitously, as the field of robotics in surgery has improved, more surgeons are becoming interested in technology and the potential of autonomous actions in procedures such as interventional radiology, endoscopy and surgery. The lack of haptics, or the sensation of touch, has hindered the wider adoption of robotics by many surgeons; however, now that the true potential of robotics can be comprehended, the embracing of AI by the surgical community is more important than ever before. Although current complete surgical systems are mainly examples of tele-manipulation, haptics is perhaps not the most important aspect of getting to more autonomously functioning robots. If the goal is for robots to ultimately become more and more independent, perhaps research should focus not on haptics as perceived by humans, but on haptics as perceived by robots/computers. This article discusses aspects of ML, DL, CV and NLP as they pertain to the modern practice of surgery, with a focus on current AI issues and advances that will enable more autonomous actions in surgery. Ultimately, a paradigm shift may need to occur in the surgical community, as more surgeons with expertise in AI may be needed to fully unlock the potential of AIS in a safe, efficacious and timely manner.
Affiliation(s)
- Andrew A. Gumbs, Centre Hospitalier Intercommunal de POISSY/SAINT-GERMAIN-EN-LAYE 10, Rue Champ de Gaillard, 78300 Poissy, France
- Isabella Frigerio, Department of Hepato-Pancreato-Biliary Surgery, Pederzoli Hospital, 37019 Peschiera del Garda, Italy
- Gaya Spolverato, Department of Surgical, Oncological and Gastroenterological Sciences, University of Padova, 35122 Padova, Italy
- Roland Croner, Department of General-, Visceral-, Vascular- and Transplantation Surgery, University of Magdeburg, Haus 60a, Leipziger Str. 44, 39120 Magdeburg, Germany
- Alfredo Illanes, INKA–Innovation Laboratory for Image Guided Therapy, Medical Faculty, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Elie Chouillard, Centre Hospitalier Intercommunal de POISSY/SAINT-GERMAIN-EN-LAYE 10, Rue Champ de Gaillard, 78300 Poissy, France
- Eyad Elyan, School of Computing, Robert Gordon University, Aberdeen AB10 7JG, UK
22
Seo S, Kennedy-Metz LR, Zenati MA, Shah JA, Dias RD, Unhelkar VV. Towards an AI Coach to Infer Team Mental Model Alignment in Healthcare. IEEE CONFERENCE ON COGNITIVE AND COMPUTATIONAL ASPECTS OF SITUATION MANAGEMENT (COGSIMA) 2021; 2021:39-44. [PMID: 35253018 DOI: 10.1109/cogsima51574.2021.9475925] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Shared mental models are critical to team success; however, in practice, team members may have misaligned models due to a variety of factors. In safety-critical domains (e.g., aviation, healthcare), lack of shared mental models can lead to preventable errors and harm. Towards the goal of mitigating such preventable errors, here, we present a Bayesian approach to infer misalignment in team members' mental models during complex healthcare task execution. As an exemplary application, we demonstrate our approach using two simulated team-based scenarios, derived from actual teamwork in cardiac surgery. In these simulated experiments, our approach inferred model misalignment with over 75% recall, thereby providing a building block for enabling computer-assisted interventions to augment human cognition in the operating room and improve teamwork.
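The Bayesian update at the core of this abstract can be sketched as a simple likelihood-ratio calculation. This is an illustrative toy, not the authors' model: the function name `misalignment_posterior`, the action labels, the prior, and the likelihood values are all hypothetical, and the real system infers alignment over richer task-execution observations.

```python
def misalignment_posterior(obs, p_obs_aligned, p_obs_misaligned, prior_misaligned=0.2):
    """Posterior probability that a teammate's mental model is misaligned,
    after a sequence of observed actions (Bayes' rule, observations
    treated as conditionally independent given the hypothesis).

    obs: list of observed action labels
    p_obs_aligned / p_obs_misaligned: dicts mapping action -> likelihood
    under the aligned and misaligned hypotheses, respectively.
    """
    p_mis = prior_misaligned
    p_ali = 1.0 - prior_misaligned
    for a in obs:
        # Accumulate the joint probability of the evidence under each hypothesis.
        p_mis *= p_obs_misaligned[a]
        p_ali *= p_obs_aligned[a]
    # Normalize to get the posterior for the misaligned hypothesis.
    return p_mis / (p_mis + p_ali)

# Hypothetical likelihoods: deviations from the expected workflow are
# much likelier when mental models are misaligned.
p_aligned = {'expected': 0.9, 'deviate': 0.1}
p_misaligned = {'expected': 0.4, 'deviate': 0.6}
post = misalignment_posterior(['deviate', 'deviate', 'expected'], p_aligned, p_misaligned)
```

With these toy numbers, two observed deviations outweigh the low prior and push the posterior to about 0.8; in the paper, flagging such high-posterior moments is what would let an AI coach prompt a re-alignment intervention.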
Affiliation(s)
- Sangwon Seo, Department of Computer Science, Rice University, Houston, TX, USA
- Marco A Zenati, Harvard Medical School and U.S. Dept. of Veterans Affairs, Boston, MA, USA
- Julie A Shah, Massachusetts Institute of Technology, Cambridge, MA, USA
- Roger D Dias, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA