1. Yangi K, On TJ, Xu Y, Gholami AS, Hong J, Reed AG, Puppalla P, Chen J, Tangsrivimol JA, Li B, Santello M, Lawton MT, Preul MC. Artificial intelligence integration in surgery through hand and instrument tracking: a systematic literature review. Front Surg 2025; 12:1528362. PMID: 40078701; PMCID: PMC11897506; DOI: 10.3389/fsurg.2025.1528362.
Abstract
Objective: This systematic literature review provides an overview of recent advancements in the integration of artificial intelligence (AI) into surgical practice through hand and instrument tracking and analyzes the current literature at the intersection of surgery and AI, examining distinct AI algorithms and their specific applications in surgical practice.
Methods: An advanced search using medical subject heading terms was conducted in the Medline (via PubMed), SCOPUS, and Embase databases for articles published in English. A strict selection process adhering to PRISMA guidelines was applied.
Results: A total of 225 articles were retrieved; after screening, 77 met the inclusion criteria and were included in the review. The use of AI algorithms in surgical practice was uncommon during 2013-2017 but has gained significant popularity since 2018. Deep learning algorithms (n = 62) are increasingly preferred over traditional machine learning algorithms (n = 15). These technologies are used in surgical fields such as general surgery (n = 19), neurosurgery (n = 10), and ophthalmology (n = 9). The most common sensors and systems were prerecorded videos (n = 29), cameras (n = 21), and image datasets (n = 7). The most common applications included laparoscopic (n = 13), robotic-assisted (n = 13), basic (n = 12), and endoscopic (n = 8) surgical skills training, as well as surgical simulation training (n = 8).
Conclusion: AI technologies can be tailored to address distinct needs in surgical education and patient care. The use of AI in hand and instrument tracking improves surgical outcomes by optimizing surgical skills training. It is essential to acknowledge the current technical and social limitations of AI and to work toward filling those gaps in future studies.
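For readers unfamiliar with camera-based tracking, the sketch below shows the kind of hand-landmark extraction that camera- and video-driven systems of this kind build on. It is a minimal illustration using the off-the-shelf MediaPipe Hands library, not a pipeline from any of the reviewed studies; the video filename is hypothetical.

```python
# Minimal sketch: hand-landmark tracking on a prerecorded surgical video.
# Assumes OpenCV and MediaPipe are installed; the video path is hypothetical.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def track_hands(video_path: str):
    """Yield per-frame lists of 21 (x, y) hand landmarks, normalized to [0, 1]."""
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            landmarks = []
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    landmarks.append([(lm.x, lm.y) for lm in hand.landmark])
            yield landmarks
    cap.release()

for frame_landmarks in track_hands("trainee_suturing.mp4"):
    pass  # downstream: smooth trajectories, compute motion metrics, classify skill
```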
Affiliation(s)
- Kivanc Yangi
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Thomas J. On
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Yuan Xu
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Arianna S. Gholami
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Jinpyo Hong
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Alexander G. Reed
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Pravarakhya Puppalla
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Jiuxu Chen
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
  - School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, United States
- Jonathan A. Tangsrivimol
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Baoxin Li
  - School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, United States
- Marco Santello
  - School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
- Michael T. Lawton
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
- Mark C. Preul
  - The Loyal and Edith Davis Neurosurgical Research Laboratory, Department of Neurosurgery, Barrow Neurological Institute, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, United States
2. Carciumaru TZ, Tang CM, Farsi M, Bramer WM, Dankelman J, Raman C, Dirven CMF, Gholinejad M, Vasilic D. Systematic review of machine learning applications using nonoptical motion tracking in surgery. NPJ Digit Med 2025; 8:28. PMID: 39809851; PMCID: PMC11733004; DOI: 10.1038/s41746-024-01412-1.
Abstract
This systematic review explores machine learning (ML) applications in surgical motion analysis using non-optical motion tracking systems (NOMTS), alone or combined with optical methods. It investigates objectives, experimental designs, model effectiveness, and future research directions. From 3632 records, 84 studies were included, with artificial neural networks (38%) and support vector machines (11%) the most common ML models. Skill assessment was the primary objective (38%). The NOMTS used included internal device kinematics (56%), electromagnetic (17%), inertial (15%), mechanical (11%), and electromyography (1%) sensors. Surgical settings were robotic (60%), laparoscopic (18%), open (16%), and other (6%). Procedures focused on bench-top tasks (67%), clinical models (17%), clinical simulations (9%), and non-clinical simulations (7%). Accuracy above 90% was achieved in 36% of studies. The literature shows that NOMTS and ML can enhance surgical precision, assessment, and training. Future research should advance ML in surgical environments, ensure model interpretability and reproducibility, and use larger datasets for accurate evaluation.
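As a concrete illustration of the review's most common pairing, a support vector machine assessing skill from kinematic sensor data, here is a minimal sketch. The feature set, sampling rate, synthetic trajectories, and labels are assumptions for demonstration, not taken from any included study.

```python
# Minimal sketch: summary kinematic features from a nonoptical tracker fed to
# a Support Vector Machine for binary skill classification (novice vs expert).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def kinematic_features(positions: np.ndarray, dt: float) -> np.ndarray:
    """positions: (T, 3) tool-tip trajectory from, e.g., an electromagnetic sensor."""
    velocity = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)
    path_length = speed.sum() * dt
    jerk = np.diff(velocity, n=2, axis=0) / dt**2  # crude smoothness proxy
    return np.array([path_length, speed.mean(), speed.std(),
                     np.linalg.norm(jerk, axis=1).mean()])

# X: one feature row per trial; y: 0 = novice, 1 = expert (labels assumed here).
rng = np.random.default_rng(0)
trials = [rng.standard_normal((500, 3)).cumsum(axis=0) for _ in range(40)]
X = np.stack([kinematic_features(p, dt=0.01) for p in trials])
y = np.array([0, 1] * 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```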
Affiliation(s)
- Teona Z Carciumaru
  - Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
  - Department of Neurosurgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Cadey M Tang
  - Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Mohsen Farsi
  - Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Wichor M Bramer
  - Medical Library, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Jenny Dankelman
  - Department of Biomechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Chirag Raman
  - Department of Pattern Recognition and Bioinformatics, Delft University of Technology, Delft, the Netherlands
- Clemens M F Dirven
  - Department of Neurosurgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Maryam Gholinejad
  - Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
  - Department of Biomechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Dalibor Vasilic
  - Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
3. Sang T, Yu F, Zhao J, Wu B, Ding X, Shen C. A novel deep learning method to segment parathyroid glands on intraoperative videos of thyroid surgery. Front Surg 2024; 11:1370017. PMID: 38708363; PMCID: PMC11066234; DOI: 10.3389/fsurg.2024.1370017.
Abstract
Introduction: The utilization of artificial intelligence (AI) augments intraoperative safety and surgical training. Recognition of parathyroid glands (PGs) is difficult for inexperienced surgeons. The aim of this study was to determine whether deep learning could be used to assist in the identification of PGs on intraoperative videos in patients undergoing thyroid surgery.
Methods: In this retrospective study, 50 patients undergoing thyroid surgery between 2021 and 2023 were randomly assigned (7:3 ratio) to a training cohort (n = 35) and a validation cohort (n = 15). The combined datasets included 98 videos with 9,944 annotated frames. An independent test cohort included 15 videos (1,500 frames) from an additional 15 patients. We developed a deep-learning model, Video-Trans-U-HRNet, to segment parathyroid glands in surgical videos, comparing it with three advanced medical AI methods on the internal validation cohort. Additionally, we assessed its performance against four surgeons (2 senior and 2 junior) on the independent test cohort, calculating precision and recall metrics for the model.
Results: Our model demonstrated superior performance compared with the other AI models on the internal validation cohort. The DICE and accuracy achieved by our model were 0.760 and 74.7%, respectively, surpassing Video-TransUnet (0.710, 70.1%), Video-SwinUnet (0.754, 73.6%), and TransUnet (0.705, 69.4%). On the external test, our method achieved 89.5% precision, 77.3% recall, and 70.8% accuracy. In the statistical analysis, our model demonstrated results comparable to those of the senior surgeons (senior surgeon 1: χ2 = 0.989, p = 0.320; senior surgeon 2: χ2 = 1.373, p = 0.241) and outperformed the 2 junior surgeons (junior surgeon 1: χ2 = 3.889, p = 0.048; junior surgeon 2: χ2 = 4.763, p = 0.029).
Discussion: We introduce an innovative method for identifying PGs in intraoperative video, highlighting the potential of AI in the surgical domain. The segmentation method offers surgeons supplementary guidance in locating real PGs and may facilitate training and decrease the learning curve associated with this technology.
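For reference, the DICE, precision, and recall figures quoted above are standard overlap metrics on binary masks; a minimal sketch of how they are typically computed follows (the paper's exact evaluation protocol may differ).

```python
# Minimal sketch: DICE, precision, and recall over binary segmentation masks.
import numpy as np

def dice_precision_recall(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """pred, truth: boolean masks of the same shape (e.g., one video frame)."""
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, precision, recall

# Toy example: two partially overlapping square masks.
pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[15:45, 15:45] = True
print(dice_precision_recall(pred, truth))
```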
Affiliation(s)
- Tian Sang
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Fan Yu
  - Department of Nuclear Medicine, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Junjuan Zhao
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Bo Wu
  - Department of Thyroid, Breast and Hernia Surgery, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xuehai Ding
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Chentian Shen
  - Department of Nuclear Medicine, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
4. Ghamsarian N, El-Shabrawi Y, Nasirihaghighi S, Putzgruber-Adamitsch D, Zinkernagel M, Wolf S, Schoeffmann K, Sznitman R. Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos. Sci Data 2024; 11:373. PMID: 38609405; PMCID: PMC11014927; DOI: 10.1038/s41597-024-03193-4.
Abstract
In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progress of deep-learning-powered surgical technologies depends profoundly on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset, addressing diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of the annotations by benchmarking several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available on Synapse.
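As an illustration of how phase-recognition benchmarks on such a dataset are typically scored, here is a minimal frame-level accuracy sketch. The CSV column names are hypothetical and are not taken from the actual Cataract-1K release.

```python
# Minimal sketch: frame-level phase-recognition accuracy from an annotation
# file with hypothetical columns (frame, true_phase, pred_phase).
import csv
from collections import Counter

def phase_accuracy(csv_path: str):
    """Return overall frame accuracy and a per-phase accuracy breakdown."""
    correct = Counter()
    total = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            phase = row["true_phase"]
            total[phase] += 1
            correct[phase] += row["true_phase"] == row["pred_phase"]
    per_phase = {p: correct[p] / total[p] for p in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_phase
```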
Affiliation(s)
- Negin Ghamsarian
  - Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
- Yosuf El-Shabrawi
  - Department of Ophthalmology, Klinikum Klagenfurt, Klagenfurt, Austria
- Sahar Nasirihaghighi
  - Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Sebastian Wolf
  - Department of Ophthalmology, Inselspital, Bern, Switzerland
- Klaus Schoeffmann
  - Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Raphael Sznitman
  - Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
5. El-Sayed C, Yiu A, Burke J, Vaughan-Shaw P, Todd J, Lin P, Kasmani Z, Munsch C, Rooshenas L, Campbell M, Bach SP. Measures of performance and proficiency in robotic assisted surgery: a systematic review. J Robot Surg 2024; 18:16. PMID: 38217749; DOI: 10.1007/s11701-023-01756-y.
Abstract
Robotic assisted surgery (RAS) has seen a global rise in adoption. Despite this, there is neither a standardised training curriculum nor a standardised measure of performance. We performed a systematic review across the surgical specialties in RAS and evaluated the tools used to assess surgeons' technical performance. Following the PRISMA 2020 guidelines, PubMed, Embase, and the Cochrane Library were searched systematically for full texts published between January 2020 and January 2022. Observational studies and RCTs were included; review articles and systematic reviews were excluded. Study quality and risk of bias were assessed using the Newcastle-Ottawa Scale for the observational studies and the Cochrane risk-of-bias tool for the RCTs. The initial search yielded 1189 papers, of which 72 met the eligibility criteria. Twenty-seven unique performance metrics were identified. Global assessments were the most common assessment tool (n = 13); the most used was the Global Evaluative Assessment of Robotic Skills (GEARS). Eleven metrics (42%) were objective measures of performance; automated performance metrics (APMs) were the most widely used objective metrics, while the remaining 15 (58%) were subjective. The results demonstrate variation in the tools used to assess technical performance in RAS. A large proportion of the metrics are subjective measures, which increases the risk of bias among users. A standardised objective metric that measures all domains of technical performance, from global to cognitive, is required; it should be applicable to all RAS procedures and easily implementable. APMs have demonstrated promise as widely applicable, accurate measures.
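To make the contrast with subjective scales such as GEARS concrete, the sketch below computes three automated performance metrics commonly reported for RAS: completion time, path length, and economy of motion. The data layout is an assumption for illustration, not tied to any specific robotic platform.

```python
# Minimal sketch: automated performance metrics (APMs) from end-effector
# kinematics, the objective alternative to rater-based scales like GEARS.
import numpy as np

def automated_metrics(positions: np.ndarray, hz: float) -> dict:
    """positions: (T, 3) end-effector positions sampled at `hz` Hz."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    path_length = steps.sum()
    straight_line = np.linalg.norm(positions[-1] - positions[0])
    return {
        "completion_time_s": len(positions) / hz,
        "path_length": path_length,
        # Ratio of shortest possible path to the path actually taken (<= 1).
        "economy_of_motion": straight_line / path_length if path_length else 0.0,
    }
```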
Affiliation(s)
- Charlotte El-Sayed
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- A Yiu
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- J Burke
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- P Vaughan-Shaw
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- J Todd
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- P Lin
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- Z Kasmani
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- C Munsch
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- L Rooshenas
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- M Campbell
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- S P Bach
  - RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
6. AIM in Neurology. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_189.
7. Motaharifar M, Norouzzadeh A, Abdi P, Iranfar A, Lotfi F, Moshiri B, Lashay A, Mohammadi SF, Taghirad HD. Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic. Front Robot AI 2021; 8:612949. PMID: 34476241; PMCID: PMC8407078; DOI: 10.3389/frobt.2021.612949.
Abstract
This paper examines how haptic technology, virtual reality, and artificial intelligence help reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by a trainee during the education process can lead to undesired complications for the patient. Teaching medical skills to trainees has therefore always been a challenge for expert surgeons, and it is even more challenging during a pandemic. The current method of surgical training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Because this method of medical training requires physical contact, those involved, including novice and expert surgeons, face a potential risk of viral infection. This survey reviews recent technological breakthroughs, along with new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutions during the COVID-19 pandemic and similar crises.
Affiliation(s)
- Mohammad Motaharifar
  - Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
  - Department of Electrical Engineering, University of Isfahan, Isfahan, Iran
- Alireza Norouzzadeh
  - Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Parisa Abdi
  - Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Arash Iranfar
  - School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
- Faraz Lotfi
  - Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Behzad Moshiri
  - School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
  - Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Alireza Lashay
  - Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Farzad Mohammadi
  - Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid D. Taghirad
  - Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
8. Liu Z, Petersen L, Zhang Z, Singapogu R. A Method for Segmenting the Process of Needle Insertion during Simulated Cannulation using Sensor Data. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:6090-6094. PMID: 33019360; DOI: 10.1109/embc44109.2020.9176158.
Abstract
Cannulation is a routine yet challenging medical procedure with a direct impact on patient outcomes. While current training programs provide guidelines for learning this complex procedure, the lack of objective and quantitative feedback impedes learning this skill effectively. In this paper, we present a simulator for performing hemodialysis cannulation that captures the process using multiple sensing modalities, providing a multi-faceted assessment of cannulation. Further, we describe an algorithm for segmenting the cannulation process using specific events in the sensor data for detailed analysis. Results from three participants with varying levels of clinical cannulation expertise are presented, along with a metric that successfully differentiates the three participants. This work could lead to sensor-based cannulation skill assessment and training in the future, potentially resulting in improved patient outcomes.
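As an illustration of event-based segmentation of the kind the paper describes, the sketch below splits a needle-force trace at a contact threshold. The threshold value and synthetic signal are assumptions for demonstration, not the authors' parameters.

```python
# Minimal sketch: segment a cannulation trial into pre-insertion, insertion,
# and post-insertion phases by finding where force crosses a contact threshold.
import numpy as np

def segment_insertion(force: np.ndarray, threshold: float = 0.5):
    """Return (start, end) sample indices of the insertion phase, or None."""
    idx = np.flatnonzero(force > threshold)
    if idx.size == 0:
        return None  # needle never contacted the phantom
    return int(idx[0]), int(idx[-1])

# Toy signal: quiet approach, a force plateau during insertion, then release.
t = np.linspace(0, 10, 1000)
force = np.where((t > 3) & (t < 7), 1.2, 0.05) + 0.02 * np.random.randn(t.size)
print(segment_insertion(force))
```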
9. Zhang D, Xiao B, Huang B, Zhang L, Liu J, Yang GZ. A Self-Adaptive Motion Scaling Framework for Surgical Robot Remote Control. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2018.2890200.