1. Carciumaru TZ, Tang CM, Farsi M, Bramer WM, Dankelman J, Raman C, Dirven CMF, Gholinejad M, Vasilic D. Systematic review of machine learning applications using nonoptical motion tracking in surgery. NPJ Digit Med 2025; 8:28. [PMID: 39809851] [PMCID: PMC11733004] [DOI: 10.1038/s41746-024-01412-1]
Abstract
This systematic review explores machine learning (ML) applications in surgical motion analysis using non-optical motion tracking systems (NOMTS), alone or combined with optical methods. It investigates objectives, experimental designs, model effectiveness, and future research directions. From 3632 records, 84 studies were included, with Artificial Neural Networks (38%) and Support Vector Machines (11%) being the most common ML models. Skill assessment was the primary objective (38%). The NOMTS used included internal device kinematics (56%), electromagnetic (17%), inertial (15%), mechanical (11%), and electromyography (1%) sensors. Surgical settings were robotic (60%), laparoscopic (18%), open (16%), and other (6%). Procedures focused on bench-top tasks (67%), clinical models (17%), clinical simulations (9%), and non-clinical simulations (7%). Over 90% accuracy was achieved in 36% of studies. The literature shows that NOMTS and ML can enhance surgical precision, assessment, and training. Future research should advance ML in surgical environments, ensure model interpretability and reproducibility, and use larger datasets for accurate evaluation.
Affiliation(s)
- Teona Z Carciumaru: Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands; Department of Neurosurgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Cadey M Tang: Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Mohsen Farsi: Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Wichor M Bramer: Medical Library, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Jenny Dankelman: Department of Biomechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Chirag Raman: Department of Pattern Recognition and Bioinformatics, Delft University of Technology, Delft, the Netherlands
- Clemens M F Dirven: Department of Neurosurgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Maryam Gholinejad: Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands; Department of Biomechanical Engineering, Delft University of Technology, Delft, the Netherlands
- Dalibor Vasilic: Department of Plastic and Reconstructive Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
2. Moglia A, Georgiou K, Georgiou E, Satava RM, Cuschieri A. A systematic review on artificial intelligence in robot-assisted surgery. Int J Surg 2021; 95:106151. [PMID: 34695601] [DOI: 10.1016/j.ijsu.2021.106151]
Abstract
BACKGROUND Despite the extensive published literature on the significant potential of artificial intelligence (AI), there are no reports on its efficacy in improving patient safety in robot-assisted surgery (RAS). The purposes of this work are to systematically review the published literature on AI in RAS and to identify and discuss current limitations and challenges. MATERIALS AND METHODS A literature search was conducted on PubMed, Web of Science, Scopus, and IEEE Xplore according to the PRISMA 2020 statement. Eligible articles were peer-reviewed studies published in English from January 1, 2016 to December 31, 2020. AMSTAR 2 was used for quality assessment. Risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were presented in tables using the SPIDER tool. RESULTS Thirty-five publications, representing 3436 patients, met the search criteria and were included in the analysis. The selected reports concern motion analysis (n = 17), urology (n = 12), gynecology (n = 1), other specialties (n = 1), training (n = 3), and tissue retraction (n = 1). Precision for surgical tool detection varied from 76.0% to 90.6%. Mean absolute error in predicting urinary continence after robot-assisted radical prostatectomy (RARP) ranged from 85.9 to 134.7 days. Accuracy in predicting length of stay after RARP was 88.5%. Accuracy in recognizing the next surgical task during robot-assisted partial nephrectomy (RAPN) reached 75.7%. CONCLUSION The reviewed studies were of low quality. The findings are limited by the small size of the datasets, and comparison between studies on the same topic was restricted by the heterogeneity of algorithms and datasets. There is no proof that AI can currently identify the critical tasks of RAS operations that determine patient outcome. There is an urgent need for studies on large datasets and for external validation of the AI algorithms used. Furthermore, the results should be transparent and meaningful to surgeons, enabling them to inform patients in layman's terms. REGISTRATION Review Registry Unique Identifying Number: reviewregistry1225.
Affiliation(s)
- Andrea Moglia: EndoCAS, Center for Computer Assisted Surgery, University of Pisa, 56124, Pisa, Italy; 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Greece; MPLSC, Athens Medical School, National and Kapodistrian University of Athens, Greece; Department of Surgery, University of Washington Medical Center, Seattle, WA, United States; Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy; Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, United Kingdom
3. Meli D, Fiorini P. Unsupervised Identification of Surgical Robotic Actions From Small Non-Homogeneous Datasets. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3104880]
4. van Amsterdam B, Clarkson MJ, Stoyanov D. Gesture Recognition in Robotic Surgery: A Review. IEEE Trans Biomed Eng 2021; 68:2021-2035. [PMID: 33497324] [DOI: 10.1109/tbme.2021.3054828]
Abstract
OBJECTIVE Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines the open questions and future research directions. METHODS An article search was performed on 5 bibliographic databases with the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into different groups representing major frameworks for time series analysis and data modelling. RESULTS A total of 52 articles were reviewed. The research field is showing rapid expansion, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly worse than supervised approaches. CONCLUSION The development of large and diverse open-source datasets of annotated demonstrations is essential for building and validating robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated to achieve comparable performance. Important future research directions include detection and forecast of gesture-specific errors and anomalies. SIGNIFICANCE This paper is a comprehensive and structured analysis of surgical gesture recognition methods aiming to summarize the status of this rapidly evolving field.
5. Application of artificial intelligence in surgery. Front Med 2020; 14:417-430. [DOI: 10.1007/s11684-020-0770-0]
6. Lin CJ, Lee CL. Deployment and navigation of multiple robots using a self-clustering method and type-2 fuzzy controller in dynamic environments. Journal of Intelligent & Fuzzy Systems 2019. [DOI: 10.3233/jifs-182003]
Affiliation(s)
- Cheng-Jian Lin: Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taiwan, ROC
- Chin-Ling Lee: Department of International Business, National Taichung University of Science and Technology, Taiwan, ROC
7. Forestier G, Petitjean F, Senin P, Despinoy F, Huaulmé A, Fawaz HI, Weber J, Idoumghar L, Muller PA, Jannin P. Surgical motion analysis using discriminative interpretable patterns. Artif Intell Med 2018; 91:3-11. [PMID: 30172445] [DOI: 10.1016/j.artmed.2018.08.002]
Abstract
OBJECTIVE The analysis of surgical motion has received growing interest with the development of devices allowing its automatic capture. In this context, the use of advanced surgical training systems makes automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is a very important step in improving surgical patient care. MATERIAL AND METHOD In this paper, we present an approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach is based on the decomposition of continuous kinematic data into a set of overlapping gestures represented as strings (bag of words), for which we compute a comparative numerical statistic (tf-idf) that enables discovery of discriminative gestures via their relative occurrence frequency. RESULTS We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels and surgical interfaces. We also show how the patterns provide detailed feedback on the trainee's skill assessment. CONCLUSIONS The proposed approach is an interesting addition to existing learning tools for surgery, as it provides a way to obtain feedback on which parts of an exercise were used to classify the attempt as correct or incorrect.
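The bag-of-words/tf-idf statistic named in this abstract can be sketched in a few lines. The gesture "words", trial data, and function below are illustrative stand-ins, not the authors' implementation: each trial is assumed to have been discretised into a sequence of symbolic gesture words, and tf-idf then up-weights words that are frequent in one trial but rare across the corpus of trials.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute tf-idf weights per word for each document.

    docs: list of gesture-word sequences (lists of strings), e.g. one
    document per recorded trial, with words produced by a symbolic
    discretisation of the kinematic signal (assumed, for illustration).
    """
    n = len(docs)
    df = Counter()                       # document frequency per word
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            w: (tf[w] / total) * math.log(n / df[w])  # tf * idf
            for w in tf
        })
    return weights

# Toy example: a hypothetical word "g_hesitate" appears in the two
# novice-like trials but not the third, so it receives positive weight
# only where it is distinctive; ubiquitous words get weight zero.
trials = [
    ["g_reach", "g_grasp", "g_hesitate", "g_pull"],
    ["g_reach", "g_hesitate", "g_pull", "g_pull"],
    ["g_reach", "g_grasp", "g_pull"],
]
w = tfidf(trials)
```

Words occurring in every trial (here `g_reach`) get an idf of log(1) = 0, which is exactly how the statistic isolates gestures that discriminate one recording from the rest.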
Affiliation(s)
- Germain Forestier: IRIMAS, Université de Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Melbourne, Australia
- François Petitjean: Faculty of Information Technology, Monash University, Melbourne, Australia
- Pavel Senin: Los Alamos National Laboratory, University of Hawai'i at Mānoa, United States
- Fabien Despinoy: Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France
- Arnaud Huaulmé: Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France
- Pierre Jannin: Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France
9. A computationally efficient method for hand-eye calibration. Int J Comput Assist Radiol Surg 2017; 12:1775-1787. [PMID: 28726116] [PMCID: PMC5608875] [DOI: 10.1007/s11548-017-1646-x]
Abstract
Purpose Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand-eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes in surgical instruments, online hand-eye calibration must be performed regularly. To ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, it is important to derive fast and efficient hand-eye calibration methods. Methods We present a computationally efficient iterative method for hand-eye calibration. In this method, a dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover the real and dual parts of the dual quaternion simultaneously, yielding estimates of the rotation and translation of the transformation. Results The proposed method was applied to determine the rigid transformation between the stereo laparoscope and the robot manipulator. Experimental and simulation results showed a significant improvement in convergence speed, from more than 30 iterations with a standard optimization method to 3, illustrating the effectiveness and efficiency of the proposed method.
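A minimal sketch of the dual quaternion representation this abstract relies on (the representation only, not the authors' two-step solver): a rigid transform with unit rotation quaternion r and translation t is packed into a real part r and a dual part d = 0.5 · t ⊗ r, and the translation is recovered as the vector part of 2 · d ⊗ conj(r).

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def to_dual_quaternion(r, t):
    """Pack (unit quaternion r, translation t) into (real, dual) parts.
    The dual part encodes translation: d = 0.5 * t_quat * r."""
    t_quat = np.array([0.0, *t])         # pure quaternion (0, t)
    return r, 0.5 * qmul(t_quat, r)

def translation_from(real, dual):
    """Recover t from the vector part of 2 * dual * conj(real)."""
    conj = real * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(dual, conj)[1:]

# Round-trip check: 90-degree rotation about z plus a translation.
r = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
t = np.array([1.0, 2.0, 3.0])
real, dual = to_dual_quaternion(r, t)
t_back = translation_from(real, dual)
```

Estimating both parts jointly, as the paper does, keeps rotation and translation coupled in a single algebraic object instead of solving them sequentially.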
10. Fard MJ, Ameri S, Darin Ellis R, Chinnam RB, Pandya AK, Klein MD. Automated robot-assisted surgical skill evaluation: Predictive analytics approach. Int J Med Robot 2017; 14. [PMID: 28660725] [DOI: 10.1002/rcs.1850]
Abstract
BACKGROUND Surgical skill assessment has predominantly been a subjective task. Recently, technological advances such as robot-assisted surgery have created great opportunities for objective surgical evaluation. In this paper, we introduce a predictive framework for objective skill assessment based on movement trajectory data. Our aim is to build a classification framework to automatically evaluate the performance of surgeons with different levels of expertise. METHODS Eight global movement features are extracted from movement trajectory data captured by a da Vinci robot for surgeons with two levels of expertise: novice and expert. Three classification methods are applied: k-nearest neighbours, logistic regression and support vector machines. RESULTS The results show that the proposed framework can classify surgeons' expertise as novice or expert with an accuracy of 82.3% for a knot-tying task and 89.9% for a suturing task. CONCLUSION This study demonstrates and evaluates the ability of machine learning methods to automatically classify expert and novice surgeons using global movement features.
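The classification setup described above can be sketched with scikit-learn. The feature values, class separation, and sample sizes below are invented for illustration; they do not reproduce the paper's da Vinci features or its reported accuracies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for global movement features (e.g. completion
# time, path length, number of speed peaks -- hypothetical here):
# experts (label 1) are modelled as faster and smoother than novices.
rng = np.random.default_rng(0)
novice = rng.normal(loc=[120.0, 90.0, 25.0], scale=5.0, size=(40, 3))
expert = rng.normal(loc=[80.0, 50.0, 10.0], scale=5.0, size=(40, 3))
X = np.vstack([novice, expert])
y = np.array([0] * 40 + [1] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# The paper's three classifier families, with default-ish settings.
scores = {}
for name, clf in [("knn", KNeighborsClassifier(n_neighbors=3)),
                  ("logreg", LogisticRegression(max_iter=1000)),
                  ("svm", SVC(kernel="rbf", gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)   # held-out accuracy
```

On real trajectory data the classes overlap far more than in this toy, which is why the paper's accuracies sit in the 82-90% range rather than near-perfect.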
Affiliation(s)
- Mahtab J Fard: Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
- Sattar Ameri: Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
- R Darin Ellis: Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
- Ratna B Chinnam: Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
- Abhilash K Pandya: Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan, USA
- Michael D Klein: Department of Surgery, Wayne State University School of Medicine and Pediatric Surgery, Children's Hospital of Michigan, Detroit, Michigan, USA
11. Fard MJ, Pandya AK, Chinnam RB, Klein MD, Ellis RD. Distance-based time series classification approach for task recognition with application in surgical robot autonomy. Int J Med Robot 2016; 13. [DOI: 10.1002/rcs.1766]
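A common instantiation of distance-based time series classification, assumed here purely for illustration (the paper's exact distance measure and features are not given in this record), is 1-nearest-neighbour matching under dynamic time warping (DTW):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion, deletion along the warping path.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_1nn(query, templates):
    """Task recognition as 1-NN: return the label of the template
    trajectory with the smallest DTW distance to the query."""
    return min(templates, key=lambda item: dtw(query, item[1]))[0]

# Toy instrument-motion profiles for two hypothetical tasks.
templates = [
    ("suturing",   [0, 1, 2, 3, 2, 1, 0]),
    ("knot_tying", [0, 3, 0, 3, 0, 3, 0]),
]
# A query of different length and timing still matches "suturing",
# because DTW aligns sequences elastically in time.
label = classify_1nn([0, 1, 2, 2, 3, 2, 1, 0], templates)
```

The elastic alignment is what makes distance-based methods attractive for surgical task recognition, where the same task is executed at different speeds by different surgeons.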
Affiliation(s)
- Mahtab J. Fard: Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA
- Abhilash K. Pandya: Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Ratna B. Chinnam: Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA
- Michael D. Klein: Department of Surgery, Wayne State University School of Medicine and Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- R. Darin Ellis: Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA