1. Stan C, Ujvary PL, Blebea C, Tănase MI, Tănase M, Pop SS, Maniu AA, Cosgarea M, Rădeanu DG. Hand Motion Analysis Using Accelerometer-Based Sensors and Sheep's Head Model for Basic Training in Functional Endoscopic Sinus Surgery. Cureus 2024; 16:e59725. PMID: 38841010; PMCID: PMC11151713; DOI: 10.7759/cureus.59725.
Abstract
INTRODUCTION Motion analysis, the study of movement patterns to evaluate performance, plays a crucial role in surgical training. It provides objective data that can be used to assess and improve trainees' precision, efficiency, and overall surgical technique. The primary aim of this study is to employ accelerometer-based sensors placed on the wrist to analyze hand motions during endoscopic sinus surgery training on a sheep's head model, quantifying the motion characteristics that distinguish different levels of surgical expertise and enhancing the objectivity and effectiveness of surgical training feedback. MATERIALS AND METHODS Twenty-four participants were divided into three groups based on their experience with endoscopic endonasal surgery. Each participant performed specified procedures on an individual sheep's head, concentrating on exploring both nasal passages. A single Bluetooth WitMotion accelerometer sensor was mounted on the dorsal surface of each hand, enabling the evaluation of efficiency parameters such as time, path length, and acceleration during the training procedures. Accelerometer data were collected and imported in CSV (comma-separated values) format, and mean values and standard deviations were computed for each group of surgeons (senior, specialist, and resident). The Shapiro-Wilk test assessed the normality of the distributions, and the Kruskal-Wallis test was employed to compare differences in procedural time, acceleration, and path length across the three experience levels. RESULTS Procedural time differed significantly across all surgical steps (p<0.001), with the largest difference, favoring the senior group, in the septoplasty step. A clear difference was observed between the resultant acceleration of the dominant (instrument) hand and the non-dominant (endoscope) hand, as well as between the study groups; the between-group difference reached statistical significance (p<0.001). The path covered by each hand of every participant also differed significantly (p<0.001), and senior doctors covered significantly shorter paths with both hands than the specialists and the resident doctors (p<0.001). CONCLUSIONS The data show a clear learning curve from resident to senior, with residents taking more time and using more hand movements to complete the same tasks. Specialists are in an intermediate phase, showing signs of honing their technique towards efficiency. This data set can help tailor training programs to focus on both efficiency (quicker procedures) and economy of motion (reduced path length and acceleration), especially in more complex procedures where the difference in performance is more pronounced.
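The statistical pipeline described here (Shapiro-Wilk normality check followed by a Kruskal-Wallis comparison across the three experience groups) is straightforward to reproduce. Below is a minimal SciPy sketch using synthetic stand-in data, since the study's measurements are not public; the group sizes and column names are hypothetical.

```python
# Minimal sketch of the group comparison described above; synthetic stand-in
# data, not the study's measurements. Column names and group sizes are assumed.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["senior"] * 8 + ["specialist"] * 8 + ["resident"] * 8,
    "time_s": np.r_[rng.normal(60, 8, 8), rng.normal(80, 10, 8), rng.normal(110, 15, 8)],
    "path_length_m": np.r_[rng.normal(3, 0.5, 8), rng.normal(4, 0.6, 8), rng.normal(6, 1.0, 8)],
})

for metric in ["time_s", "path_length_m"]:
    # Shapiro-Wilk normality check per group, as in the study
    for group, sub in df.groupby("group"):
        print(f"{metric}/{group}: Shapiro-Wilk p = {stats.shapiro(sub[metric]).pvalue:.3f}")
    # Non-parametric comparison across senior / specialist / resident
    samples = [sub[metric].to_numpy() for _, sub in df.groupby("group")]
    h, p = stats.kruskal(*samples)
    print(f"{metric}: Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```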
Affiliation(s)
- Constantin Stan: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU; Surgical Clinical, Faculty of Medicine and Pharmacy, "Dunarea de Jos" University of Galati, Galati, ROU
- Peter L Ujvary: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
- Cristina Blebea: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
- Mihai I Tănase: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
- Mara Tănase: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
- Septimiu Sever Pop: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
- Alma A Maniu: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
- Marcel Cosgarea: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
- Doinel G Rădeanu: Otolaryngology, Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, ROU
2. Halperin L, Sroka G, Zuckerman I, Laufer S. Automatic performance evaluation of the intracorporeal suture exercise. Int J Comput Assist Radiol Surg 2024; 19:83-86. PMID: 37278834; DOI: 10.1007/s11548-023-02963-6.
Abstract
PURPOSE This work uses deep learning algorithms to provide automated feedback on the suture with intracorporeal knot exercise in the Fundamentals of Laparoscopic Surgery simulator. Different metrics were designed to provide informative feedback to the user on how to complete the task more efficiently. Automating the feedback allows students to practice at any time without the supervision of experts. METHODS Five residents and five senior surgeons participated in the study. Object detection, image classification, and semantic segmentation deep learning algorithms were used to collect statistics on the practitioner's performance. Three task-specific metrics were defined, covering the way the practitioner holds the needle before insertion into the Penrose drain and the amount of movement of the Penrose drain during the needle's insertion. RESULTS Good agreement was achieved between the human labeling and the performance and metric values of the different algorithms. The difference between the scores of the senior surgeons and the surgical residents was statistically significant for one of the metrics. CONCLUSION We developed a system that provides performance metrics for the intracorporeal suture exercise. These metrics can help surgical residents practice independently and receive informative feedback on how they entered the needle into the Penrose drain.
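One of the task-specific metrics above, the amount of Penrose drain movement during needle insertion, can be approximated from per-frame detector output. The sketch below totals bounding-box centroid displacement; the box format is an assumption, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): quantify Penrose-drain movement
# during needle insertion from per-frame bounding boxes produced by an object
# detector. Box format (x1, y1, x2, y2) in pixels is an assumption.
import numpy as np

def centroid(box):
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def drain_movement(boxes_per_frame):
    """Total centroid displacement of the Penrose drain across frames (pixels)."""
    centers = np.array([centroid(b) for b in boxes_per_frame])
    steps = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    return steps.sum()

# Example: a drain that drifts slightly over four frames
boxes = [(100, 100, 180, 140), (102, 101, 182, 141), (110, 104, 190, 144), (111, 104, 191, 144)]
print(f"total drain movement: {drain_movement(boxes):.1f} px")
```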
Affiliation(s)
- Liran Halperin: Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, 3200003, Haifa, Israel
- Gideon Sroka: Department of General Surgery, Bnai-Zion Medical Center, Haifa, Israel
- Ido Zuckerman: Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, 3200003, Haifa, Israel
- Shlomi Laufer: Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, 3200003, Haifa, Israel
3. Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023; 37:7412-7424. PMID: 37584774; PMCID: PMC10520175; DOI: 10.1007/s00464-023-10335-z.
Abstract
BACKGROUND Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified; 50 met the eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracy in simulated settings. However, all proposed models were in the development stage; only 4 studies were externally validated, and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett: Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni: IHU Strasbourg, Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi: Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy: IHU Strasbourg, Strasbourg, France; ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy: Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; IHU Strasbourg, Strasbourg, France; University Digestive Health Care Center Basel - Clarunis, PO Box, 4002, Basel, Switzerland
4. Liu Z, Hitchcock DB, Singapogu RB. Cannulation Skill Assessment Using Functional Data Analysis. IEEE J Biomed Health Inform 2023; 27:4512-4523. PMID: 37310836; PMCID: PMC10519736; DOI: 10.1109/jbhi.2023.3283188.
Abstract
OBJECTIVE A clinician's operative skill, the ability to safely and effectively perform a procedure, directly impacts patient outcomes and well-being. Therefore, it is necessary to accurately assess skill progression during medical training and to develop methods that train healthcare professionals most efficiently. METHODS In this study, we explore whether time-series needle-angle data recorded during cannulation on a simulator can be analyzed using functional data analysis methods to (1) identify skilled versus unskilled performance and (2) relate angle profiles to the degree of success of the procedure. RESULTS Our methods successfully differentiated between types of needle-angle profiles, and the identified profile types were associated with degrees of skilled and unskilled behavior. Furthermore, analyzing the types of variability in the dataset provided particular insight into the overall range of needle angles used and the rate of change of angle as cannulation progressed in time. Finally, cannulation angle profiles demonstrated an observable correlation with the degree of cannulation success, a metric closely related to clinical outcome. CONCLUSION In summary, the methods presented here enable rich assessment of clinical skill because the functional (i.e., dynamic) nature of the data is duly considered.
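A rough flavor of the functional approach can be given in a few lines: resample each trial's needle-angle time series onto a common grid and extract the dominant modes of variation with PCA. This is a simplified stand-in for the paper's functional data analysis, run here on synthetic angle profiles.

```python
# Simplified stand-in for a functional-data-style analysis of needle-angle
# profiles (not the paper's code): resample each trial onto a common grid,
# then use PCA to expose dominant modes of variation (angle range, rate of
# change). Data below are synthetic.
import numpy as np
from sklearn.decomposition import PCA

def resample(angles, n_points=100):
    t = np.linspace(0, 1, len(angles))
    grid = np.linspace(0, 1, n_points)
    return np.interp(grid, t, angles)

rng = np.random.default_rng(0)
# trials: needle angle (degrees) over time, variable lengths
trials = [30 + 10 * np.sin(np.linspace(0, np.pi, n)) + rng.normal(0, 1, n)
          for n in rng.integers(80, 200, size=20)]

X = np.vstack([resample(a) for a in trials])   # trials x common time grid
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                      # per-trial profile scores
print("variance explained:", pca.explained_variance_ratio_)
```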
5. Baghdadi A, Lama S, Singh R, Sutherland GR. Tool-tissue force segmentation and pattern recognition for evaluating neurosurgical performance. Sci Rep 2023; 13:9591. PMID: 37311965; DOI: 10.1038/s41598-023-36702-3.
Abstract
Quantifying and understanding surgical data exposes subtle patterns in tasks and performance. Enabling surgical devices with artificial intelligence provides surgeons with personalized and objective performance evaluation: a virtual surgical assist. Here we present machine learning models for analyzing surgical finesse using tool-tissue interaction force data obtained from sensorized bipolar forceps during surgical dissection. Data modeling was performed using 50 neurosurgical procedures involving elective treatment of various intracranial pathologies. Data were collected by 13 surgeons of varying experience levels using the sensorized bipolar forceps of the SmartForceps System. The machine learning algorithms were designed and implemented for three primary purposes: force profile segmentation to obtain active periods of tool utilization using T-U-Net, surgical skill classification into Expert and Novice, and surgical task recognition into the two primary categories of Coagulation versus non-Coagulation using the FTFIT deep learning architectures. The final report to the surgeon was a dashboard containing recognized segments of force application, categorized by skill and task class, along with charts of performance metrics compared with expert-level surgeons. More than 161 h of operating room recordings containing approximately 3600 periods of tool operation were utilized. The modeling achieved a weighted F1-score of 0.95 and AUC of 0.99 for force profile segmentation using T-U-Net, a weighted F1-score of 0.71 and AUC of 0.81 for surgical skill classification, and a weighted F1-score of 0.82 and AUC of 0.89 for surgical task recognition using a subset of hand-crafted features augmented to the FTFIT neural network. This study delivers a novel cloud-based machine learning module, enabling an end-to-end platform for intraoperative surgical performance monitoring and evaluation. Accessed through a secure application for professional connectivity, a paradigm for data-driven learning is established.
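The evaluation numbers reported above (weighted F1-score and AUC) follow the standard scikit-learn definitions. The sketch below shows how they are typically computed; the label and probability arrays are synthetic, not the study's outputs.

```python
# How weighted F1 and AUC are typically computed; synthetic labels, not the
# study's data or model outputs.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)                        # 0 = Novice, 1 = Expert
y_prob = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("AUC:", roc_auc_score(y_true, y_prob))
```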
Affiliation(s)
- Amir Baghdadi: Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Sanju Lama: Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Rahul Singh: Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Garnette R Sutherland: Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
6. Lazar A, Sroka G, Laufer S. Automatic assessment of performance in the FLS trainer using computer vision. Surg Endosc 2023. PMID: 37253868; DOI: 10.1007/s00464-023-10132-8.
Abstract
BACKGROUND The Fundamentals of Laparoscopic Surgery (FLS) box trainer is a well-accepted method for training and evaluating laparoscopic skills, but it requires an observer to measure and evaluate the trainee's performance. Measuring performance in the Peg Transfer task includes time and penalties for dropping pegs. This study aimed to assess whether computer vision (CV) may be used to automatically measure performance in the FLS box trainer. METHODS Four groups of metrics were defined and measured automatically using CV. Validity was assessed by dividing participants into three experience levels. Twenty-seven participants were recorded performing the Peg Transfer task 2-4 times, amounting to 72 videos. Frames were sampled from the videos and labeled to create an image dataset, which was used to train a deep neural network (YOLOv4) to detect the different objects in the video. We developed an evaluation system that tracks the transfer of the triangles and produces a feedback report with the metrics as the main criteria. The metric groups were Time, Grasper Movement Speed, Path Efficiency, and Grasper Coordination. Performance was compared based on each participant's last video (three participants were excluded due to technical issues). RESULTS ANOVA tests show that for all metrics except one, the variance in performance can be explained by the experience level of the participants. Senior surgeons and residents significantly outperform students and interns on almost every metric. Senior surgeons usually outperform residents, but the gap is not always significant. CONCLUSION The statistical analysis shows that the metrics can differentiate between experts and novices performing the task in several respects, and thus may provide a more detailed performance analysis than is currently used. Moreover, the metrics are calculated automatically and rely solely on the video camera of the FLS trainer, allowing independent training and assessment.
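Of the four metric groups, Path Efficiency is the easiest to illustrate. The sketch below assumes a common definition (straight-line displacement divided by travelled path length of the tracked grasper tip), which may differ in detail from the paper's.

```python
# Sketch of a "Path Efficiency" style metric. Assumed definition: straight-line
# displacement / actual path length of the tracked grasper tip; the paper's
# exact formula may differ.
import numpy as np

def path_efficiency(points):
    points = np.asarray(points, dtype=float)   # frames x 2 (pixel coordinates)
    path_len = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    direct = np.linalg.norm(points[-1] - points[0])
    return direct / path_len if path_len > 0 else 0.0

trajectory = [(10, 10), (20, 18), (35, 22), (50, 30)]     # grasper tip per frame
print(f"path efficiency: {path_efficiency(trajectory):.2f}")  # 1.0 = perfectly direct
```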
Affiliation(s)
- Aviad Lazar: Faculty of Data and Decision Sciences, Technion, Bloomfield 515, 32000, Haifa, Israel; Department of General Surgery, Bnai-Zion Medical Center, Haifa, Israel
- Gideon Sroka: Faculty of Data and Decision Sciences, Technion, Bloomfield 515, 32000, Haifa, Israel; Department of General Surgery, Bnai-Zion Medical Center, Haifa, Israel
- Shlomi Laufer: Faculty of Data and Decision Sciences, Technion, Bloomfield 515, 32000, Haifa, Israel; Department of General Surgery, Bnai-Zion Medical Center, Haifa, Israel
7. Jackson KL, Durić Z, Engdahl SM, Santago II AC, DeStefano S, Gerber LH. Computer-assisted approaches for measuring, segmenting, and analyzing functional upper extremity movement: a narrative review of the current state, limitations, and future directions. Front Rehabil Sci 2023; 4:1130847. PMID: 37113748; PMCID: PMC10126348; DOI: 10.3389/fresc.2023.1130847.
Abstract
The analysis of functional upper extremity (UE) movement kinematics has implications across domains such as rehabilitation and evaluating job-related skills. Using movement kinematics to quantify movement quality and skill is a promising area of research but is currently not being used widely due to issues associated with cost and the need for further methodological validation. Recent developments by computationally-oriented research communities have resulted in potentially useful methods for evaluating UE function that may make kinematic analyses easier to perform, generally more accessible, and provide more objective information about movement quality, the importance of which has been highlighted during the COVID-19 pandemic. This narrative review provides an interdisciplinary perspective on the current state of computer-assisted methods for analyzing UE kinematics with a specific focus on how to make kinematic analyses more accessible to domain experts. We find that a variety of methods exist to more easily measure and segment functional UE movement, with a subset of those methods being validated for specific applications. Future directions include developing more robust methods for measurement and segmentation, validating these methods in conjunction with proposed kinematic outcome measures, and studying how to integrate kinematic analyses into domain expert workflows in a way that improves outcomes.
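As a concrete example of the movement-segmentation step this review surveys, a common baseline (generic, not attributed to any reviewed method) splits a speed signal into movement bouts with a velocity threshold:

```python
# Generic velocity-threshold segmentation of a movement speed signal
# (a common baseline, not a method from the review). Threshold is assumed.
import numpy as np

def segment_movements(speed, threshold=0.05):
    """(start, end) sample index pairs (end exclusive) where speed > threshold."""
    active = np.r_[False, np.asarray(speed) > threshold, False]
    edges = np.flatnonzero(np.diff(active.astype(int)))
    return list(zip(edges[::2], edges[1::2]))

t = np.linspace(0, 4, 400)
speed = np.abs(np.sin(2 * np.pi * 0.5 * t)) * (t < 3)   # a few bouts, then rest
print(segment_movements(speed, threshold=0.1))
```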
Affiliation(s)
- Kyle L. Jackson: Department of Computer Science, George Mason University, Fairfax, VA, United States; MITRE Corporation, McLean, VA, United States
- Zoran Durić: Department of Computer Science, George Mason University, Fairfax, VA, United States; Center for Adaptive Systems and Brain-Body Interactions, George Mason University, Fairfax, VA, United States
- Susannah M. Engdahl: Center for Adaptive Systems and Brain-Body Interactions, George Mason University, Fairfax, VA, United States; Department of Bioengineering, George Mason University, Fairfax, VA, United States; American Orthotic & Prosthetic Association, Alexandria, VA, United States
- Lynn H. Gerber: Center for Adaptive Systems and Brain-Body Interactions, George Mason University, Fairfax, VA, United States; College of Public Health, George Mason University, Fairfax, VA, United States; Inova Health System, Falls Church, VA, United States
8. Louis N, Zhou L, Yule SJ, Dias RD, Manojlovich M, Pagani FD, Likosky DS, Corso JJ. Temporally guided articulated hand pose tracking in surgical videos. Int J Comput Assist Radiol Surg 2023; 18:117-125. PMID: 36190616; PMCID: PMC9883342; DOI: 10.1007/s11548-022-02761-6.
Abstract
PURPOSE Articulated hand pose tracking is an under-explored problem with potential for use in an extensive number of applications, especially in the medical domain. With a robust and accurate tracking system on surgical videos, the motion dynamics and movement patterns of the hands can be captured and analyzed for many rich tasks. METHODS In this work, we propose a novel hand pose estimation model, CondPose, which improves detection and tracking accuracy by incorporating a pose prior into its prediction. We show improvements over state-of-the-art methods that provide frame-wise independent predictions by following a temporally guided approach that effectively leverages past predictions. RESULTS We collect Surgical Hands, the first dataset that provides multi-instance articulated hand pose annotations for videos. The dataset provides over 8.1k annotated hand poses from publicly available surgical videos, with bounding boxes, pose annotations, and tracking IDs to enable multi-instance tracking. When evaluated on Surgical Hands, our method outperforms the state-of-the-art approach on mean Average Precision, which measures pose estimation accuracy, and Multiple Object Tracking Accuracy, which assesses pose tracking performance. CONCLUSION Compared with a frame-wise independent strategy, we show greater performance in detecting and tracking hand poses and a more substantial impact on localization accuracy. This has positive implications for generating more accurate representations of hands in the scene to be used in targeted downstream tasks.
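Multi-instance tracking of this kind requires associating detections with identities from frame to frame. CondPose uses a learned pose prior; a simpler, standard baseline is Hungarian matching on mean keypoint distance, sketched below with synthetic poses.

```python
# Frame-to-frame identity association for multi-instance hand tracking:
# a standard Hungarian-matching baseline on mean keypoint distance
# (illustrative; not CondPose's learned pose prior).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hands(prev_poses, curr_poses):
    """prev_poses, curr_poses: arrays of shape (n_hands, n_keypoints, 2)."""
    cost = np.array([[np.linalg.norm(p - c, axis=-1).mean() for c in curr_poses]
                     for p in prev_poses])
    rows, cols = linear_sum_assignment(cost)   # minimize total keypoint distance
    return list(zip(rows, cols))               # (prev_idx, curr_idx) pairs

rng = np.random.default_rng(2)
prev = rng.uniform(0, 640, size=(2, 21, 2))             # two hands, 21 keypoints
curr = prev[::-1] + rng.normal(0, 2, size=prev.shape)   # same hands, swapped order
print(match_hands(prev, curr))                          # recovers the swap: [(0, 1), (1, 0)]
```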
Affiliation(s)
- Steven J. Yule: Clinical Surgery, University of Edinburgh, Edinburgh, Scotland, UK
- Roger D. Dias: Emergency Medicine, Harvard Medical School, Boston, MA, USA
9. A virtual surgical prototype system based on gesture recognition for virtual surgical training in maxillofacial surgery. Int J Comput Assist Radiol Surg 2022; 18:909-919. PMID: 36418763; PMCID: PMC10113313; DOI: 10.1007/s11548-022-02790-1.
Abstract
Background
Virtual reality (VR) technology is an ideal alternative for operation training and surgical teaching. However, virtual surgery is usually carried out using a mouse or data gloves, which limits the authenticity of the virtual operation. A virtual surgery system with gesture recognition and real-time image feedback was explored to achieve more authentic immersion.
Method
An efficient, real-time, high-fidelity gesture recognition technology was developed. Recognition of the hand contour, palm, and fingertips was first realized through hand data extraction. A Support Vector Machine classifier was then utilized to classify and recognize common gestures after feature extraction. The collision detection algorithm used an Axis-Aligned Bounding Box (AABB) binary tree to build hand and scalpel collision models, and the nominal radius theorem (NRT) and separating axis theorem (SAT) were applied to speed up collision detection. The feasibility of integrating these technologies was evaluated in the maxillofacial virtual surgical system we proposed previously.
Results
Ten static gestures were designed to test the gesture recognition algorithm, which achieved recognition accuracy above 80% for all gestures and above 90% for some. With NRT and SAT, the collision detection models were generated quickly enough to meet the software requirements. The response time of gesture recognition was less than 40 ms; that is, the hand gesture recognition system ran at more than 25 Hz. With gesture recognition integrated, typical virtual surgical procedures, including grabbing a scalpel, selecting a puncture site, performing a virtual puncture, and making an incision, were carried out with real-time image feedback.
Conclusion
Building on our previous maxillofacial virtual surgical system, which combined VR, triangular-mesh collision detection, and a maxillofacial biomechanical model, integrating hand gesture recognition proved a feasible way to improve the interactivity and immersion of virtual surgical training.
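The primitive underlying the AABB-tree collision check mentioned above is the axis-aligned box overlap test, which is the separating-axis test restricted to the coordinate axes. A minimal sketch:

```python
# Core primitive of an AABB-tree collision check (sketch): two axis-aligned
# boxes overlap iff their intervals overlap on every axis. This is the
# separating-axis test restricted to the coordinate axes.
def aabb_overlap(a_min, a_max, b_min, b_max):
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Scalpel-tip box vs. hand box (coordinates are made up)
print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (1.5, 0, 0), (2, 1, 1)))      # False
```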
10. Gholinejad M, Pelanis E, Aghayan D, Fretland ÅA, Edwin B, Terkivatan T, Elle OJ, Loeve AJ, Dankelman J. Generic surgical process model for minimally invasive liver treatment methods. Sci Rep 2022; 12:16684. PMID: 36202857; PMCID: PMC9537522; DOI: 10.1038/s41598-022-19891-1.
Abstract
Surgical process modelling is an innovative approach that aims to simplify the challenges involved in improving surgeries through quantitative analysis of a well-established model of surgical activities. In this paper, surgical process modelling strategies are applied to the analysis of different Minimally Invasive Liver Treatments (MILTs), including ablation and surgical resection of liver lesions, and a generic surgical process model covering these MILT variants is introduced. The generic model was established at three different granularity levels; it encompasses thirteen phases and was verified against videos of MILT procedures and interviews with surgeons. The established model covers all the surgical and interventional activities and the connections between them, and provides a foundation for extensive quantitative analysis and simulation of MILT procedures for improving computer-assisted surgery systems, surgeon training and evaluation, surgeon guidance and planning systems, and the evaluation of new technologies.
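At phase granularity, such a process model is naturally a directed graph of phases with permitted transitions. The sketch below uses illustrative phase names; the paper's thirteen phases are not reproduced here.

```python
# A surgical process model at phase granularity as a directed graph.
# Phase names are illustrative, not the paper's thirteen phases.
from collections import defaultdict

transitions = defaultdict(list)
for src, dst in [("preparation", "access"),
                 ("access", "lesion_localization"),
                 ("lesion_localization", "resection"),
                 ("lesion_localization", "ablation"),
                 ("resection", "closure"),
                 ("ablation", "closure")]:
    transitions[src].append(dst)

def valid_sequence(phases):
    """Check that an observed phase sequence respects the model's transitions."""
    return all(b in transitions[a] for a, b in zip(phases, phases[1:]))

print(valid_sequence(["preparation", "access", "lesion_localization", "ablation", "closure"]))  # True
```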
Affiliation(s)
- Maryam Gholinejad: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
- Egidius Pelanis: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Medical Faculty, University of Oslo, Oslo, Norway
- Davit Aghayan: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University After M. Heratsi, Yerevan, Armenia
- Åsmund Avdem Fretland: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital, Oslo, Norway
- Bjørn Edwin: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Medical Faculty, University of Oslo, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital, Oslo, Norway
- Turkan Terkivatan: Department of Surgery, Division of HPB and Transplant Surgery, Erasmus MC, University Medical Centre Rotterdam, Rotterdam, The Netherlands
- Ole Jakob Elle: The Intervention Centre, Oslo University Hospital, Oslo, Norway
- Arjo J Loeve: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
- Jenny Dankelman: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
11. An explainable machine learning method for assessing surgical skill in liposuction surgery. Int J Comput Assist Radiol Surg 2022; 17:2325-2336. PMID: 36167953; DOI: 10.1007/s11548-022-02739-4.
Abstract
PURPOSE Surgical skill assessment has received growing interest in surgical training and quality control due to its essential role in competency assessment and trainee feedback. However, current assessment methods rarely provide corresponding feedback guidance alongside the ability evaluation. We aim to validate an explainable surgical skill assessment method that automatically evaluates trainee performance in liposuction surgery and provides visual postoperative and real-time feedback. METHODS In this study, machine learning with a model-agnostic interpretability method, based on stroke segmentation, was introduced to objectively evaluate surgical skills. We evaluated the method on liposuction surgery datasets consisting of motion and force data for classification tasks. RESULTS Our classifier achieved promising accuracy on clinical and imitation liposuction surgery models, ranging from 89 to 94%. With the help of SHapley Additive exPlanations (SHAP), we explore the operating patterns that distinguish surgeons of different experience levels and provide real-time, model-based feedback to surgeons with weaker skills. CONCLUSION Our results demonstrate the strong abilities of explainable machine learning methods in objective surgical skill assessment. We believe that the interpretable machine learning model proposed in this article can improve the evaluation and training of liposuction surgery and provide objective assessment and training guidance for other surgeries.
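A sketch of the explainability step follows, under assumed details: a classifier fit on hypothetical per-stroke motion/force features, interrogated with SHAP to see which features drive the experience-level prediction. The feature names, model choice, and data are illustrative, not the authors'.

```python
# Sketch of the SHAP explainability step (assumed setup, not the authors'
# model): classify experience level from per-stroke features, then inspect
# per-feature contributions. Feature names below are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
feature_names = ["stroke_speed", "peak_force", "stroke_length", "force_smoothness"]
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = experienced (synthetic rule)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
# Depending on the shap version, classifiers return a list of per-class arrays
# or a single (samples, features, classes) array; reduce to per-feature impact.
sv = shap_values[1] if isinstance(shap_values, list) else np.asarray(shap_values)
for name, val in zip(feature_names, np.abs(sv).mean(axis=0)):
    print(f"mean |SHAP| for {name}: {np.atleast_1d(val).mean():.3f}")
```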
12. Continuous monitoring of surgical bimanual expertise using deep neural networks in virtual reality simulation. NPJ Digit Med 2022; 5:54. PMID: 35473961; PMCID: PMC9042967; DOI: 10.1038/s41746-022-00596-8.
Abstract
In procedural-based medicine, technical ability can be a critical determinant of patient outcomes. Psychomotor performance occurs in real time, hence continuous assessment is necessary to provide action-oriented feedback and error-avoidance guidance. We outline a deep learning application, the Intelligent Continuous Expertise Monitoring System (ICEMS), to assess surgical bimanual performance at 0.2-s intervals. A long short-term memory network was built using neurosurgeon and student performance in 156 virtually simulated tumor resection tasks. The algorithm's predictive ability was tested separately on 144 procedures by scoring the performance of neurosurgical trainees at different training stages. The ICEMS successfully differentiated between neurosurgeons, senior trainees, junior trainees, and students, and trainee average performance score correlated with the year of neurosurgical training. Furthermore, coaching and risk assessment for critical metrics were demonstrated. This work presents a comprehensive technical skill monitoring system with predictive validation throughout surgical residency training, with the ability to detect errors.
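A minimal sketch of the ICEMS idea, with architecture details assumed rather than taken from the paper: an LSTM consumes one feature vector per 0.2-s interval and emits a running expertise score at every step.

```python
# Minimal sketch of continuous expertise scoring with an LSTM (architecture
# details and feature count are assumptions, not the paper's).
import torch
import torch.nn as nn

class ExpertiseLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out))    # score in [0, 1] at every 0.2-s step

model = ExpertiseLSTM()
stream = torch.randn(1, 300, 8)                 # 300 steps = 60 s of simulation
scores = model(stream)
print(scores.shape)                             # torch.Size([1, 300, 1])
```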
13. Koskinen J, Huotarinen A, Elomaa AP, Zheng B, Bednarik R. Movement-level process modeling of microsurgical bimanual and unimanual tasks. Int J Comput Assist Radiol Surg 2021; 17:305-314. PMID: 34913139; PMCID: PMC8784365; DOI: 10.1007/s11548-021-02537-4.
Abstract
Purpose Microsurgical techniques require highly skilled manual handling of specialized surgical instruments. Surgical process models are central for objective evaluation of these skills, enabling data-driven solutions that can improve intraoperative efficiency. Method We built a surgical process model, defined at movement level in terms of elementary surgical actions (n=4) and targets (n=4). The model also included nonproductive movements, which enabled us to evaluate suturing efficiency and bimanual dexterity. The elementary activities were used to investigate differences between novice (n=5) and expert surgeons (n=5) by comparing the cosine similarity of vector representations of a microsurgical suturing training task and its different segments. Results Based on our model, the experts were significantly more efficient than the novices at using their tools individually and simultaneously. At suture level, the experts were significantly more efficient at using their left-hand tool, but the differences were not significant for the right-hand tool. At the level of individual suture segments, the experts had on average 21.0% higher suturing efficiency and 48.2% higher bimanual efficiency, and the results varied between segments. Similarity of the manual actions showed that expert and novice surgeons could be distinguished by their movement patterns. Conclusions The surgical process model allowed us to identify differences between novices' and experts' movements and to evaluate their uni- and bimanual tool use efficiency. Analyzing surgical tasks in this manner could be used to evaluate surgical skill and help surgical trainees detect problems in their performance computationally.
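The similarity comparison described above reduces to cosine similarity between vectors of elementary action counts. A minimal sketch with made-up counts:

```python
# Cosine similarity between vector representations of task segments
# (the action counts below are made up for illustration).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Counts over (action, target) combinations for one suture segment
expert = np.array([12, 3, 8, 1, 0, 5, 2, 4])
novice = np.array([20, 9, 6, 7, 5, 3, 8, 2])
print(f"expert-novice similarity: {cosine(expert, novice):.3f}")
```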
Affiliation(s)
- Jani Koskinen: School of Computing, University of Eastern Finland, 80110, Joensuu, Finland
- Antti Huotarinen: Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, 70211, Kuopio, Finland; Microsurgery Center, Kuopio University Hospital, 70211, Kuopio, Finland
- Antti-Pekka Elomaa: Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, 70211, Kuopio, Finland; Microsurgery Center, Kuopio University Hospital, 70211, Kuopio, Finland
- Bin Zheng: Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, Canada
- Roman Bednarik: School of Computing, University of Eastern Finland, 80110, Joensuu, Finland
14. Bilgic E, Gorgy A, Yang A, Cwintal M, Ranjbar H, Kahla K, Reddy D, Li K, Ozturk H, Zimmermann E, Quaiattini A, Abbasgholizadeh-Rahimi S, Poenaru D, Harley JM. Exploring the roles of artificial intelligence in surgical education: A scoping review. Am J Surg 2021; 224:205-216. PMID: 34865736; DOI: 10.1016/j.amjsurg.2021.11.023.
Abstract
BACKGROUND Technology-enhanced teaching and learning, including Artificial Intelligence (AI) applications, has started to evolve in surgical education. Hence, the purpose of this scoping review is to explore the current and future roles of AI in surgical education. METHODS Nine bibliographic databases were searched from January 2010 to January 2021. Full-text articles were included if they focused on AI in surgical education. RESULTS Out of 14,008 unique sources of evidence, 93 were included. Of these, 84 were conducted in the simulation setting, and 89 targeted technical skills. Fifty-six studies focused on skills assessment/classification, and 36 used multiple AI techniques. Increasing sample size, having balanced data, and using AI to provide feedback were major future directions mentioned by the authors. CONCLUSIONS AI can help optimize the education of trainees, and our results can help educators and researchers identify areas that need further investigation.
Affiliation(s)
- Elif Bilgic: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Andrew Gorgy: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Alison Yang: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Michelle Cwintal: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Hamed Ranjbar: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Kalin Kahla: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Dheeksha Reddy: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Kexin Li: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Helin Ozturk: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Eric Zimmermann: Department of Surgery, McGill University, Montreal, Quebec, Canada
- Andrea Quaiattini: Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada
- Samira Abbasgholizadeh-Rahimi: Department of Family Medicine, McGill University, Montreal, Quebec, Canada; Department of Electrical and Computer Engineering, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila Quebec AI Institute, Montreal, Canada
- Dan Poenaru: Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Department of Pediatric Surgery, McGill University, Canada
- Jason M Harley: Department of Surgery, McGill University, Montreal, Quebec, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada; Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Quebec, Canada
15. Castillo-Segura P, Fernández-Panadero C, Alario-Hoyos C, Muñoz-Merino PJ, Delgado Kloos C. A cost-effective IoT learning environment for the training and assessment of surgical technical skills with visual learning analytics. J Biomed Inform 2021; 124:103952. PMID: 34798158; DOI: 10.1016/j.jbi.2021.103952.
Abstract
BACKGROUND Surgeons need to train and certify their technical skills. This is usually done with the intervention of experts who monitor and assess trainees. Nevertheless, this is a time-consuming task that is subject to variation among evaluators. In recent decades, subjectivity has been significantly reduced through 1) the introduction of standard curricula, such as the Fundamentals of Laparoscopic Surgery (FLS) program, which measures students' performance in specific exercises, and 2) rubrics, which are widely accepted in the literature and serve to provide feedback about the overall technical skills of trainees. Although these two elements reduce subjectivity, they do not eliminate the figure of the expert evaluator, and so the process remains time-consuming. OBJECTIVES The objective of this work is to automate those parts of the expert evaluator's work that technology can measure objectively, using sensors to collect evidence and visualizations to provide feedback. We designed and developed 1) a cost-effective IoT (Internet of Things) learning environment for the training and assessment of surgical technical skills and 2) visualizations supported by the literature on visual learning analytics (VLA) that provide feedback about the exercises (in real time) and about overall performance (at the end of training). METHODS A hybrid approach based on previous research was followed for the design of the sensor-based IoT learning environment. Previous studies served as the basis for best practices on tracking surgical instruments and detecting the force applied to tissue, with a focus on reducing the cost of data collection. Monitoring the specific exercises required designing sensors and collection mechanisms from scratch, as there is little existing research on this subject. Moreover, it was necessary to design the overall architecture to collect, process, synchronize, and communicate the data coming from the different sensors in order to provide high-level information relevant to the end user. The information to be presented had already been validated by the literature, so the focus was on how to visualize this information and on the optimal time to present it to end users. The visualizations were validated with 18 VLA experts assessing their technical aspects and 4 medical experts assessing their functional aspects. RESULTS This IoT learning environment amplifies the evaluation mechanisms already validated by the literature, allowing automatic data collection. First, it uses IoT sensors to automatically correct two of the exercises defined in the FLS (peg transfer and precision cutting), providing real-time visualizations. Second, it monitors the movement of the surgical instruments and the force applied to the tissues during the exercise, computing 6 of the high-level indicators used by expert evaluators in their rubrics (efficiency, economy of movement, hand tremor, depth perception, bimanual dexterity, and respect for tissue), and providing feedback about the trainee's technical skills using a radar chart with these six indicators at the end of training (summative visualizations). CONCLUSIONS The proposed IoT learning environment is a promising and cost-effective alternative to help in the training and assessment of surgical technical skills. The system shows the trainees' progress and presents new indicators about the correctness of each specific exercise through real-time visualizations, as well as their general technical skills through summative visualizations aligned with the 6 most frequent indicators in standardized scales. Early results suggest that although both types of visualization are useful, it is necessary to reduce the cognitive load of the graphs presented in real time during training. Nevertheless, an additional evaluation is needed to confirm these results.
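The summative radar chart over the six rubric indicators is easy to reproduce with matplotlib. The sketch below uses made-up trainee scores and an assumed 0-1 scaling.

```python
# Sketch of the summative radar chart over the six indicators named above.
# Values and scaling are made up; the paper's presentation may differ.
import numpy as np
import matplotlib.pyplot as plt

labels = ["efficiency", "economy of movement", "hand tremor",
          "depth perception", "bimanual dexterity", "respect for tissue"]
values = [0.7, 0.6, 0.8, 0.5, 0.65, 0.9]             # trainee scores in [0, 1]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values_closed = values + values[:1]                   # close the polygon
angles_closed = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles_closed, values_closed)
ax.fill(angles_closed, values_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(labels, fontsize=8)
ax.set_ylim(0, 1)
plt.show()
```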
Affiliation(s)
- Pablo Castillo-Segura: Universidad Carlos III de Madrid, Avenida Universidad 30, 28911 Leganés, Madrid, Spain
- Carlos Alario-Hoyos: Universidad Carlos III de Madrid, Avenida Universidad 30, 28911 Leganés, Madrid, Spain
- Pedro J Muñoz-Merino: Universidad Carlos III de Madrid, Avenida Universidad 30, 28911 Leganés, Madrid, Spain
- Carlos Delgado Kloos: Universidad Carlos III de Madrid, Avenida Universidad 30, 28911 Leganés, Madrid, Spain
16.
Abstract
Among the various robotic devices that exist for urologic surgery, the most common are synergistic telemanipulator systems. Several have achieved clinical feasibility and have been licensed for use in humans: the standard da Vinci, Avatera, Hinotori, Revo-i, Senhance, Versius, and Surgenius. Handheld and hands-on synergistic systems are also clinically relevant for use in urologic surgeries, including minimally invasive and endoscopic approaches. Future trends of robotic innovation include an exploration of more robust haptic systems that offer kinesthetic and tactile feedback; miniaturization and microrobotics; enhanced visual feedback with greater magnification and higher fidelity detail; and autonomous robots.
17. Kitaguchi D, Takeshita N, Matsuzaki H, Igaki T, Hasegawa H, Ito M. Development and Validation of a 3-Dimensional Convolutional Neural Network for Automatic Surgical Skill Assessment Based on Spatiotemporal Video Analysis. JAMA Netw Open 2021; 4:e2120786. PMID: 34387676; PMCID: PMC8363914; DOI: 10.1001/jamanetworkopen.2021.20786.
Abstract
IMPORTANCE A high level of surgical skill is essential to prevent intraoperative problems. One important aspect of surgical education is surgical skill assessment, with pertinent feedback facilitating efficient skill acquisition by novices. OBJECTIVES To develop a 3-dimensional (3-D) convolutional neural network (CNN) model for automatic surgical skill assessment and to evaluate the performance of the model in classification tasks by using laparoscopic colorectal surgical videos. DESIGN, SETTING, AND PARTICIPANTS This prognostic study used surgical videos acquired prior to 2017. In total, 650 laparoscopic colorectal surgical videos were provided for study purposes by the Japan Society for Endoscopic Surgery, and 74 were randomly extracted. Every video had highly reliable scores based on the Endoscopic Surgical Skill Qualification System (ESSQS, range 1-100, with higher scores indicating greater surgical skill) established by the society. Data were analyzed June to December 2020. MAIN OUTCOMES AND MEASURES From the groups with scores less than the difference between the mean and 2 SDs, within the range spanning the mean and 1 SD, and greater than the sum of the mean and 2 SDs, 17, 26, and 31 videos, respectively, were randomly extracted. In total, 1480 video clips with a length of 40 seconds each were extracted for each surgical step (medial mobilization, lateral mobilization, inferior mesenteric artery transection, and mesorectal transection) and separated into 1184 training sets and 296 test sets. Automatic surgical skill classification was performed based on spatiotemporal video analysis using the fully automated 3-D CNN model, and classification accuracies and screening accuracies for the groups with scores less than the mean minus 2 SDs and greater than the mean plus 2 SDs were calculated. RESULTS The mean (SD) ESSQS score of all 650 intraoperative videos was 66.2 (8.6) points and for the 74 videos used in the study, 67.6 (16.1) points. The proposed 3-D CNN model automatically classified video clips into groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs with a mean (SD) accuracy of 75.0% (6.3%). The highest accuracy was 83.8% for the inferior mesenteric artery transection. The model also screened for the group with scores less than the mean minus 2 SDs with 94.1% sensitivity and 96.5% specificity and for group with greater than the mean plus 2 SDs with 87.1% sensitivity and 86.0% specificity. CONCLUSIONS AND RELEVANCE The results of this prognostic study showed that the proposed 3-D CNN model classified laparoscopic colorectal surgical videos with sufficient accuracy to be used for screening groups with scores greater than the mean plus 2 SDs and less than the mean minus 2 SDs. The proposed approach was fully automatic and easy to use for various types of surgery, and no special annotations or kinetics data extraction were required, indicating that this approach warrants further development for application to automatic surgical skill assessment.
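A sketch of the model family used here: a 3-D CNN classifying a video clip into the three ESSQS score groups. The code below uses torchvision's off-the-shelf r3d_18 (torchvision >= 0.13) as a stand-in; it is not the authors' exact architecture, clip length, or input resolution.

```python
# Stand-in for the paper's 3-D CNN: torchvision's r3d_18 classifying a video
# clip into the three ESSQS score groups. Clip size is an assumption.
import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights=None, num_classes=3)   # <mean-2SD, within 1SD, >mean+2SD
clip = torch.randn(1, 3, 32, 112, 112)        # (batch, RGB, frames, height, width)
logits = model(clip)
print(logits.shape)                           # torch.Size([1, 3])
```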
Affiliation(s)
- Daichi Kitaguchi: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Nobuyoshi Takeshita: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiroki Matsuzaki: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Takahiro Igaki: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiro Hasegawa: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Masaaki Ito: Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
18.
19. Lefor AK, Harada K, Dosis A, Mitsuishi M. Motion analysis of the JHU-ISI Gesture and Skill Assessment Working Set II: learning curve analysis. Int J Comput Assist Radiol Surg 2021; 16:589-595. PMID: 33723706; DOI: 10.1007/s11548-021-02339-8.
Abstract
PURPOSE The Johns Hopkins-Intuitive Gesture and Skill Assessment Working Set (JIGSAWS) dataset is used to develop robotic surgery skill assessment tools, but there has been no detailed analysis of this dataset. The aim of this study is to perform a learning curve analysis of the existing JIGSAWS dataset. METHODS Five trials were performed in JIGSAWS by eight participants (four novices, two intermediates, and two experts) for three exercises (suturing, knot-tying, and needle passing). Global Rating Scale scores and time, path length, and number of movements were analyzed quantitatively and qualitatively by graphical analysis. RESULTS There were no significant differences in Global Rating Scale scores over time. Time in the suturing exercise and path length in needle passing showed significant differences; other kinematic parameters did not differ significantly. Qualitative analysis shows a learning curve only for suturing, and cumulative sum analysis suggests completion of the suturing learning curve by trial 4. CONCLUSIONS The existing JIGSAWS dataset does not show a quantitative learning curve for Global Rating Scale scores or most kinematic parameters, which may be due in part to the limited size of the dataset. Qualitative analysis shows a learning curve for suturing, and cumulative sum analysis suggests its completion by trial 4. An expanded dataset is needed to facilitate subset analyses.
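Cumulative sum (CUSUM) analysis of a learning curve has a simple generic form: accumulate deviations of each trial's score from a target and look for the point where the curve flattens. The sketch below uses made-up trial times, not JIGSAWS data.

```python
# Generic CUSUM learning-curve analysis (the target and scores below are
# made up; JIGSAWS data are not reproduced here).
import numpy as np

def cusum(scores, target):
    """Cumulative sum of deviations from a target score; a plateau suggests
    the learning curve is complete."""
    return np.cumsum(np.asarray(scores, dtype=float) - target)

trial_times = [310, 280, 240, 205, 200]     # e.g. suturing time per trial (s)
print(cusum(trial_times, target=210))       # flattens once performance stabilizes
```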
Affiliation(s)
- Alan Kawarai Lefor: Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Kanako Harada: Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan; Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Mamoru Mitsuishi: Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan; Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
20. Visual Intelligence: Prediction of Unintentional Surgical-Tool-Induced Bleeding during Robotic and Laparoscopic Surgery. Robotics 2021. DOI: 10.3390/robotics10010037.
Abstract
Unintentional vascular damage can result from a surgical instrument’s abrupt movements during minimally invasive surgery (laparoscopic or robotic). A novel real-time image processing algorithm based on local entropy is proposed that can detect abrupt movements of surgical instruments and predict bleeding occurrence. The uniform nature of the texture of surgical tools is utilized to segment the tools from the background. By comparing changes in entropy over time, the algorithm determines when the surgical instruments are moved abruptly. We tested the algorithm using 17 videos of minimally invasive surgery, 11 of which had tool-induced bleeding. Our preliminary testing shows that the algorithm is 88% accurate and 90% precise in predicting bleeding. The average advance warning time for the 11 videos is 0.662 s, with the standard deviation being 0.427 s. The proposed approach has the potential to eventually lead to a surgical early warning system or even proactively attenuate tool movement (for robotic surgery) to avoid dangerous surgical outcomes.
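A sketch of the local-entropy idea described above (not the authors' code): compute a per-pixel local entropy map for consecutive frames and flag frames where the map changes sharply. The window size and threshold here are assumptions.

```python
# Sketch of abrupt-movement detection via local entropy (not the authors'
# implementation). Window radius and threshold are assumptions.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def entropy_map(gray_uint8):
    return entropy(gray_uint8, disk(5))        # local entropy in a 5-px radius

def abrupt_change(frame_prev, frame_curr, threshold=0.1):
    d = np.abs(entropy_map(frame_curr) - entropy_map(frame_prev)).mean()
    return d > threshold, d

rng = np.random.default_rng(4)
f1 = np.zeros((120, 160), dtype=np.uint8)
f1[40:70, 20:60] = rng.integers(0, 255, size=(30, 40))   # textured "tool" region
f2 = np.zeros_like(f1)
f2[40:70, 90:130] = f1[40:70, 20:60]                     # tool jumps to the right
print(abrupt_change(f1, f2))                             # (True, ...)
```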
21. Davids J, Makariou SG, Ashrafian H, Darzi A, Marcus HJ, Giannarou S. Automated Vision-Based Microsurgical Skill Analysis in Neurosurgery Using Deep Learning: Development and Preclinical Validation. World Neurosurg 2021; 149:e669-e686. PMID: 33588081; DOI: 10.1016/j.wneu.2021.01.117.
Abstract
BACKGROUND/OBJECTIVE Technical skill acquisition is an essential component of neurosurgical training. Educational theory suggests that optimal learning and improvement in performance depend on the provision of objective feedback. The aim of this study was therefore to develop a vision-based framework, based on a novel representation of surgical tool motion and interactions, capable of automated and objective assessment of microsurgical skill. METHODS Videos were obtained from 1 expert, 6 intermediate, and 12 novice surgeons performing arachnoid dissection in a validated clinical model using a standard operating microscope. A mask region-based convolutional neural network framework was used to segment the tools present within the operative field in each recorded video frame, and tool motion analysis was achieved using novel triangulation metrics. Performance of the framework in classifying skill levels was evaluated using the area under the curve and accuracy. Objective measures for classifying the surgeons' skill level were also compared using the Mann-Whitney U test, with P < 0.05 considered statistically significant. RESULTS The area under the curve was 0.977 and the accuracy was 84.21%. A number of differences were found, including experts having a lower median dissector velocity (P = 0.0004; 190.38 ms-1 vs. 116.38 ms-1) and a smaller inter-tool tip distance (median 46.78 vs. 75.92; P = 0.0002) compared with novices. CONCLUSIONS Automated and objective analysis of microsurgery is feasible using a mask region-based convolutional neural network and a novel representation of tool motion and interaction. This may support technical skills training and assessment in neurosurgery.
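The two discriminative measures reported above, median tool velocity and median inter-tool tip distance, can be computed directly from tracked tool-tip trajectories. A sketch follows; the tracking output format and frame rate are assumed, and units depend on calibration.

```python
# Sketch of the two discriminative measures named above, computed from tracked
# tool-tip positions (tracking output and frame rate are assumptions).
import numpy as np

def median_velocity(tips, fps=25):
    tips = np.asarray(tips, dtype=float)          # frames x 2, tool-tip positions
    return float(np.median(np.linalg.norm(np.diff(tips, axis=0), axis=1) * fps))

def median_inter_tool_distance(tips_a, tips_b):
    d = np.linalg.norm(np.asarray(tips_a, float) - np.asarray(tips_b, float), axis=1)
    return float(np.median(d))

dissector = [(100, 100), (104, 102), (109, 103), (115, 105)]
forceps = [(140, 120), (141, 121), (142, 121), (143, 122)]
print(median_velocity(dissector), median_inter_tool_distance(dissector, forceps))
```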
Collapse
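Given per-frame instrument-tip coordinates (for example, from the segmentation step described above), metrics like the two reported here reduce to simple kinematic summaries. The sketch below is illustrative only: the tip-extraction step, frame rate, and units are assumptions, not details taken from the paper.

```python
# Sketch: tool-motion summaries of the kind reported above, computed from
# per-frame instrument-tip coordinates (assumed already extracted, e.g. by
# a Mask R-CNN detector). Units depend on the calibration used.
import numpy as np

def median_tip_velocity(tips_xy, fps):
    """tips_xy: (n_frames, 2) tip positions of one instrument."""
    step = np.linalg.norm(np.diff(tips_xy, axis=0), axis=1)  # per-frame displacement
    return float(np.median(step * fps))                      # distance per second

def median_intertool_distance(tips_a, tips_b):
    """Median distance between two instrument tips over a video."""
    return float(np.median(np.linalg.norm(tips_a - tips_b, axis=1)))
```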
Affiliation(s)
- Joseph Davids
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Savvas-George Makariou
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
- Hutan Ashrafian
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom
- Ara Darzi
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom
- Hani J Marcus
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Stamatia Giannarou
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom.
Collapse
22
Alnafisee N, Zafar S, Vedula SS, Sikder S. Current methods for assessing technical skill in cataract surgery. J Cataract Refract Surg 2021; 47:256-264. [PMID: 32675650 DOI: 10.1097/j.jcrs.0000000000000322] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Accepted: 06/19/2020] [Indexed: 12/18/2022]
Abstract
Surgery is a major source of errors in patient care. Preventing complications from surgical errors in the operating room is estimated to avert up to 41,846 readmissions and save $620.3 million per year. It is now established that poor technical skill is associated with an increased risk of severe postoperative adverse events, and traditional models for training surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. This review discusses the current methods available for evaluating technical skills in cataract surgery and the recent technological advances that have enabled the capture and analysis of large amounts of complex surgical data for more automated, objective skills assessment.
Collapse
Affiliation(s)
- Nouf Alnafisee
- From The Wilmer Eye Institute, Johns Hopkins University School of Medicine (Alnafisee, Zafar, Sikder), Baltimore, and the Department of Computer Science, Malone Center for Engineering in Healthcare, The Johns Hopkins University Whiting School of Engineering (Vedula), Baltimore, Maryland, USA
Collapse
23
Castillo-Segura P, Fernández-Panadero C, Alario-Hoyos C, Muñoz-Merino PJ, Delgado Kloos C. Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review. Artif Intell Med 2021; 112:102007. [PMID: 33581827 DOI: 10.1016/j.artmed.2020.102007] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 11/25/2020] [Accepted: 12/28/2020] [Indexed: 11/18/2022]
Abstract
The assessment of surgical technical skills to be acquired by novice surgeons has traditionally been done by an expert surgeon and is therefore of a subjective nature. Nevertheless, recent advances in IoT (Internet of Things), the possibility of incorporating sensors into objects and environments in order to collect large amounts of data, and progress in machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. 101 out of an initial list of 537 papers were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills, and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms. In particular: 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade, and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and to distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVM (Support Vector Machines) and neural networks are the preferred statistical methods and algorithms for processing the data collected, while new opportunities are opening up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators to a lexicon of words and visualizations, although there is considerable room for research on feedback and visualizations, taking, for example, ideas from learning analytics.
Collapse
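Three of the motion indicators named in this review (path length, number of sub-movements, smoothness) can be computed from tracked positions in a few lines. The sketch below uses common formulations (speed-peak counting for sub-movements, log dimensionless jerk for smoothness); definitions vary across the surveyed papers, so treat these as one reasonable choice rather than the review's canonical ones.

```python
# Sketch: common formulations of three motion indicators named above.
# pos: (n, 3) positions sampled at a fixed rate fs (Hz).
import numpy as np
from scipy.signal import find_peaks

def path_length(pos):
    return float(np.linalg.norm(np.diff(pos, axis=0), axis=1).sum())

def n_submovements(pos, fs):
    """Count peaks in the speed profile as a proxy for sub-movements."""
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) * fs
    peaks, _ = find_peaks(speed, prominence=speed.std())
    return int(peaks.size)

def log_dimensionless_jerk(pos, fs):
    """Smoothness: higher (less negative) values mean smoother motion."""
    dt = 1.0 / fs
    vel = np.gradient(pos, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    T = pos.shape[0] * dt
    dlj = (T ** 3 / speed.max() ** 2) * np.trapz((jerk ** 2).sum(axis=1), dx=dt)
    return float(-np.log(dlj))
```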
Affiliation(s)
- Pablo Castillo-Segura
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
- Carlos Alario-Hoyos
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
- Pedro J Muñoz-Merino
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
- Carlos Delgado Kloos
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
Collapse
24
Lefor AK, Harada K, Dosis A, Mitsuishi M. Motion analysis of the JHU-ISI Gesture and Skill Assessment Working Set using Robotics Video and Motion Assessment Software. Int J Comput Assist Radiol Surg 2020; 15:2017-2025. [PMID: 33025366 PMCID: PMC7671974 DOI: 10.1007/s11548-020-02259-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 09/04/2020] [Indexed: 12/01/2022]
Abstract
Purpose The JIGSAWS dataset is a fixed dataset of robot-assisted surgery kinematic data used to develop predictive models of skill. The purpose of this study is to analyze the relationships of self-defined skill level with global rating scale scores and kinematic data (time, path length, and movements) from three exercises (suturing, knot-tying, and needle passing; right and left hands) in the JIGSAWS dataset. Methods Global rating scale scores are reported in the JIGSAWS dataset, and kinematic data were calculated using ROVIMAS software. Self-defined skill levels (novice, intermediate, expert) are included in the dataset. Correlation coefficients (global rating scale-skill level and global rating scale-kinematic parameters) were calculated. Kinematic parameters were compared among skill levels. Results Global rating scale scores correlated with skill in the knot-tying exercise (r = 0.55, p = 0.0005). In the suturing exercise, time, path length (left), and movements (left) were significantly different (p < 0.05) for novices and experts. For knot-tying, time, path length (right and left), and movements (right) differed significantly for novices and experts. For needle passing, no kinematic parameter was significantly different comparing novices and experts. The only kinematic parameter that correlated with global rating scale scores was time in the knot-tying exercise. Conclusion Global rating scale scores weakly correlate with skill level and kinematic parameters. The ability of kinematic parameters to differentiate among self-defined skill levels is inconsistent. Additional data are needed to enhance the dataset and facilitate subset analyses and future model development.
Collapse
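An analysis of this shape (rank correlation of rating-scale scores against kinematic parameters, plus a comparison across skill levels) is straightforward to sketch. The arrays below are hypothetical placeholders, and Kruskal-Wallis is used here as a generic nonparametric group comparison, not necessarily the paper's exact test.

```python
# Sketch: correlating global rating scale (GRS) scores with a kinematic
# parameter and comparing it across skill levels. All values are
# hypothetical placeholders, not data from the study.
import numpy as np
from scipy.stats import spearmanr, kruskal

grs    = np.array([14, 18, 25, 11, 22, 28])      # GRS score per trial
time_s = np.array([210, 180, 95, 240, 120, 80])  # task time per trial (s)

rho, p = spearmanr(grs, time_s)
print(f"GRS vs time: rho={rho:.2f}, p={p:.4f}")

# Compare the kinematic parameter across self-defined skill levels.
novice, intermediate, expert = time_s[:2], time_s[2:4], time_s[4:]
h, p_kw = kruskal(novice, intermediate, expert)
print(f"Across skill levels: H={h:.2f}, p={p_kw:.4f}")
```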
Affiliation(s)
- Alan Kawarai Lefor
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan.
| | - Kanako Harada
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan.,Department of Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
| | | | - Mamoru Mitsuishi
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan.,Department of Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
| |
Collapse
25
Bidirectional long short-term memory for surgical skill classification of temporally segmented tasks. Int J Comput Assist Radiol Surg 2020; 15:2079-2088. [PMID: 33000365 DOI: 10.1007/s11548-020-02269-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2020] [Accepted: 09/23/2020] [Indexed: 10/23/2022]
Abstract
PURPOSE The majority of prior surgical skill research analyzes holistic, task-level summary metrics to classify the skill of a performance. Recent advances in machine learning allow time-series classification at the sub-task level, enabling predictions on segments of tasks, which could improve task-level technical skill assessment. METHODS A bidirectional long short-term memory (LSTM) network was used with 8-s windows of multidimensional time-series data from the Basic Laparoscopic Urologic Skills dataset. The network was trained on experts and novices from four common surgical tasks. Stratified cross-validation with regularization was used to avoid overfitting. The misclassified cases were re-submitted for surgical technical skill assessment to crowds using Amazon Mechanical Turk to re-evaluate and to analyze the level of agreement with previous scores. RESULTS Performance was best for the suturing task, with 96.88% accuracy at predicting whether a performance was expert or novice (1 misclassification) when compared to previously obtained crowd evaluations. When compared with expert surgeon ratings, the LSTM predictions resulted in a Spearman coefficient of 0.89 for suturing tasks. When crowds re-evaluated misclassified performances, for all 5 misclassified cases from the peg transfer and suturing tasks the crowds agreed more with our LSTM model than with the previously obtained crowd scores. CONCLUSION The technique presented yields results comparable to crowd-sourced labels of surgical tasks. However, these results raise questions about the reliability of crowd-sourced labels for videos of surgical tasks. We, as a research community, should scrutinize crowd labeling more closely, systematically examine its biases, and quantify label noise.
Collapse
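The windowed bidirectional-LSTM setup described above maps naturally onto a few lines of Keras. This is a minimal sketch: the 30 Hz sampling rate, 12 kinematic channels, binary expert/novice label per window, and all layer sizes are assumptions, not values from the paper.

```python
# Sketch: a bidirectional LSTM over fixed-length windows of multichannel
# time-series data. Shapes and hyperparameters are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WIN_LEN, N_CHANNELS = 240, 12   # e.g. an 8-s window at 30 Hz, 12 channels

model = keras.Sequential([
    layers.Input(shape=(WIN_LEN, N_CHANNELS)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),                    # regularization against overfitting
    layers.Dense(1, activation="sigmoid"),  # expert vs. novice per window
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: (n_windows, WIN_LEN, N_CHANNELS); y: 0 = novice, 1 = expert
X = np.random.randn(32, WIN_LEN, N_CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=32)
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
```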
26

27
Close MF, Mehta CH, Liu Y, Isaac MJ, Costello MS, Kulbarsh KD, Meyer TA. Subjective vs Computerized Assessment of Surgeon Skill Level During Mastoidectomy. Otolaryngol Head Neck Surg 2020; 163:1255-1257. [PMID: 32600121 DOI: 10.1177/0194599820933882] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This pilot study examines the use of surgical instrument tracking and motion analysis in objectively measuring surgical performance. Accuracy of objective measures in distinguishing between surgeons of different levels was compared to that of subjective assessments. Twenty-four intraoperative video clips of mastoidectomies performed by junior residents (n = 12), senior residents (n = 8), and faculty (n = 4) were sent to otolaryngology programs via survey, yielding 708 subjective ratings of surgical experience level. Tracking software captured the total distance traveled by the drill, suction irrigator, and patient's head. Measurements were used to predict surgeon level of training, and accuracy was estimated via area under the curve (AUC) of receiver operating characteristic curves. Key objective metrics proved more accurate than subjective evaluations in determining both faculty vs resident level and senior vs junior resident level. The findings of this study suggest that objective analysis using computer software has the potential to improve the accuracy of surgical skill assessment.
Collapse
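The study's core comparison (does an objective tracking metric separate training levels better than subjective ratings?) reduces to ROC analysis over per-clip measurements. A minimal sketch with made-up numbers:

```python
# Sketch: AUC of one objective metric (total drill path distance) for
# separating residents from faculty. All values are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

total_drill_distance = np.array([8.1, 9.4, 12.7, 15.2, 14.1, 16.8])  # arbitrary units
is_resident = np.array([0, 0, 1, 1, 1, 1])  # 0 = faculty, 1 = resident

# Less-experienced surgeons are expected to travel farther, so the raw
# metric serves directly as the ranking score.
auc = roc_auc_score(is_resident, total_drill_distance)
print(f"AUC, resident vs. faculty: {auc:.2f}")
```

Computing the same AUC for the mean subjective rating of each clip gives the head-to-head comparison the study reports.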
Affiliation(s)
- Michaela F Close
- Medical University of South Carolina, Charleston, South Carolina, USA
- Charmee H Mehta
- Medical University of South Carolina, Charleston, South Carolina, USA
- Yuan Liu
- Medical University of South Carolina, Charleston, South Carolina, USA
- Mitchell J Isaac
- Medical University of South Carolina, Charleston, South Carolina, USA
- Mark S Costello
- Medical University of South Carolina, Charleston, South Carolina, USA
- Kyle D Kulbarsh
- Medical University of South Carolina, Charleston, South Carolina, USA
- Ted A Meyer
- Medical University of South Carolina, Charleston, South Carolina, USA
Collapse
28
Zhang D, Wu Z, Chen J, Gao A, Chen X, Li P, Wang Z, Yang G, Lo B, Yang GZ. Automatic Microsurgical Skill Assessment Based on Cross-Domain Transfer Learning. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2989075] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
29
Anh NX, Nataraja RM, Chauhan S. Towards near real-time assessment of surgical skills: A comparison of feature extraction techniques. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105234. [PMID: 31794913 DOI: 10.1016/j.cmpb.2019.105234] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 10/31/2019] [Accepted: 11/18/2019] [Indexed: 05/22/2023]
Abstract
BACKGROUND AND OBJECTIVE Surgical skill assessment aims to objectively evaluate and provide constructive feedback for trainee surgeons. Conventional methods require direct observation and assessment by surgical experts, which is both unscalable and subjective. The recent involvement of surgical robotic systems in the operating room has made it possible to automatically evaluate the expertise level of trainees on certain representative maneuvers by applying machine learning to motion analysis. The feature extraction technique plays a critical role in such an automated surgical skill assessment system. METHODS We present a direct comparison of nine well-known feature extraction techniques (statistical features, principal component analysis, discrete Fourier/cosine transform, codebook, deep learning models, and auto-encoders) for automated surgical skills evaluation. Towards near real-time evaluation, we also investigate the effect of the time interval on classification accuracy and efficiency. RESULTS We validate the study on the benchmark JIGSAWS robotic surgical training dataset. Accuracies of 95.63%, 90.17%, and 90.26% with principal component analysis, and 96.84%, 92.75%, and 95.36% with a deep convolutional neural network, for suturing, knot tying, and needle passing, respectively, highlight the effectiveness of these two techniques in extracting the most discriminative features among different surgical skill levels. CONCLUSIONS This study contributes toward the development of an online, automated, and efficient surgical skills assessment technique.
Collapse
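Of the compared pipelines, the PCA variant is the simplest to show end to end: flatten each kinematic window, project it onto a handful of principal components, and classify. The sketch below uses random placeholder data and an SVM as a typical classifier; the deep-CNN variant would replace the PCA step with learned features.

```python
# Sketch: PCA feature extraction from flattened kinematic windows feeding
# a classifier. Data shapes and the classifier choice are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 300))  # 60 trials x flattened kinematic window
y = rng.integers(0, 3, size=60)     # 0 = novice, 1 = intermediate, 2 = expert

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```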
Affiliation(s)
- Nguyen Xuan Anh
- Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, Australia
- Ramesh Mark Nataraja
- Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia
- Sunita Chauhan
- Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, Australia.
Collapse
30
Khalid S, Goldenberg M, Grantcharov T, Taati B, Rudzicz F. Evaluation of Deep Learning Models for Identifying Surgical Actions and Measuring Performance. JAMA Netw Open 2020; 3:e201664. [PMID: 32227178 DOI: 10.1001/jamanetworkopen.2020.1664] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
IMPORTANCE When evaluating surgeons in the operating room, experienced physicians must rely on live or recorded video to assess the surgeon's technical performance, an approach prone to subjectivity and error. Owing to the large number of surgical procedures performed daily, it is infeasible to review every procedure; therefore, there is a tremendous loss of invaluable performance data that would otherwise be useful for improving surgical safety. OBJECTIVE To evaluate a framework for assessing surgical video clips by categorizing them based on the surgical step being performed and the level of the surgeon's competence. DESIGN, SETTING, AND PARTICIPANTS This quality improvement study assessed 103 video clips of 8 surgeons of various levels performing knot tying, suturing, and needle passing from the Johns Hopkins University-Intuitive Surgical Gesture and Skill Assessment Working Set. Data were collected before 2015, and data analysis took place from March to July 2019. MAIN OUTCOMES AND MEASURES Deep learning models were trained to estimate categorical outputs such as performance level (ie, novice, intermediate, and expert) and surgical actions (ie, knot tying, suturing, and needle passing). The efficacy of these models was measured using precision, recall, and model accuracy. RESULTS The proposed architectures classified surgical actions and performance levels using only video input. The embedding representation had a mean (root mean square error [RMSE]) precision of 1.00 (0) for suturing, 0.99 (0.01) for knot tying, and 0.91 (0.11) for needle passing, resulting in a mean (RMSE) precision of 0.97 (0.01). Its mean (RMSE) recall was 0.94 (0.08) for suturing, 1.00 (0) for knot tying, and 0.99 (0.01) for needle passing, resulting in a mean (RMSE) recall of 0.98 (0.01). It also estimated scores on the Objective Structured Assessment of Technical Skill Global Rating Scale categories, with a mean (RMSE) precision of 0.85 (0.09) for novice level, 0.67 (0.07) for intermediate level, and 0.79 (0.12) for expert level, resulting in a mean (RMSE) precision of 0.77 (0.04). Its mean (RMSE) recall was 0.85 (0.05) for novice level, 0.69 (0.14) for intermediate level, and 0.80 (0.13) for expert level, resulting in a mean (RMSE) recall of 0.78 (0.03). CONCLUSIONS AND RELEVANCE The proposed models and the accompanying results illustrate that deep machine learning can identify associations in surgical video clips. These are the first steps toward creating a feedback mechanism for surgeons that would allow them to learn from their experiences and refine their skills.
Collapse
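Per-class precision and recall of the kind tabulated above come directly from predicted-versus-true label pairs. A minimal sketch with placeholder labels:

```python
# Sketch: per-class precision/recall for surgical-action predictions.
# The label arrays are placeholders, not outputs of the study's models.
from sklearn.metrics import precision_recall_fscore_support

actions = ["knot_tying", "suturing", "needle_passing"]
y_true = ["suturing", "knot_tying", "suturing", "needle_passing", "knot_tying"]
y_pred = ["suturing", "knot_tying", "suturing", "suturing", "knot_tying"]

prec, rec, _, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=actions, zero_division=0)
for action, p, r in zip(actions, prec, rec):
    print(f"{action}: precision={p:.2f} recall={r:.2f}")
```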
Affiliation(s)
- Shuja Khalid
- Surgical Safety Technologies, Toronto, Ontario, Canada
- Babak Taati
- Surgical Safety Technologies, Toronto, Ontario, Canada
- Frank Rudzicz
- Surgical Safety Technologies, Toronto, Ontario, Canada
Collapse
31
Nguyen XA, Ljuhar D, Pacilli M, Nataraja RM, Chauhan S. Surgical skill levels: Classification and analysis using deep neural network model and motion signals. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 177:1-8. [PMID: 31319938 DOI: 10.1016/j.cmpb.2019.05.008] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/06/2019] [Revised: 04/11/2019] [Accepted: 05/11/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVES Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent, and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skills evaluation system. METHODS We propose to use a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skills tasks. JIGSAWS, a publicly available robot-based surgical training dataset, was used to evaluate the generalization of our deep network model. 15 participants (4 experts, 4 intermediates, and 7 novices) were recruited into the study. RESULTS The proposed deep model achieved an accuracy of 98.2%. On JIGSAWS, our method outperformed some existing approaches, with accuracies of 98.4%, 98.4%, and 94.7% for suturing, needle passing, and knot tying, respectively. The experimental results demonstrate the applicability of this method in both open surgery and robot-assisted minimally invasive surgery. CONCLUSIONS This study demonstrated the potential of the proposed deep network model to learn the discriminative features between different surgical skill levels.
Collapse
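A model that consumes raw IMU signals with minimal preprocessing is typically a 1-D convolutional (or recurrent) stack over (time, channel) arrays. The sketch below assumes six sensor channels (3-axis accelerometer plus 3-axis gyroscope) and three output classes; the layer sizes are illustrative, not the published architecture.

```python
# Sketch: a small 1-D convolutional network over raw IMU motion signals.
# Channel count, sequence length, and layer sizes are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, N_SENSOR_CH = 500, 6  # e.g. 3-axis accelerometer + 3-axis gyroscope

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN, N_SENSOR_CH)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(3, activation="softmax"),  # novice / intermediate / expert
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```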
Affiliation(s)
- Xuan Anh Nguyen
- Department of Mechanical and Aerospace Engineering, Monash University, Clayton, Victoria, 3800, Australia
- Damir Ljuhar
- Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia; Department of Paediatrics, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Maurizio Pacilli
- Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia; Department of Paediatrics, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Ramesh Mark Nataraja
- Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia; Department of Paediatrics, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Sunita Chauhan
- Department of Mechanical and Aerospace Engineering, Monash University, Clayton, Victoria, 3800, Australia.
Collapse
32
Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks. Int J Comput Assist Radiol Surg 2019; 14:1611-1617. [DOI: 10.1007/s11548-019-02039-4] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2019] [Accepted: 07/22/2019] [Indexed: 10/26/2022]
33
Ismail Fawaz H, Forestier G, Weber J, Petitjean F, Idoumghar L, Muller PA. Automatic Alignment of Surgical Videos Using Kinematic Data. Artif Intell Med 2019. [DOI: 10.1007/978-3-030-21642-9_14] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
34
Teije AT, Popow C, Holmes JH, Sacchi L. Preface: AIME 2017. Artif Intell Med 2018; 91:1-2. [PMID: 30409394 DOI: 10.1016/j.artmed.2018.10.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]