1. Goldenberg MG. Surgical Artificial Intelligence in Urology: Educational Applications. Urol Clin North Am 2024; 51:105-115. [PMID: 37945096] [DOI: 10.1016/j.ucl.2023.06.003]
Abstract
Surgical education has seen immense change recently. Increased demand for iterative evaluation of trainees from medical school to independent practice has led to the generation of an overwhelming amount of data related to an individual's competency. Artificial intelligence has been proposed as a solution to automate and standardize the ability of stakeholders to assess the technical and nontechnical abilities of a surgical trainee. In both the simulation and clinical environments, evidence supports the use of machine learning algorithms to both evaluate trainee skill and provide real-time and automated feedback, enabling a shortened learning curve for many key procedural skills and ensuring patient safety.
Affiliation(s)
- Mitchell G Goldenberg
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90033, USA.
2. Kil I, Eidt JF, Groff RE, Singapogu RB. Assessment of open surgery suturing skill: Simulator platform, force-based, and motion-based metrics. Front Med (Lausanne) 2022; 9:897219. [PMID: 36111107] [PMCID: PMC9468321] [DOI: 10.3389/fmed.2022.897219]
Abstract
Objective This paper focuses on simulator-based assessment of open surgery suturing skill. We introduce a new surgical simulator designed to collect synchronized force, motion, video and touch data during a radial suturing task adapted from the Fundamentals of Vascular Surgery (FVS) skill assessment. The synchronized data are analyzed to extract objective metrics for suturing skill assessment. Methods The simulator has a camera positioned underneath the suturing membrane, enabling visual tracking of the needle during suturing. Needle tracking data enable extraction of meaningful metrics related to both the process and the product of the suturing task. To better simulate surgical conditions, the height of the system and the depth of the membrane are both adjustable. Metrics for assessment of suturing skill based on force/torque, motion, and physical contact are presented. Experimental data are presented from a study comparing attending surgeons and surgery residents. Results Analysis shows that force metrics (absolute maximum force/torque in the z-direction), motion metrics (yaw, pitch, roll), the physical contact metric, and image-enabled force metrics (orthogonal and tangential forces) are statistically significant in differentiating suturing skill between attendings and residents. Conclusion and significance The results suggest that this simulator and accompanying metrics could serve as a useful tool for assessing and teaching open surgery suturing skill.
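The image-enabled force decomposition described above can be sketched in a few lines: given a needle direction recovered from the camera, the measured force vector splits into a tangential component along the needle and an orthogonal remainder. This is a minimal NumPy illustration, not the authors' implementation; the function names and toy inputs are hypothetical.

```python
import numpy as np

def decompose_force(force, needle_dir):
    """Split a measured force vector into a component tangential to the
    needle direction and the remaining orthogonal component."""
    u = np.asarray(needle_dir, dtype=float)
    u = u / np.linalg.norm(u)            # unit vector along the needle
    f = np.asarray(force, dtype=float)
    f_tan = np.dot(f, u) * u             # projection onto the needle axis
    f_orth = f - f_tan                   # what pushes sideways on tissue
    return f_tan, f_orth

def peak_abs_z(force_series):
    """Absolute maximum z-axis force (or torque) over a trial."""
    return float(np.max(np.abs(np.asarray(force_series, dtype=float)[:, 2])))
```

For example, a force of [1, 0, 1] with the needle along the x-axis splits into a tangential part [1, 0, 0] and an orthogonal part [0, 0, 1].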
Affiliation(s)
- Irfan Kil
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
- John F. Eidt
- Division of Vascular Surgery, Baylor Scott & White Heart and Vascular Hospital, Dallas, TX, United States
- Richard E. Groff
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
- Ravikiran B. Singapogu
- Department of Bioengineering, Clemson University, Clemson, SC, United States
- *Correspondence: Ravikiran B. Singapogu
3. Hutchinson K, Li Z, Cantrell LA, Schenkman NS, Alemzadeh H. Analysis of executional and procedural errors in dry-lab robotic surgery experiments. Int J Med Robot 2022; 18:e2375. [PMID: 35114732] [PMCID: PMC9285717] [DOI: 10.1002/rcs.2375]
Abstract
Background Analysing kinematic and video data can help identify potentially erroneous motions that lead to sub-optimal surgeon performance and safety-critical events in robot-assisted surgery. Methods We develop a rubric for identifying task- and gesture-specific executional and procedural errors and evaluate dry-lab demonstrations of suturing and needle passing tasks from the JIGSAWS dataset. We characterise erroneous parts of demonstrations by labelling video data, and use distribution similarity analysis and trajectory averaging on kinematic data to identify parameters that distinguish erroneous gestures. Results Executional error frequency varies by task and gesture, and correlates with skill level. Some predominant error modes in each gesture are distinguishable by analysing error-specific kinematic parameters. Procedural errors could lead to lower performance scores and increased demonstration times but also depend on surgical style. Conclusions This study provides insights into context-dependent errors that can be used to design automated error detection mechanisms and improve training and skill assessment.
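The distribution similarity analysis mentioned above can be illustrated with a two-sample Kolmogorov-Smirnov statistic, one common way to quantify how far apart the kinematic-parameter distributions of erroneous and normal gestures lie. The paper's exact measure is not specified here, so this NumPy sketch is an assumed stand-in.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of samples a and b (0 = identical, 1 = disjoint)."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    grid = np.concatenate([a, b])                      # all observed values
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

Applied to, say, peak gripper velocities of error-labelled versus clean gestures, a large statistic flags that parameter as one that distinguishes erroneous executions.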
Affiliation(s)
- Kay Hutchinson
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
- Zongyu Li
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
- Leigh A. Cantrell
- Department of Obstetrics and Gynecology, University of Virginia, Charlottesville, Virginia, USA
- Noah S. Schenkman
- Department of Urology, University of Virginia, Charlottesville, Virginia, USA
- Homa Alemzadeh
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
4. Yan Y, Zhuang N, Ni B, Zhang J, Xu M, Zhang Q, Zhang Z, Cheng S, Tian Q, Xu Y, Yang X, Zhang W. Fine-Grained Video Captioning via Graph-based Multi-Granularity Interaction Learning. IEEE Trans Pattern Anal Mach Intell 2022; 44:666-683. [PMID: 31613750] [DOI: 10.1109/tpami.2019.2946823]
Abstract
Learning to generate continuous, detailed linguistic descriptions for multi-subject interactive videos has particular applications in team-sports auto-narrative. In contrast to traditional video captioning, this task is more challenging because it requires simultaneously modeling fine-grained individual actions, uncovering the spatio-temporal dependency structures of frequent group interactions, and accurately mapping these complex interaction details into long, detailed commentary. To address these challenges explicitly, we propose a novel framework, Graph-based Learning for Multi-Granularity Interaction Representation (GLMGIR), for the fine-grained team-sports auto-narrative task. A multi-granular interaction modeling module extracts interactive actions among subjects in a progressive way, encoding both intra- and inter-team interactions. Based on these multi-granular representations, a multi-granular attention module considers action/event descriptions at multiple spatio-temporal resolutions. Both modules are integrated seamlessly and work collaboratively to generate the final narrative. To facilitate reproducible research, we also collected a new video dataset from YouTube.com, the Sports Video Narrative dataset (SVN), containing 6K team-sports videos (NBA basketball games) with 10K ground-truth narratives (sentences). Furthermore, because previous metrics such as METEOR (used in coarse-grained video captioning) do not cope well with the fine-grained sports-narrative task, we developed a new evaluation metric, Fine-grained Captioning Evaluation (FCE), which measures how accurately the generated linguistic description reflects fine-grained action details as well as the overall spatio-temporal interactional structure. Extensive experiments on the SVN dataset demonstrate the effectiveness of the proposed framework for fine-grained team-sports video auto-narrative.
5. Yan J, Huang K, Lindgren K, Bonaci T, Chizeck HJ. Continuous Operator Authentication for Teleoperated Systems Using Hidden Markov Models. ACM Trans Cyber-Phys Syst 2022. [DOI: 10.1145/3488901]
Abstract
In this article, we present a novel approach for continuous operator authentication in teleoperated robotic processes based on Hidden Markov Models (HMM). While HMMs were originally developed and widely used in speech recognition, they have shown great performance in human motion and activity modeling. We make an analogy between human language and teleoperated robotic processes (i.e., words are analogous to a teleoperator’s gestures, sentences are analogous to the entire teleoperated task or process) and implement HMMs to model the teleoperated task. To test the continuous authentication performance of the proposed method, we conducted two sets of analyses. We built a virtual reality (VR) experimental environment using a commodity VR headset (HTC Vive) and haptic feedback enabled controller (Sensable PHANToM Omni) to simulate a real teleoperated task. An experimental study with 10 subjects was then conducted. We also performed simulated continuous operator authentication by using the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). The performance of the model was evaluated based on the continuous (real-time) operator authentication accuracy as well as resistance to a simulated impersonation attack. The results suggest that the proposed method is able to achieve 70% (VR experiment) and 81% (JIGSAWS dataset) continuous classification accuracy with as short as a 1-second sample window. It is also capable of detecting an impersonation attack in real-time.
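The speech analogy above (gestures as words, tasks as sentences) reduces, at scoring time, to asking which operator's HMM assigns the observed gesture sequence the highest likelihood. Below is a minimal sketch using the forward algorithm on discrete toy models; the two-state models and gesture alphabet are invented for illustration and are not the paper's trained parameters.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (scaled to avoid underflow)."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # propagate, then emit
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Toy two-state models for two hypothetical operators: operator A tends
# to produce gesture 0, operator B tends to produce gesture 1.
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit_A = np.array([[0.9, 0.1], [0.6, 0.4]])
emit_B = np.array([[0.1, 0.9], [0.4, 0.6]])

seq = [0, 0, 1, 0, 0, 0]                 # mostly gesture 0
score_A = forward_loglik(seq, start, trans, emit_A)
score_B = forward_loglik(seq, start, trans, emit_B)
```

Continuous authentication then amounts to re-scoring a sliding window of recent gestures against the logged-in operator's model and flagging a possible impersonation when a rival model scores consistently higher.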
6. Application of Design Structure Matrix to Simulate Surgical Procedures and Predict Surgery Duration. Minim Invasive Surg 2021; 2021:6340754. [PMID: 34912579] [PMCID: PMC8668307] [DOI: 10.1155/2021/6340754]
Abstract
Background The complexities of surgery require an efficient and explicit method to evaluate and standardize surgical procedures. A reliable surgical evaluation tool will be able to serve various purposes such as development of surgery training programs and improvement of surgical skills. Objectives (a) To develop a modeling framework based on integration of dexterity analysis and design structure matrix (DSM), to be generally applicable to predict total duration of a surgical procedure, and (b) to validate the model by comparing its results with laparoscopic cholecystectomy surgery protocol. Method A modeling framework is developed through DSM, a tool used in engineering design, systems engineering and management, to hierarchically decompose and describe relationships among individual surgical activities. Individual decomposed activities are assumed to have uncertain parameters so that a rework probability is introduced. The simulation produces a distribution of the duration of the modeled procedure. A statistical approach is then taken to evaluate surgery duration through integrated numerical parameters. The modeling framework is applied for the first time to analyze a surgery; laparoscopic cholecystectomy, a common surgical procedure, is selected for the analysis. Results The present simulation model is validated by comparing its results of predicted surgery duration with the standard laparoscopic cholecystectomy protocols from the Atlas of Minimally Invasive Surgery with 2.5% error and that from the Atlas of Pediatric Laparoscopy and Thoracoscopy with 4% error. Conclusion The present model, developed based on dexterity analysis and DSM, demonstrates a validated capability of predicting laparoscopic cholecystectomy surgery duration. Future studies will explore its potential applications to other surgery procedures and in improving surgeons' performance and training novices.
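The rework-probability idea above can be illustrated with a tiny Monte Carlo sketch: each decomposed activity has a nominal duration and a probability of being repeated, and simulation yields a distribution of total procedure time. The activity list and probabilities below are hypothetical, and the repeat-the-whole-activity rework rule is a simplification of the full DSM iteration logic.

```python
import random

def simulate_duration(activities, n_runs=10000, seed=0):
    """Monte Carlo estimate of mean total procedure duration.
    Each activity is (nominal_minutes, rework_probability); on rework
    the activity is repeated, possibly more than once."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        total = 0.0
        for minutes, p_rework in activities:
            total += minutes
            while rng.random() < p_rework:   # geometric number of reworks
                total += minutes
        totals.append(total)
    return sum(totals) / len(totals)

# Hypothetical decomposition of a laparoscopic task into three activities.
steps = [(10.0, 0.1), (25.0, 0.2), (15.0, 0.05)]
mean_minutes = simulate_duration(steps)
```

With these numbers the analytic expectation is 10/0.9 + 25/0.8 + 15/0.95, roughly 58 minutes, so rework inflates the 50-minute nominal sum by about 16%.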
7. Qin Y, Allan M, Burdick JW, Azizian M. Autonomous Hierarchical Surgical State Estimation During Robot-Assisted Surgery Through Deep Neural Networks. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3091728]
8. Endowing Robots with Longer-term Autonomy by Recovering from External Disturbances in Manipulation Through Grounded Anomaly Classification and Recovery Policies. J Intell Robot Syst 2021. [DOI: 10.1007/s10846-021-01312-6]
Abstract
Robots are poised to interact with humans in unstructured environments. Despite increasingly robust control algorithms, failure modes arise whenever the underlying dynamics are poorly modeled, especially in unstructured environments. We contribute a set of recovery policies to deal with anomalies produced by external disturbances. The recoveries work when various types of anomalies are triggered any number of times at any point in the task, including during already-running recoveries. Our recovery critic stands atop a tightly integrated, graph-based online motion-generation and introspection system. Policies, skills, and introspection models are learned incrementally and contextually over time. Recoveries are studied via a collaborative kitting task in which a wide range of anomalous conditions are experienced. We also contribute an extensive analysis of the performance of the tightly integrated anomaly identification, classification, and recovery system under extreme anomalous conditions. We show how the integration of such a system achieves performance greater than the sum of its parts.
9. Visual Intelligence: Prediction of Unintentional Surgical-Tool-Induced Bleeding during Robotic and Laparoscopic Surgery. Robotics 2021. [DOI: 10.3390/robotics10010037]
Abstract
Unintentional vascular damage can result from a surgical instrument’s abrupt movements during minimally invasive surgery (laparoscopic or robotic). A novel real-time image processing algorithm based on local entropy is proposed that can detect abrupt movements of surgical instruments and predict bleeding occurrence. The uniform nature of the texture of surgical tools is utilized to segment the tools from the background. By comparing changes in entropy over time, the algorithm determines when the surgical instruments are moved abruptly. We tested the algorithm using 17 videos of minimally invasive surgery, 11 of which had tool-induced bleeding. Our preliminary testing shows that the algorithm is 88% accurate and 90% precise in predicting bleeding. The average advance warning time for the 11 videos is 0.662 s, with the standard deviation being 0.427 s. The proposed approach has the potential to eventually lead to a surgical early warning system or even proactively attenuate tool movement (for robotic surgery) to avoid dangerous surgical outcomes.
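The core signal here, local entropy, can be computed directly from grey-level patch histograms: uniform tool pixels score near zero while textured tissue scores higher, which is what enables the tool/background separation. This is a slow reference sketch in NumPy (the paper's real-time algorithm is necessarily more optimized), and the patch and bin sizes are arbitrary.

```python
import numpy as np

def local_entropy(img, patch=3, bins=8):
    """Shannon entropy of the grey-level histogram in a square patch
    around each interior pixel of an image with values in [0, 1].
    Uniform regions (e.g. a metal tool shaft) score near 0."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = patch // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1]
            hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
            p = hist[hist > 0] / hist.sum()        # nonzero bin frequencies
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```

Comparing such entropy maps across successive frames gives the temporal change signal the algorithm thresholds to flag abrupt instrument motion.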
10. Loukas C, Gazis A, Kanakis MA. Surgical Performance Analysis and Classification Based on Video Annotation of Laparoscopic Tasks. JSLS 2020; 24:JSLS.2020.00057. [PMID: 33144823] [PMCID: PMC7592956] [DOI: 10.4293/jsls.2020.00057]
Abstract
Background and Objectives Current approaches in surgical skills assessment employ virtual reality simulators, motion sensors, and task-specific checklists. Although accurate, these methods may be complex in the interpretation of the generated measures of performance. The aim of this study is to propose an alternative methodology for skills assessment and classification, based on video annotation of laparoscopic tasks. Methods Two groups of 32 trainees (students and residents) performed two laparoscopic tasks: peg transfer (PT) and knot tying (KT). Each task was annotated via a video analysis software based on a vocabulary of eight surgical gestures (surgemes) that denote the elementary gestures required to perform a task. The extracted metrics included duration/counts of each surgeme, penalty events, and counts of sequential surgemes (transitions). Our analysis focused on trainees' skill level comparison and classification using a nearest neighbor approach. The classification was assessed via accuracy, sensitivity, and specificity. Results For PT, almost all metrics showed significant performance difference between the two groups (p < 0.001). Residents were able to complete the task with fewer, shorter surgemes and fewer penalty events. Moreover, residents performed significantly fewer transitions (p < 0.05). For KT, residents performed two surgemes in significantly shorter time (p < 0.05). The metrics derived from the video annotations were also able to recognize the trainees' skill level with 0.71 - 0.86 accuracy, 0.80 - 1.00 sensitivity, and 0.60 - 0.80 specificity. Conclusion The proposed technique provides a tool for skills assessment and experience classification of surgical trainees, as well as an intuitive way for describing what and how surgemes are performed.
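The nearest neighbor approach above amounts to labeling a new trainee with the skill level of the most similar annotated performance. A one-nearest-neighbor sketch over hypothetical surgeme-derived features (counts, task time, penalties; not the paper's actual feature vector) looks like this:

```python
import numpy as np

def nearest_neighbor_label(train_X, train_y, x):
    """1-NN classifier: return the label of the training feature vector
    closest to x in Euclidean distance."""
    d = np.linalg.norm(np.asarray(train_X, dtype=float)
                       - np.asarray(x, dtype=float), axis=1)
    return train_y[int(np.argmin(d))]

# Hypothetical features per trainee: [total surgeme count, task time (s),
# penalty events]. Residents tend to need fewer, shorter surgemes.
X = [[42, 180, 1], [40, 170, 0], [70, 320, 4], [68, 300, 5]]
y = ["resident", "resident", "student", "student"]
label = nearest_neighbor_label(X, y, [44, 190, 1])
```

In practice the features would be standardized first so that task time (hundreds of seconds) does not dominate surgeme counts and penalties.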
Affiliation(s)
- Constantinos Loukas
- Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Athanasios Gazis
- Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Meletios A Kanakis
- Department of Pediatric and Congenital Heart Surgery, Onassis Heart Surgery Centre, Athens, Greece
11. Motion analysis for better understanding of psychomotor skills in laparoscopy: objective assessment-based simulation training using animal organs. Surg Endosc 2020; 35:4399-4416. [PMID: 32909201] [PMCID: PMC8263434] [DOI: 10.1007/s00464-020-07940-7]
Abstract
Background Our aim was to characterize the motions of multiple laparoscopic surgical instruments among participants with different levels of surgical experience in a series of wet-lab training drills, in which participants performed a range of surgical procedures including grasping tissue, tissue traction and dissection, applying a Hem-o-lok clip, and suturing/knotting, and to digitize the level of surgical competency. Methods Participants performed tissue dissection around the aorta, dividing encountered vessels after applying a Hem-o-lok clip (Task 1), and renal parenchymal closure (Task 2: suturing, Task 3: suturing and knot-tying), using swine cadaveric organs placed in a box trainer under a motion capture (Mocap) system. Motion-related metrics were compared according to participants' level of surgical experience (experts: ≥50 laparoscopic surgeries, intermediates: 10-49, novices: 0-9) using the Kruskal-Wallis test, and significant metrics were subjected to principal component analysis (PCA). Results A total of 15 experts, 12 intermediates, and 18 novices participated in the training. In Task 1, a shorter path length and faster velocity/acceleration/jerk were observed for both the scissors and the Hem-o-lok applier in the experts, and Hem-o-lok-related metrics contributed markedly to the first principal component on PCA, followed by scissors-related metrics. Higher-level skills, including a shorter path length and faster velocity, were also observed in both hands of the experts in Tasks 2 and 3. Sub-analysis showed that, among experts with ≥100 cases, the scissors moved more frequently in the close zone (0 to <2.0 cm from the aorta) than among those with 50-99 cases. Conclusion Our novel Mocap system recognized significant differences in several metrics across multiple instruments according to the level of surgical experience. "Applying a Hem-o-lok clip on a pedicle" strongly reflected the level of surgical experience, and zone metrics may be a promising tool to assess surgical expertise. Our next challenge is to give completely objective on-site feedback to trainees in the wet lab.
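The motion-related metrics compared above (path length, velocity, jerk) all derive from the sampled instrument-tip trajectory. Here is a minimal NumPy sketch, assuming a fixed sampling interval and simple finite differences rather than the Mocap system's actual processing pipeline:

```python
import numpy as np

def motion_metrics(positions, dt):
    """Path length, mean speed, and mean absolute jerk from a sampled 3-D
    instrument-tip trajectory (positions: N x 3 array, dt in seconds)."""
    p = np.asarray(positions, dtype=float)
    steps = np.diff(p, axis=0)                       # displacement per sample
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    vel = steps / dt                                 # first derivative
    speed = np.linalg.norm(vel, axis=1)
    acc = np.diff(vel, axis=0) / dt                  # second derivative
    jerk = np.diff(acc, axis=0) / dt                 # third derivative
    mean_jerk = float(np.linalg.norm(jerk, axis=1).mean()) if len(jerk) else 0.0
    return path_length, float(speed.mean()), mean_jerk
```

A smooth, economical expert motion shows up as a short path length with low jerk; hesitant novice motion inflates both.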
12. Mohamadipanah H, Perrone KH, Peterson K, Nathwani J, Huang F, Garren A, Garren M, Witt A, Pugh C. Sensors and Psychomotor Metrics: A Unique Opportunity to Close the Gap on Surgical Processes and Outcomes. ACS Biomater Sci Eng 2020; 6:2630-2640. [DOI: 10.1021/acsbiomaterials.9b01019]
Affiliation(s)
- Hossein Mohamadipanah
- Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, California 94305, United States
- Kenneth H. Perrone
- Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, California 94305, United States
- Katherine Peterson
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53726, United States
- Jay Nathwani
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53726, United States
- Felix Huang
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, 710 North Lake Shore Drive, #1022, Chicago, Illinois 60611, United States
- Anna Garren
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53726, United States
- Margaret Garren
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53726, United States
- Anna Witt
- Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, California 94305, United States
- Carla Pugh
- Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, California 94305, United States
13. Azari DP, Hu YH, Miller BL, Le BV, Radwin RG. Using Surgeon Hand Motions to Predict Surgical Maneuvers. Hum Factors 2019; 61:1326-1339. [PMID: 31013463] [DOI: 10.1177/0018720819838901]
Abstract
OBJECTIVE This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers every 2 s (60 frames) of video. RESULTS Random forest predictions of surgical video correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training of models across all users improved prediction accuracy by 10% compared with a random selection of participants. APPLICATION Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with similar accuracy as robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
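The hidden Markov model adjustment reported above can be pictured as smoothing the classifier's per-segment class probabilities with sticky state transitions, so an isolated misclassified segment is overridden by its context. The Viterbi sketch below uses invented transition values and toy probabilities; it is not the study's fitted model.

```python
import numpy as np

def viterbi(prob_frames, trans, start):
    """Most likely state path given per-segment class probabilities
    (used as emission likelihoods) and a state transition matrix."""
    logp = np.log(start) + np.log(prob_frames[0])
    back = []
    for frame in prob_frames[1:]:
        cand = logp[:, None] + np.log(trans)     # score of each state jump
        back.append(np.argmax(cand, axis=0))
        logp = np.max(cand, axis=0) + np.log(frame)
    path = [int(np.argmax(logp))]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

# States: 0 = suturing, 1 = tying, 2 = transition. Sticky transitions
# (values hypothetical) penalize single-segment flickers.
trans = np.full((3, 3), 0.05) + np.eye(3) * 0.85   # diag 0.90, off-diag 0.05
start = np.array([1 / 3, 1 / 3, 1 / 3])
# Per-2-second-segment class probabilities from a hypothetical classifier;
# the lone spike toward "tying" in the middle is noise.
probs = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.3, 0.6, 0.1],   # noisy segment
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
])
smoothed = viterbi(probs, trans, start)
```

A frame-by-frame argmax would flip to "tying" on the noisy segment, while the smoothed path stays in "suturing" throughout, which is the kind of correction that lifted accuracy from 74% to 79% in the study.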
Affiliation(s)
- Yu Hen Hu
- University of Wisconsin-Madison, USA
14. Zahedi E, Khosravian F, Wang W, Armand M, Dargahi J, Zadeh M. Towards Skill Transfer via Learning-Based Guidance in Human-Robot Interaction: An Application to Orthopaedic Surgical Drilling Skill. J Intell Robot Syst 2019. [DOI: 10.1007/s10846-019-01082-2]
15. A recurrent convolutional neural network approach for sensorless force estimation in robotic surgery. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.01.011]
16. Dias RD, Gupta A, Yule SJ. Using Machine Learning to Assess Physician Competence: A Systematic Review. Acad Med 2019; 94:427-439. [PMID: 30113364] [DOI: 10.1097/acm.0000000000002414]
Abstract
PURPOSE To identify the different machine learning (ML) techniques that have been applied to automate physician competence assessment and evaluate how these techniques can be used to assess different competence domains in several medical specialties. METHOD In May 2017, MEDLINE, EMBASE, PsycINFO, Web of Science, ACM Digital Library, IEEE Xplore Digital Library, PROSPERO, and Cochrane Database of Systematic Reviews were searched for articles published from inception to April 30, 2017. Studies were included if they applied at least one ML technique to assess medical students', residents', fellows', or attending physicians' competence. Information on sample size, participants, study setting and design, medical specialty, ML techniques, competence domains, outcomes, and methodological quality was extracted. MERSQI was used to evaluate quality, and a qualitative narrative synthesis of the medical specialties, ML techniques, and competence domains was conducted. RESULTS Of 4,953 initial articles, 69 met inclusion criteria. General surgery (24; 34.8%) and radiology (15; 21.7%) were the most studied specialties; natural language processing (24; 34.8%), support vector machine (15; 21.7%), and hidden Markov models (14; 20.3%) were the ML techniques most often applied; and patient care (63; 91.3%) and medical knowledge (45; 65.2%) were the most assessed competence domains. CONCLUSIONS A growing number of studies have attempted to apply ML techniques to physician competence assessment. Although many studies have investigated the feasibility of certain techniques, more validation research is needed. The use of ML techniques may have the potential to integrate and analyze pragmatic information that could be used in real-time assessments and interventions.
Affiliation(s)
- Roger D Dias
- R.D. Dias is instructor in emergency medicine, Department of Emergency Medicine and STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts; ORCID: http://orcid.org/0000-0003-4959-5052. A. Gupta is research scientist, Center for Surgery and Public Health, Brigham and Women's Hospital, Boston, Massachusetts. S.J. Yule is associate professor of surgery, Harvard Medical School, and faculty, Department of Surgery and STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts
17. Performance Assessment. In: Comprehensive Healthcare Simulation: Surgery and Surgical Subspecialties; 2019. [DOI: 10.1007/978-3-319-98276-2_9]
18. Miura S, Kawamura K, Kobayashi Y, Fujie MG. Using Brain Activation to Evaluate Arrangements Aiding Hand-Eye Coordination in Surgical Robot Systems. IEEE Trans Biomed Eng 2018; 66:2352-2361. [PMID: 30582521] [DOI: 10.1109/tbme.2018.2889316]
Abstract
GOAL To realize intuitive, minimally invasive surgery, surgical robots are often controlled using master-slave systems. However, the surgical robot's structure often differs from that of the human body, so the arrangement between the monitor and master must reflect this physical difference. In this study, we validate the feasibility of an embodiment evaluation method that determines the arrangement between the monitor and master. In our constructed cognitive model, the brain's intraparietal sulcus activates significantly when somatic and visual feedback match. Using this model, we validate a cognitively appropriate arrangement between the monitor and master. METHODS In experiments, we measure participants' brain activation using an imaging device as they control the virtual surgical simulator. Two experiments are carried out that vary the monitor and hand positions. CONCLUSION There are two common arrangements of the monitor and master at the brain activation's peak: One is placing the monitor behind the master, so the user feels that the system is an extension of his arms into the monitor; the other arranges the monitor in front of the master, so the user feels the correspondence between his own arm and the virtual arm in the monitor. SIGNIFICANCE From these results, we conclude that the arrangement between the monitor and master impacts embodiment, enabling the participant to feel apparent posture matches in master-slave surgical robot systems.
19. Jackson RC, Yuan R, Chow DL, Newman W, Çavuşoğlu MC. Real-Time Visual Tracking of Dynamic Surgical Suture Threads. IEEE Trans Autom Sci Eng 2018; 15:1078-1090. [PMID: 29988978] [PMCID: PMC6034738] [DOI: 10.1109/tase.2017.2726689]
Abstract
In order to realize many of the potential benefits associated with robotically assisted minimally invasive surgery, the robot must be more than a remote controlled device. Currently, using a surgical robot can be challenging, fatiguing, and time consuming. Teaching the robot to actively assist surgical tasks, such as suturing, has the potential to vastly improve both patient outlook and the surgeon's efficiency. One obstacle to completing surgical sutures autonomously is the difficulty in tracking surgical suture threads. This paper presents novel stereo image processing algorithms for the detection, initialization, and tracking of a surgical suture thread. A Non-Uniform Rational B-Spline (NURBS) curve is used to model a thin, deformable thread of dynamic length. The NURBS model is initialized and grown from a single selected point located on the thread. The NURBS curve is optimized by minimizing the image matching energy between the projected stereo NURBS image and the segmented thread image. The algorithms are evaluated using suture threads, a calibrated test pattern, and a simulated thread image. Additionally, the accuracy of the algorithms presented is validated as they track a suture thread undergoing translation, deformation, and apparent length changes. All of the tracking is in real time. Note to Practitioners: The problem of tracking a surgical suture thread was addressed in this work. Since the suture thread is highly deformable, any tracking algorithm must be robust to intersections, occlusions, knot tying, and length changes. The detection algorithm introduced in this paper is capable of distinguishing different threads when they intersect. The tracking algorithm presented here demonstrates that it is possible, using polynomial curves, to track a suture thread as it deforms, becomes occluded, changes length, and even ties a knot in real time.
The detection algorithm can enhance thin directional features, while the polynomial curve model can track any string-like structure. Further integration of the polynomial curve with a feed-forward thread model could improve the stability and robustness of the thread tracking.
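The NURBS curve at the heart of the thread model can be made concrete with a small sketch. The following is a minimal, self-contained illustration of rational B-spline evaluation via the Cox-de Boor recursion, not the authors' implementation; the control points, weights, and clamped knot vector are hypothetical.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion; terms with zero-width knot spans are treated as 0."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (u - knots[i]) / d1 * bspline_basis(i, p - 1, u, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - u) / d2 * bspline_basis(i + 1, p - 1, u, knots)
    return left + right


def nurbs_point(u, ctrl, weights, knots, degree=3):
    """Evaluate a 2-D NURBS curve: weight-blended average of control points."""
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        num_x += b * x
        num_y += b * y
        den += b
    return (num_x / den, num_y / den)


# Hypothetical control polygon approximating a short thread segment.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
weights = [1.0, 1.0, 1.0, 1.0]
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]  # clamped cubic knot vector
start = nurbs_point(0.0, ctrl, weights, knots)  # → (0.0, 0.0), the first control point
mid = nurbs_point(0.5, ctrl, weights, knots)    # → (1.5, 1.5)
```

In the paper, the control points of such a curve are adjusted to minimize an image-matching energy against the segmented thread; the evaluation step above is what makes that optimization loop possible.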
Collapse
Affiliation(s)
- Russell C Jackson
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
| | - Rick Yuan
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
| | - Der-Lin Chow
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
| | - Wyatt Newman
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
| | - M Cenk Çavuşoğlu
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
| |
Collapse
|
20
|
Hu Y, Jiang B, Kim H, Schroen AT, Smith PW, Rasmussen SK. Vessel Ligation Fundamentals: A Comparison of Technical Evaluations by Crowdsourced Nonclinical Personnel and Surgical Faculty. JOURNAL OF SURGICAL EDUCATION 2018; 75:664-670. [PMID: 29249640 DOI: 10.1016/j.jsurg.2017.09.030] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2017] [Revised: 09/26/2017] [Accepted: 09/29/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND Evaluation of fundamental surgical skills is invaluable to the training of medical students and junior residents. This study assessed the effectiveness of crowdsourcing nonmedical personnel to evaluate technical proficiency at simulated vessel ligation. STUDY DESIGN Fifteen videos were captured of participants performing vessel ligation using a low-fidelity model (5 attending surgeons and 5 medical students before and after training). These videos were evaluated by nonmedical personnel recruited through Amazon Mechanical Turk, as well as by 3 experienced surgical faculty. Evaluation criteria were based on Objective Structured Assessment of Technical Skills (scale: 5-25). Results were compared using Wilcoxon signed rank-sum and Cronbach's alpha (α). RESULTS Thirty-two crowd workers evaluated all 15 videos. Crowd workers scored attending surgeon videos significantly higher than pretraining medical student videos (20.5 vs 14.9, p < 0.001), demonstrating construct validity. Across all videos, crowd evaluations were more lenient than expert evaluations (19.1 vs 14.5, p < 0.001). However, average crowd evaluations correlated more strongly with average expert evaluations (α = 0.95) than the strength of correlation between any 2 individual expert evaluators (α = 0.72-0.88). Combined reimbursement for all workers was $80.00. CONCLUSION After adjustments for score inflation, crowdsourced personnel can evaluate surgical fundamentals with excellent validity. This resource is considerably less costly and potentially more reliable than individual expert evaluations.
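The agreement statistic used above, Cronbach's alpha, is straightforward to compute from a rater-by-video score matrix. A minimal sketch with made-up scores (not the study's data), treating raters as the "items" and videos as the cases:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for ratings[r][v] = score from rater r on video v.

    Uses population variances throughout for consistency.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(ratings)                                   # number of raters
    totals = [sum(col) for col in zip(*ratings)]       # per-video score sums
    item_var = sum(var(r) for r in ratings)            # sum of per-rater variances
    return k / (k - 1) * (1 - item_var / var(totals))


# Hypothetical scores from two raters over three videos.
alpha = cronbach_alpha([[1, 2, 3], [2, 4, 6]])  # 8/9 ≈ 0.889
```

Values near 1.0, as in the study's crowd-vs-expert comparison (α = 0.95), indicate that the raters rank the performances very consistently.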
Collapse
Affiliation(s)
- Yinin Hu
- Department of Surgery, University of Virginia School of Medicine, Charlottesville, Virginia
| | - Boxiang Jiang
- Department of Surgery, University of Virginia School of Medicine, Charlottesville, Virginia
| | - Helen Kim
- Department of Surgery, University of Virginia School of Medicine, Charlottesville, Virginia
| | - Anneke T Schroen
- Department of Surgery, University of Virginia School of Medicine, Charlottesville, Virginia
| | - Philip W Smith
- Department of Surgery, University of Virginia School of Medicine, Charlottesville, Virginia
| | - Sara K Rasmussen
- Department of Surgery, University of Virginia School of Medicine, Charlottesville, Virginia.
| |
Collapse
|
21
|
|
22
|
Comparison of the goals and MISTELS scores for the evaluation of surgeons on training benches. Int J Comput Assist Radiol Surg 2017; 13:95-103. [DOI: 10.1007/s11548-017-1645-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2017] [Accepted: 07/10/2017] [Indexed: 12/23/2022]
|
23
|
Nemani A, Ahn W, Cooper C, Schwaitzberg S, De S. Convergent validation and transfer of learning studies of a virtual reality-based pattern cutting simulator. Surg Endosc 2017; 32:1265-1272. [PMID: 28812196 DOI: 10.1007/s00464-017-5802-8] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Accepted: 07/28/2017] [Indexed: 12/20/2022]
Abstract
INTRODUCTION Research has clearly shown the benefits of surgical simulators to train laparoscopic motor skills required for positive patient outcomes. We have developed the Virtual Basic Laparoscopic Skill Trainer (VBLaST) that simulates tasks from the Fundamentals of Laparoscopic Surgery (FLS) curriculum. This study aims to show convergent validity of the VBLaST pattern cutting module via the CUSUM method to quantify learning curves along with motor skill transfer from simulation environments to ex vivo tissue samples. METHODS 18 medical students at the University at Buffalo, with no prior laparoscopic surgical skills, were placed into the control, FLS training, or VBLaST training groups. Each training group performed pattern cutting trials for 12 consecutive days on their respective simulation trainers. Following a 2-week break period, the trained students performed three pattern cutting trials on each simulation platform to measure skill retention. All subjects then performed one pattern cutting task on ex vivo cadaveric peritoneal tissue. FLS and VBLaST pattern cutting scores, CUSUM scores, and transfer task completion times were reported. RESULTS Results indicate that the FLS and VBLaST trained groups have significantly higher task performance scores than the control group in both the VBLaST and FLS environments (p < 0.05). Learning curve results indicate that three out of seven FLS training subjects and four out of six VBLaST training subjects achieved the "senior" performance level. Furthermore, both the FLS and VBLaST trained groups had significantly lower transfer task completion times on ex vivo peritoneal tissue models (p < 0.05). CONCLUSION We characterized task performance scores for trained VBLaST and FLS subjects via CUSUM analysis of the learning curves and showed evidence that both groups have significant improvements in surgical motor skill. 
Furthermore, we showed that learned surgical skills in the FLS and VBLaST environments transfer not only to the different simulation environments, but also to ex vivo tissue models.
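The CUSUM learning-curve analysis mentioned above can be sketched in miniature. This follows the standard binary CUSUM formulation for learning curves; the acceptable and unacceptable failure rates (p0, p1) and the pass/fail sequence below are hypothetical, not the study's parameters.

```python
import math


def cusum_scores(outcomes, p0=0.05, p1=0.20):
    """Binary CUSUM trajectory: each failure adds 1 - s, each success subtracts s.

    The increment s is derived from the acceptable (p0) and unacceptable (p1)
    failure rates; a downward-trending trajectory indicates learning.
    """
    q = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    s = math.log((1 - p0) / (1 - p1)) / q
    scores, total = [], 0.0
    for failed in outcomes:
        total += (1 - s) if failed else -s
        scores.append(total)
    return scores


# Hypothetical trial outcomes: two early failures, then consistent success.
traj = cusum_scores([True, True, False, False, False, False, False, False])
```

In practice the trajectory is compared against decision limits; crossing the lower limit is taken as evidence the trainee has reached the target performance level.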
Collapse
Affiliation(s)
- Arun Nemani
- Rensselaer Polytechnic Institute, 110, 8th Street, Troy, NY, 12180, USA
| | - Woojin Ahn
- Rensselaer Polytechnic Institute, 110, 8th Street, Troy, NY, 12180, USA
| | - Clairice Cooper
- University at Buffalo School of Medicine and Biomedical Sciences, Buffalo, NY, USA
| | - Steven Schwaitzberg
- University at Buffalo School of Medicine and Biomedical Sciences, Buffalo, NY, USA
| | - Suvranu De
- Rensselaer Polytechnic Institute, 110, 8th Street, Troy, NY, 12180, USA.
| |
Collapse
|
24
|
Oussi N, Loukas C, Kjellin A, Lahanas V, Georgiou K, Henningsohn L, Felländer-Tsai L, Georgiou E, Enochsson L. Video analysis in basic skills training: a way to expand the value and use of BlackBox training? Surg Endosc 2017; 32:87-95. [PMID: 28664435 PMCID: PMC5770508 DOI: 10.1007/s00464-017-5641-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2017] [Accepted: 06/06/2017] [Indexed: 01/22/2023]
Abstract
Background Basic skills training in laparoscopic high-fidelity simulators (LHFS) improves laparoscopic skills. However, since LHFS are expensive, their availability is limited. The aim of this study was to assess whether automated video analysis of low-cost BlackBox laparoscopic training could provide an alternative to LHFS in basic skills training. Methods Medical students volunteered to participate during their surgical semester at the Karolinska University Hospital. After written informed consent, they performed two laparoscopic tasks (PEG-transfer and precision-cutting) on a BlackBox trainer. All tasks were videotaped and sent to MPLSC for automated video analysis, generating two parameters (Pl and Prtcl_tot) that assess the total motion activity. The students then carried out final tests on the MIST-VR simulator. This study was a European collaboration between two simulation centers, located in Sweden and Greece, within the framework of ACS-AEI. Results 31 students (19 females and 12 males), mean age of 26.2 ± 0.8 years, participated in the study. However, since two of the students completed only one of the three MIST-VR tasks, they were excluded. The three MIST-VR scores showed significant positive correlations to both the Pl variable in the automated video analysis of the PEG-transfer (RSquare 0.48, P < 0.0001; 0.34, P = 0.0009; 0.45, P < 0.0001, respectively) as well as to the Prtcl_tot variable in that same exercise (RSquare 0.42, P = 0.0002; 0.29, P = 0.0024; 0.45, P < 0.0001). However, the correlations were exclusively shown in the group with less PC gaming experience as well as in the female group. Conclusions Automated video analysis provides accurate results in line with those of the validated MIST-VR. We believe that a more frequent use of automated video analysis could provide an extended value to cost-efficient laparoscopic BlackBox training. However, since there are gender-specific differences as well as differences in PC gaming experience, these should be taken into account when interpreting the value of automated video analysis.
Collapse
Affiliation(s)
- Ninos Oussi
- The Center for Advanced Medical Simulation and Training (CAMST), Karolinska University Hospital, Stockholm, Sweden; Division of Surgery, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden; Center for Clinical Research Sörmland, Uppsala University, Uppsala, Sweden
| | - Constantinos Loukas
- Medical Physics Lab-Simulation Center, Medical School, National and Kapodistrian University of Athens, Athens, Greece
| | - Ann Kjellin
- The Center for Advanced Medical Simulation and Training (CAMST), Karolinska University Hospital, Stockholm, Sweden; Division of Surgery, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
| | - Vasileios Lahanas
- Medical Physics Lab-Simulation Center, Medical School, National and Kapodistrian University of Athens, Athens, Greece
| | - Konstantinos Georgiou
- Medical Physics Lab-Simulation Center, Medical School, National and Kapodistrian University of Athens, Athens, Greece
| | - Lars Henningsohn
- The Center for Advanced Medical Simulation and Training (CAMST), Karolinska University Hospital, Stockholm, Sweden; Division of Urology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
| | - Li Felländer-Tsai
- The Center for Advanced Medical Simulation and Training (CAMST), Karolinska University Hospital, Stockholm, Sweden; Division of Orthopedics and Biotechnology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
| | - Evangelos Georgiou
- Medical Physics Lab-Simulation Center, Medical School, National and Kapodistrian University of Athens, Athens, Greece
| | - Lars Enochsson
- The Center for Advanced Medical Simulation and Training (CAMST), Karolinska University Hospital, Stockholm, Sweden; Division of Surgery, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden; Division of Surgery, Department of Surgical and Perioperative Sciences, Umeå University, Umeå, Sweden; Division of Surgery, Department of Surgical and Perioperative Sciences, Umeå University, 971 80, Luleå, Sweden.
| |
Collapse
|
25
|
Abstract
Due to the rapidly evolving medical, technological, and technical possibilities, surgical procedures are becoming more and more complex. On the one hand, this offers an increasing number of advantages for patients, such as enhanced patient safety, minimally invasive interventions, and fewer instances of medical malpractice. On the other hand, it also heightens pressure on surgeons and other clinical staff and has brought about a new policy in hospitals, which must rely on a great number of economic, social, psychological, qualitative, practical, and technological resources. As a result, medical disciplines, such as surgery, are slowly merging with technical disciplines. However, this synergy has not yet fully matured. The current information and communication technology in hospitals cannot manage the clinical and operational sequence adequately. The consequences are breaches in the surgical workflow, extensions in procedure times, and media disruptions. Furthermore, the data accrued in operating rooms (ORs) by surgeons and systems are not sufficiently exploited. A flood of information, “big data”, is available from information systems. This might be deployed in the context of Medicine 4.0 to facilitate the surgical treatment. However, it is unused due to infrastructure breaches or communication errors. Surgical process models (SPMs) alleviate these problems. They can be defined as simplified, formal, or semiformal representations of a network of surgery-related activities, reflecting a predefined subset of interest. They can employ different means of generation, languages, and data acquisition strategies. They can represent surgical interventions with high resolution, offering qualifiable and quantifiable information on the course of the intervention on the level of single, minute, surgical work-steps.
The basic idea is to gather information concerning the surgical intervention and its activities, such as performance time, surgical instrument used, trajectories, movements, or intervention phases. These data can be gathered by means of workflow recordings. These recordings are abstracted to represent an individual surgical process as a model and are an essential requirement to enable Medicine 4.0 in the OR. Further abstraction can be generated by merging individual process models to form generic SPMs to increase the validity for a larger number of patients. Furthermore, these models can be applied in a wide variety of use-cases. In this regard, the term “modeling” can be used to support either one or more of the following tasks: “to describe”, “to understand”, “to explain”, “to optimize”, “to learn”, “to teach”, or “to automate”. Possible use-cases are requirements analyses, evaluating surgical assist systems, generating surgeon-specific training recommendations, creating workflow management systems for ORs, and comparing different surgical strategies. The presented chapter will give an introduction to this challenging topic, presenting different methods to generate SPMs from the workflow in the OR, as well as various use-cases, and state-of-the-art research in this field. Although many examples in the article are given according to SPMs that were computed based on observations, the same approaches can be easily applied to SPMs that were measured automatically and mined from big data.
Collapse
Affiliation(s)
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Universität Leipzig, Leipzig, Germany
| |
Collapse
|
26
|
Lahanas V, Loukas C, Georgiou K, Lababidi H, Al-Jaroudi D. Virtual reality-based assessment of basic laparoscopic skills using the Leap Motion controller. Surg Endosc 2017; 31:5012-5023. [PMID: 28466361 DOI: 10.1007/s00464-017-5503-3] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2016] [Accepted: 03/08/2017] [Indexed: 11/24/2022]
Abstract
BACKGROUND The majority of the current surgical simulators employ specialized sensory equipment for instrument tracking. The Leap Motion controller is a new device able to track linear objects with sub-millimeter accuracy. The aim of this study was to investigate the potential of a virtual reality (VR) simulator for assessment of basic laparoscopic skills, based on the low-cost Leap Motion controller. METHODS A simple interface was constructed to simulate the insertion point of the instruments into the abdominal cavity. The controller provided information about the position and orientation of the instruments. Custom tools were constructed to simulate the laparoscopic setup. Three basic VR tasks were developed: camera navigation (CN), instrument navigation (IN), and bimanual operation (BO). The experiments were carried out in two simulation centers: MPLSC (Athens, Greece) and CRESENT (Riyadh, Kingdom of Saudi Arabia). Two groups of surgeons (28 experts and 21 novices) participated in the study by performing the VR tasks. Skills assessment metrics included time, pathlength, and two task-specific errors. The face validity of the training scenarios was also investigated via a questionnaire completed by the participants. RESULTS Expert surgeons significantly outperformed novices in all assessment metrics for IN and BO (p < 0.05). For CN, a significant difference was found in one error metric (p < 0.05). The greatest difference between the performances of the two groups occurred for BO. Qualitative analysis of the instrument trajectory revealed that experts performed more delicate movements compared to novices. Subjects' ratings on the feedback questionnaire highlighted the training value of the system. CONCLUSIONS This study provides evidence regarding the potential use of the Leap Motion controller for assessment of basic laparoscopic skills. The proposed system allowed the evaluation of dexterity of the hand movements. 
Future work will involve comparison studies with validated simulators and development of advanced training scenarios on the current Leap Motion controller.
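Of the assessment metrics above, path length is the simplest to reproduce: it is the summed Euclidean distance between consecutive instrument-tip samples, with experts typically producing shorter paths. A minimal sketch with a hypothetical 3-D trajectory:

```python
import math


def path_length(trajectory):
    """Total Euclidean distance travelled along a sequence of (x, y, z) samples."""
    return sum(
        math.dist(a, b)  # straight-line distance between consecutive samples
        for a, b in zip(trajectory, trajectory[1:])
    )


# Hypothetical instrument-tip samples (e.g., millimetres from the tracker).
tip = [(0, 0, 0), (3, 4, 0), (3, 4, 12)]
length = path_length(tip)  # 5 + 12 = 17
```

Time and task-specific error counts, the study's other metrics, are equally direct to extract once the tracker stream is time-stamped.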
Collapse
Affiliation(s)
- Vasileios Lahanas
- Medical Physics Lab-Simulation Center, School of Medicine, National and Kapodistrian University of Athens, Mikras Asias 75 Str., 11527, Athens, Greece
| | - Constantinos Loukas
- Medical Physics Lab-Simulation Center, School of Medicine, National and Kapodistrian University of Athens, Mikras Asias 75 Str., 11527, Athens, Greece.
| | - Konstantinos Georgiou
- Medical Physics Lab-Simulation Center, School of Medicine, National and Kapodistrian University of Athens, Mikras Asias 75 Str., 11527, Athens, Greece
| | - Hani Lababidi
- Center for Research, Education & Simulation Enhanced Training, King Fahad Medical City, Riyadh, Saudi Arabia
| | - Dania Al-Jaroudi
- Center for Research, Education & Simulation Enhanced Training, King Fahad Medical City, Riyadh, Saudi Arabia
| |
Collapse
|
27
|
Zahedi E, Dargahi J, Kia M, Zadeh M. Gesture-Based Adaptive Haptic Guidance: A Comparison of Discriminative and Generative Modeling Approaches. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2660071] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
28
|
Vedula SS, Ishii M, Hager GD. Objective Assessment of Surgical Technical Skill and Competency in the Operating Room. Annu Rev Biomed Eng 2017; 19:301-325. [PMID: 28375649 DOI: 10.1146/annurev-bioeng-071516-044435] [Citation(s) in RCA: 75] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Training skillful and competent surgeons is critical to ensure high quality of care and to minimize disparities in access to effective care. Traditional models to train surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. Simultaneously, technological developments are enabling capture and analysis of large amounts of complex surgical data. These developments are motivating a "surgical data science" approach to objective computer-aided technical skill evaluation (OCASE-T) for scalable, accurate assessment; individualized feedback; and automated coaching. We define the problem space for OCASE-T and summarize 45 publications representing recent research in this domain. We find that most studies on OCASE-T are simulation based; very few are in the operating room. The algorithms and validation methodologies used for OCASE-T are highly varied; there is no uniform consensus. Future research should emphasize competency assessment in the operating room, validation against patient outcomes, and effectiveness for surgical training.
Collapse
Affiliation(s)
- S Swaroop Vedula
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218;
| | - Masaru Ishii
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
| | - Gregory D Hager
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218;
| |
Collapse
|
29
|
Phase Segmentation Methods for an Automatic Surgical Workflow Analysis. Int J Biomed Imaging 2017; 2017:1985796. [PMID: 28408921 PMCID: PMC5376475 DOI: 10.1155/2017/1985796] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2016] [Accepted: 01/05/2017] [Indexed: 11/18/2022] Open
Abstract
In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We have an awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.
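The HMM stage of the pipeline above ultimately reduces to finding the most likely phase sequence given per-interval observations, which the Viterbi algorithm computes. A toy sketch with two hypothetical phases and invented probabilities; in the paper the observations are LDA topic outputs, whereas here they are plain symbols:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    # best[t][s]: (probability, previous state) of the best path ending in s at t.
    best = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (best[-1][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            row[s] = (prob, prev)
        best.append(row)
    # Backtrack from the most probable final state.
    state = max(states, key=lambda s: best[-1][s][0])
    path = [state]
    for row in reversed(best[1:]):
        state = row[state][1]
        path.append(state)
    return path[::-1]


phases = ["incision", "suturing"]
path = viterbi(
    ["cut", "stitch", "stitch"],
    phases,
    start_p={"incision": 0.6, "suturing": 0.4},
    trans_p={"incision": {"incision": 0.7, "suturing": 0.3},
             "suturing": {"incision": 0.4, "suturing": 0.6}},
    emit_p={"incision": {"cut": 0.9, "stitch": 0.1},
            "suturing": {"cut": 0.2, "stitch": 0.8}},
)  # → ["incision", "suturing", "suturing"]
```

The transition probabilities encode the expected phase ordering of the workflow, which is what lets the decoder smooth over noisy per-interval observations.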
Collapse
|
30
|
Sun X, Byrns S, Cheng I, Zheng B, Basu A. Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery. J Med Syst 2016; 41:24. [DOI: 10.1007/s10916-016-0665-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2015] [Accepted: 12/06/2016] [Indexed: 11/28/2022]
|
31
|
Brown JD, O'Brien CE, Leung SC, Dumon KR, Lee DI, Kuchenbecker KJ. Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer. IEEE Trans Biomed Eng 2016; 64:2263-2275. [PMID: 28113295 DOI: 10.1109/tbme.2016.2634861] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVE Most trainees begin learning robotic minimally invasive surgery by performing inanimate practice tasks with clinical robots such as the Intuitive Surgical da Vinci. Expert surgeons are commonly asked to evaluate these performances using standardized five-point rating scales, but doing such ratings is time consuming, tedious, and somewhat subjective. This paper presents an automatic skill evaluation system that analyzes only the contact force with the task materials, the broad-bandwidth accelerations of the robotic instruments and camera, and the task completion time. METHODS We recruited N = 38 participants of varying skill in robotic surgery to perform three trials of peg transfer with a da Vinci Standard robot instrumented with our Smart Task Board. After calibration, three individuals rated these trials on five domains of the Global Evaluative Assessment of Robotic Skill (GEARS) structured assessment tool, providing ground-truth labels for regression and classification machine learning algorithms that predict GEARS scores based on the recorded force, acceleration, and time signals. RESULTS Both machine learning approaches produced scores on the reserved testing sets that were in good to excellent agreement with the human raters, even when the force information was not considered. Furthermore, regression predicted GEARS scores more accurately and efficiently than classification. CONCLUSION A surgeon's skill at robotic peg transfer can be reliably rated via regression using features gathered from force, acceleration, and time sensors external to the robot. SIGNIFICANCE We expect improved trainee learning as a result of providing these automatic skill ratings during inanimate task practice on a surgical robot.
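The regression route above, predicting a GEARS score from sensor-derived features, can be sketched in miniature with ordinary least squares on a single hypothetical feature (completion time). The numbers are invented for illustration; the paper's models use richer force and acceleration features.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y ≈ a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx  # slope, intercept


# Hypothetical training data: longer completion times → lower GEARS scores.
times = [60, 90, 120, 150]   # seconds
scores = [24, 20, 16, 12]    # GEARS totals (scale roughly 5-25)
a, b = fit_line(times, scores)
predicted = a * 105 + b      # → 18.0 for an unseen 105-second trial
```

The paper's finding that regression outperforms classification makes sense in this framing: GEARS scores are ordinal, so predicting a continuous value and rounding preserves more information than forcing discrete class boundaries.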
Collapse
|
32
|
Jarc AM, Curet MJ. Viewpoint matters: objective performance metrics for surgeon endoscope control during robot-assisted surgery. Surg Endosc 2016; 31:1192-1202. [PMID: 27422247 PMCID: PMC5315708 DOI: 10.1007/s00464-016-5090-8] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2016] [Accepted: 07/05/2016] [Indexed: 12/16/2022]
Abstract
Background Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. Methods New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Results Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as effectively as efficiency metrics did. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. Conclusions We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
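The three camera metrics above are simple functions of the camera-movement event log. A sketch assuming a hypothetical list of (start, end) timestamps, in seconds, for each camera adjustment during one exercise:

```python
def camera_metrics(moves, task_duration):
    """Camera metrics from an ordered list of (start, end) adjustment times.

    Returns movements per minute, mean movement duration (s),
    and mean interval between consecutive movements (s).
    """
    frequency = len(moves) / (task_duration / 60.0)
    mean_duration = sum(end - start for start, end in moves) / len(moves)
    gaps = [b[0] - a[1] for a, b in zip(moves, moves[1:])]
    mean_interval = sum(gaps) / len(gaps)
    return frequency, mean_duration, mean_interval


moves = [(10.0, 11.0), (40.0, 42.0), (80.0, 81.0)]  # hypothetical event log
freq, dur, interval = camera_metrics(moves, task_duration=120.0)
# freq = 1.5 moves/min, dur = (1 + 2 + 1)/3 s, interval = (29 + 38)/2 = 33.5 s
```

Because the surgeon explicitly clutches to move the camera on a telesurgical system, these events are logged for free, which is what makes the metrics practical for curricula.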
Collapse
Affiliation(s)
- Anthony M Jarc
- Medical Research, Intuitive Surgical, Inc., 5655 Spalding Drive, Norcross, GA, 30092, USA.
| | - Myriam J Curet
- Medical Research, Intuitive Surgical, Inc., 5655 Spalding Drive, Norcross, GA, 30092, USA
- VA Palo Alto, Stanford, CA, USA
| |
Collapse
|
33
|
System events: readily accessible features for surgical phase detection. Int J Comput Assist Radiol Surg 2016; 11:1201-9. [PMID: 27177760 DOI: 10.1007/s11548-016-1409-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Accepted: 03/31/2016] [Indexed: 10/21/2022]
Abstract
PURPOSE Surgical phase recognition using sensor data is challenging due to high variation in patient anatomy and surgeon-specific operating styles. Segmenting surgical procedures into constituent phases is of significant utility for resident training, education, self-review, and context-aware operating room technologies. Phase annotation is a highly labor-intensive task and would benefit greatly from automated solutions. METHODS We propose a novel approach using system events-for example, activation of cautery tools-that are easily captured in most surgical procedures. Our method involves extracting event-based features over 90-s intervals and assigning a phase label to each interval. We explore three classification techniques: support vector machines, random forests, and temporal convolution neural networks. Each of these models independently predicts a label for each time interval. We also examine segmental inference using an approach based on the semi-Markov conditional random field, which jointly performs phase segmentation and classification. Our method is evaluated on a data set of 24 robot-assisted hysterectomy procedures. RESULTS Our framework is able to detect surgical phases with an accuracy of 74 % using event-based features over a set of five different phases-ligation, dissection, colpotomy, cuff closure, and background. Precision and recall values for the cuff closure (Precision: 83 %, Recall: 98 %) and dissection (Precision: 75 %, Recall: 88 %) classes were higher than other classes. The normalized Levenshtein distance between the predicted and ground-truth phase sequences was 25 %. CONCLUSIONS Our findings demonstrate that system-event features are useful for automatically detecting surgical phases. Events contain phase information that cannot be obtained from motion data and that would require advanced computer vision algorithms to extract from a video.
Many of these events are not specific to robotic surgery and can easily be recorded in non-robotic surgical modalities. In future work, we plan to combine information from system events, tool motion, and videos to automate phase detection in surgical procedures.
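The normalized Levenshtein distance reported above compares a predicted phase sequence against the ground truth. A minimal sketch of one common convention (edit distance divided by the longer sequence length; the phase labels below are illustrative, and this is not necessarily the authors' exact normalization):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def normalized_levenshtein(pred, truth):
    # 0.0 = identical sequences, 1.0 = no positional overlap at all.
    return levenshtein(pred, truth) / max(len(pred), len(truth), 1)

truth = ["ligation", "dissection", "colpotomy", "cuff_closure"]
pred  = ["ligation", "dissection", "background", "cuff_closure"]
print(normalized_levenshtein(pred, truth))  # 0.25
```

Because the distance operates on whole sequences, it also works unchanged on per-interval label strings.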
Collapse
|
34
|
Vedula SS, Malpani AO, Tao L, Chen G, Gao Y, Poddar P, Ahmidi N, Paxton C, Vidal R, Khudanpur S, Hager GD, Chen CCG. Analysis of the Structure of Surgical Activity for a Suturing and Knot-Tying Task. PLoS One 2016; 11:e0149174. [PMID: 26950551 PMCID: PMC4780814 DOI: 10.1371/journal.pone.0149174] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2015] [Accepted: 01/07/2016] [Indexed: 11/17/2022] Open
Abstract
Background Surgical tasks are performed in a sequence of steps, and technical skill evaluation includes assessing task flow efficiency. Our objective was to describe differences in task flow for expert and novice surgeons for a basic surgical task. Methods We used a hierarchical semantic vocabulary to decompose and annotate maneuvers and gestures for 135 instances of a surgeon’s knot performed by 18 surgeons. We compared counts of maneuvers and gestures, and analyzed task flow by skill level. Results Experts used fewer gestures to perform the task (26.29; 95% CI = 25.21 to 27.38 for experts vs. 31.30; 95% CI = 29.05 to 33.55 for novices) and made fewer errors in gestures than novices (1.00; 95% CI = 0.61 to 1.39 vs. 2.84; 95% CI = 2.3 to 3.37). Transitions among maneuvers, and among gestures within each maneuver for expert trials were more predictable than novice trials. Conclusions Activity segments and state flow transitions within a basic surgical task differ by surgical skill level, and can be used to provide targeted feedback to surgical trainees.
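One simple way to quantify the "predictability" of transitions described above is the mean entropy of the empirical next-gesture distribution: lower entropy means more deterministic transitions, as seen in the expert trials. This is an illustrative sketch with invented gesture strings, not the analysis used in the paper:

```python
import math
from collections import Counter, defaultdict

def transition_entropy(sequence):
    """Mean entropy (bits) of the empirical next-symbol distribution;
    lower values indicate more predictable transitions."""
    nxt = defaultdict(Counter)
    for cur, suc in zip(sequence, sequence[1:]):
        nxt[cur][suc] += 1
    entropies = []
    for counts in nxt.values():
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return sum(entropies) / len(entropies)

# Hypothetical gesture strings: the "expert" repeats a fixed pattern,
# the "novice" wanders between gestures.
expert = list("ABCABCABCABC")
novice = list("ABACBCABCBAC")
print(transition_entropy(expert) < transition_entropy(novice))  # True
```

A weighted variant (weighting each state's entropy by its visit frequency) is another reasonable design choice.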
Collapse
Affiliation(s)
- S. Swaroop Vedula
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
- * E-mail:
| | - Anand O. Malpani
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Lingling Tao
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - George Chen
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Yixin Gao
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Piyush Poddar
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Narges Ahmidi
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Christopher Paxton
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Rene Vidal
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Sanjeev Khudanpur
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Gregory D. Hager
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Chi Chiung Grace Chen
- Department of Gynecology and Obstetrics, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
| |
Collapse
|
35
|
Gulrez T, Tognetti A, Yoon WJ, Kavakli M, Cabibihan JJ. A Hands-Free Interface for Controlling Virtual Electric-Powered Wheelchairs. INT J ADV ROBOT SYST 2016. [DOI: 10.5772/62028] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
This paper focuses on how to provide mobility to people with motor impairments through the integration of robotics and wearable computing systems. The burden of learning to control powered mobility devices should not fall entirely on the people with disabilities. Instead, the system should be able to learn the user's movements. This requires learning the degrees of freedom of user movement and mapping these degrees of freedom onto electric-powered wheelchair (EPW) controls. Such mapping cannot be static because in some cases users will eventually improve with practice. Our goal in this paper is to present a hands-free interface (HFI) that can be customized to the varying needs of EPW users through appropriate mapping between the users' degrees of freedom and EPW controls. EPW users with different impairment types must learn how to operate a wheelchair with their residual body motions, and EPW interfaces are often customized to fit their needs. An HFI utilizes the signals generated by the user's voluntary shoulder and elbow movements and translates them into an EPW control scheme. We examine the correlation of kinematics that occur during moderately paced repetitive elbow and shoulder movements over a range of motion. The output of upper-limb movements (shoulder and elbow) was tested on six participants and compared with the output of a precision position tracking (PPT) optical system for validation. We found strong correlations between the HFI signal counts and the PPT optical system during different upper-limb movements (r = 0.86 to 0.94). We also tested the HFI's performance in driving the EPW in a virtual reality environment with a spinal-cord-injured (SCI) patient. The results showed that the HFI was able to adapt and translate the residual mobility of the SCI patient into efficient control commands within a week's training. The results are encouraging for the development of more efficient HFIs, especially for wheelchair users.
Collapse
Affiliation(s)
- Tauseef Gulrez
- School of Computing, Science and Engineering, University of Salford, Manchester, UK
| | | | - Woon Jong Yoon
- School of Science, Technology, Engineering, and Mathematics, University of Washington, Bothell, WA, USA
| | - Manolya Kavakli
- Department of Computing, Macquarie University, Sydney, Australia
| | - John-John Cabibihan
- Department of Mechanical and Industrial Engineering, College of Engineering, Qatar University, Doha, Qatar
| |
Collapse
|
36
|
Kassahun Y, Yu B, Tibebu AT, Stoyanov D, Giannarou S, Metzen JH, Vander Poorten E. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int J Comput Assist Radiol Surg 2015; 11:553-68. [PMID: 26450107 DOI: 10.1007/s11548-015-1305-z] [Citation(s) in RCA: 89] [Impact Index Per Article: 9.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2015] [Accepted: 09/21/2015] [Indexed: 02/06/2023]
Abstract
PURPOSE Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. METHODS The review is focused on ML techniques directly applied to surgery, surgical robotics, surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive. RESULTS Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is growing interest in using ML to develop tools that understand and model surgical skill and competence or extract surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. CONCLUSION ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is also expected to play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would also assist the surgical team on a cognitive level, for example by lowering the team's mental load. ML could, for instance, help extract surgical skill learned through demonstration by human experts and transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics, where current devices possess no intelligence and are merely advanced, expensive instruments.
Collapse
Affiliation(s)
- Yohannes Kassahun
- Robotics Innovation Center, German Research Center for Artificial Intelligence, Robert-Hooke-Str. 1, 28359, Bremen, Germany.
| | - Bingbin Yu
- Faculty 3 - Mathematics and Computer Science, University of Bremen, Robert-Hooke-Str. 1, 28359, Bremen, Germany
| | - Abraham Temesgen Tibebu
- Faculty 3 - Mathematics and Computer Science, University of Bremen, Robert-Hooke-Str. 1, 28359, Bremen, Germany
| | - Danail Stoyanov
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
| | | | - Jan Hendrik Metzen
- Faculty 3 - Mathematics and Computer Science, University of Bremen, Robert-Hooke-Str. 1, 28359, Bremen, Germany
| | - Emmanuel Vander Poorten
- Department of Mechanical Engineering, University of Leuven, Celestijnenlaan 300B, 3001, Heverlee, Belgium
| |
Collapse
|
37
|
Loukas C, Georgiou E. Performance comparison of various feature detector-descriptors and temporal models for video-based assessment of laparoscopic skills. Int J Med Robot 2015; 12:387-98. [PMID: 26415583 DOI: 10.1002/rcs.1702] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Revised: 07/17/2015] [Accepted: 08/21/2015] [Indexed: 11/07/2022]
Abstract
BACKGROUND Despite the significant progress in hand gesture analysis for surgical skills assessment, video-based analysis has not received much attention. In this study we investigate the application of various feature detector-descriptors and temporal modeling techniques for laparoscopic skills assessment. METHODS Two different setups were designed: static and dynamic video-histogram analysis. Four well-known feature detection-extraction methods were investigated: SIFT, SURF, STAR-BRIEF and STIP-HOG. For the dynamic setup two temporal models were employed (LDS and GMMAR model). Each method was evaluated for its ability to classify experts and novices on peg transfer and knot tying. RESULTS STIP-HOG yielded the best performance (static: 74-79%; dynamic: 80-89%). Temporal models had equivalent performance. Important differences were found between the two groups with respect to the underlying dynamics of the video-histogram sequences. CONCLUSIONS Temporal modeling of feature histograms extracted from laparoscopic training videos provides information about the skill level and motion pattern of the operator.
Collapse
Affiliation(s)
- Constantinos Loukas
- Medical Physics Lab-Simulation Center, School of Medicine, University of Athens, Greece
| | - Evangelos Georgiou
- Medical Physics Lab-Simulation Center, School of Medicine, University of Athens, Greece
| |
Collapse
|
38
|
Nisky I, Hsieh MH, Okamura AM. The effect of a robot-assisted surgical system on the kinematics of user movements. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2013:6257-60. [PMID: 24111170 DOI: 10.1109/embc.2013.6610983] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Teleoperated robot-assisted surgery (RAS) offers many advantages over traditional minimally invasive surgery. However, RAS has not yet realized its full potential, and it is not clear how to optimally train surgeons to use these systems. We hypothesize that the dynamics of the master manipulator impact the ability of users to make desired movements with the robot. We compared freehand and teleoperated movements of novices and experienced surgeons. To isolate the effects of dynamics from procedural knowledge, we chose simple movements rather than surgical tasks. We found statistically significant effects of teleoperation and user expertise in several aspects of motion, including target acquisition error, movement speed, and movement smoothness. Such quantitative assessment of human motor performance in RAS can impact the design of surgical robots, their control, and surgeon training methods, and eventually, improve patient outcomes.
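Movement smoothness, one of the motion aspects compared above, is often scored with a dimensionless-jerk measure. The finite-difference sketch below is a generic illustration for a 1-D trajectory, not the specific metric the authors used:

```python
def dimensionless_jerk(positions, dt):
    """Finite-difference dimensionless-jerk metric for a 1-D trajectory
    sampled at interval dt; a larger value means a less smooth movement."""
    d = positions
    for _ in range(3):               # third derivative via repeated differences
        d = [(b - a) / dt for a, b in zip(d, d[1:])]
    duration = dt * (len(positions) - 1)
    amplitude = max(positions) - min(positions)
    integral = sum(j * j for j in d) * dt
    return integral * duration ** 5 / amplitude ** 2

# Minimum-jerk reference movement vs. the same trace with alternating jitter.
n, T = 51, 1.0
dt = T / (n - 1)
smooth = [10 * (i * dt) ** 3 - 15 * (i * dt) ** 4 + 6 * (i * dt) ** 5
          for i in range(n)]
noisy = [x + 0.005 * (-1) ** i for i, x in enumerate(smooth)]
print(dimensionless_jerk(noisy, dt) > dimensionless_jerk(smooth, dt))  # True
```

The duration and amplitude factors make the score invariant to movement time and extent, so traces of different speeds can be compared.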
Collapse
|
39
|
Franke S, Neumuth T. Towards structuring contextual information for workflow-driven surgical assistance functionalities. CURRENT DIRECTIONS IN BIOMEDICAL ENGINEERING 2015. [DOI: 10.1515/cdbme-2015-0042] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
A workflow-driven cooperative working environment needs to be established in order to successfully unburden the surgeon and the OR staff from technical configuration and information-seeking tasks. An important prerequisite for autonomous situation-aware adaptation of medical devices is a comprehensive representation of the operating context regarding the surgical process and situation. We propose a hierarchical structuring of process-related and situation-related information entities and include assessment scores that intraoperative workflow information systems may provide via OR networks. The conducted experiments on the proposed assessment scores included sixty recorded brain tumour removal procedures and considered 344 distinguishable surgical situations. A comprehensive modelling of surgical situations and process context will be a significant prerequisite for reliable autonomous adaptation of medical devices and systems in digital operating rooms.
Collapse
Affiliation(s)
- Stefan Franke
- Innovation Center Computer Assisted Surgery, Universität Leipzig, Semmelweisstr. 14, 04103 Leipzig
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery, Universität Leipzig, Semmelweisstr. 14, 04103 Leipzig
| |
Collapse
|
40
|
Ozkaynak M, Dziadkowiec O, Mistry R, Callahan T, He Z, Deakyne S, Tham E. Characterizing workflow for pediatric asthma patients in emergency departments using electronic health records. J Biomed Inform 2015; 57:386-98. [PMID: 26327135 DOI: 10.1016/j.jbi.2015.08.018] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2015] [Revised: 07/08/2015] [Accepted: 08/17/2015] [Indexed: 10/23/2022]
Abstract
OBJECTIVE The purpose of this study was to describe a workflow analysis approach and apply it in emergency departments (EDs) using data extracted from the electronic health record (EHR) system. MATERIALS AND METHODS We used data obtained during 2013 from the ED of a children's hospital and its four satellite EDs. Workflow-related data were extracted for all patient visits with either a primary or secondary discharge diagnosis of asthma (ICD-9 code 493). For each patient visit, eight different a priori time-stamped events were identified. Data were also collected on mode of arrival, patient demographics, triage score (i.e., acuity level), and primary/secondary diagnosis. Comparison groups were defined by acuity level (levels 2 and 3, with 2 more acute than 3), arrival mode (ambulance versus walk-in), and site. Data were analyzed using a visualization method and Markov chains. RESULTS To demonstrate the viability and benefit of the approach, patient care workflows were visually and quantitatively compared. The analysis of the EHR data allowed for exploration of workflow patterns and variation across groups. Results suggest that workflow differed across arrival modes, settings and acuity levels. DISCUSSION EHRs can be used to explore workflow with statistical and visual analytics techniques novel to the health care setting. The results generated by the proposed approach could help institutions identify workflow issues, plan for varied workflows and ultimately improve efficiency in caring for diverse patient groups. CONCLUSION EHR data and novel analytic techniques in health care can expand our understanding of workflow in both large and small ED units.
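The Markov-chain analysis described above amounts to estimating transition probabilities between time-stamped workflow events. A minimal sketch with hypothetical ED event names (the actual study used eight a priori events extracted from the EHR):

```python
from collections import Counter, defaultdict

# Hypothetical time-stamped workflow events for two ED visits.
visits = [
    ["arrival", "triage", "room", "md_eval", "discharge"],
    ["arrival", "triage", "md_eval", "room", "md_eval", "discharge"],
]

def transition_matrix(sequences):
    """Maximum-likelihood Markov-chain transition probabilities
    estimated from observed event sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(row.values()) for nxt, c in row.items()}
            for cur, row in counts.items()}

probs = transition_matrix(visits)
print(probs["triage"])  # {'room': 0.5, 'md_eval': 0.5}
```

Comparing the estimated matrices across groups (acuity level, arrival mode, site) is then a direct way to surface workflow differences.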
Collapse
Affiliation(s)
- Mustafa Ozkaynak
- College of Nursing, University of Colorado-Denver
- Anschutz Medical Campus, Aurora, CO, USA.
| | - Oliwier Dziadkowiec
- College of Nursing, University of Colorado-Denver
- Anschutz Medical Campus, Aurora, CO, USA
| | - Rakesh Mistry
- Section of Emergency Medicine, Children's Hospital Colorado, Aurora, CO, USA
| | - Tiffany Callahan
- College of Nursing, University of Colorado-Denver
- Anschutz Medical Campus, Aurora, CO, USA
| | - Ze He
- College of Engineering, University of Massachusetts, Amherst, MA, USA
| | | | - Eric Tham
- Seattle Children's Research Institute, Seattle, WA, USA
| |
Collapse
|
41
|
White LW, Kowalewski TM, Dockter RL, Comstock B, Hannaford B, Lendvay TS. Crowd-Sourced Assessment of Technical Skill: A Valid Method for Discriminating Basic Robotic Surgery Skills. J Endourol 2015; 29:1295-301. [PMID: 26057232 DOI: 10.1089/end.2015.0191] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND A surgeon's skill in the operating room has been shown to correlate with patients' clinical outcomes. The prompt, accurate assessment of surgical skill remains a challenge, in part because expert faculty reviewers are often unavailable. By harnessing the power of large, readily available crowds through the Internet, rapid, accurate, and low-cost assessments may be achieved. We hypothesized that assessments provided by crowd workers highly correlate with expert surgeons' assessments. MATERIALS AND METHODS A group of 49 surgeons from two hospitals performed two dry-laboratory robotic surgical skill assessment tasks. The performance of these tasks was video recorded and posted online for evaluation using Amazon Mechanical Turk. The surgical tasks in each video were graded by crowd workers (n=30) and experts (n=3) using a modified Global Evaluative Assessment of Robotic Skills (GEARS) grading tool, and the mean scores were compared using the Cronbach's alpha statistic. RESULTS GEARS evaluations from the crowd were obtained for each video and task and compared with the GEARS ratings from the expert surgeons. The crowd-based performance scores agreed with the performance assessments by experts, with a Cronbach's alpha of 0.84 and 0.92 for the two tasks, respectively. CONCLUSION The assessment of surgical skill by crowd workers showed a high degree of agreement with the scores provided by expert surgeons in the evaluation of basic robotic surgical dry-laboratory tasks. Crowd responses cost less and were much faster to acquire. This study provides evidence that crowds may offer an adjunctive method for rapidly providing skills feedback to trainees and practicing surgeons.
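Cronbach's alpha, used above to quantify crowd-expert agreement, can be computed directly from a ratings matrix. The sketch below treats raters as the "items" (one common convention) and uses invented GEARS-style scores for illustration:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a ratings matrix: rows are rated videos,
    columns are raters (treated as 'items')."""
    k = len(ratings[0])                     # number of raters
    n = len(ratings)                        # number of videos

    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in ratings]) for j in range(k)]
    totals = [sum(row) for row in ratings]
    return k / (k - 1) * (1 - sum(item_vars) / var(totals))

# Hypothetical GEARS totals from three raters over four videos; the raters
# largely agree, so alpha should be close to 1.
scores = [[20, 21, 19],
          [14, 15, 14],
          [25, 24, 26],
          [10, 11, 10]]
print(round(cronbach_alpha(scores), 3))  # 0.994
```

Alpha near 1 indicates strong internal consistency among raters; the 0.84 and 0.92 reported above fall in the range usually read as good-to-excellent agreement.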
Collapse
Affiliation(s)
- Lee W White
- School of Medicine, Stanford University, Palo Alto, California. (At time of data collection and analysis: Department of Bioengineering, University of Washington, Seattle, Washington.)
| | - Timothy M Kowalewski
- Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota
| | - Rodney Lee Dockter
- Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota
| | - Bryan Comstock
- Department of Biostatistics, University of Washington, Seattle, Washington
| | - Blake Hannaford
- Department of Electrical Engineering, University of Washington, Seattle, Washington
| | - Thomas S Lendvay
- Department of Urology, Seattle Children's Hospital, University of Washington Medical Center, University of Washington, Seattle, Washington
| |
Collapse
|
42
|
Lahanas V, Loukas C, Georgiou E. A simple sensor calibration technique for estimating the 3D pose of endoscopic instruments. Surg Endosc 2015; 30:1198-204. [PMID: 26123335 DOI: 10.1007/s00464-015-4330-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2015] [Accepted: 06/09/2015] [Indexed: 11/29/2022]
Abstract
INTRODUCTION The aim of this study was to describe a simple and easy-to-use calibration method that estimates the pose (tip position and orientation) of a rigid endoscopic instrument with respect to an electromagnetic tracking device attached to the handle. METHODS A two-step calibration protocol was developed. First, the orientation of the instrument shaft is derived by performing a 360° rotation of the instrument around its shaft using a firmly positioned surgical trocar. Second, the 3D position of the instrument tip is obtained by allowing the tip to come into contact with a planar surface. RESULTS The results indicate submillimeter accuracy in the estimation of the tool-tip position and subdegree accuracy in the estimation of the shaft orientation, both with respect to a known reference frame. The assets of the proposed method are also highlighted by an indicative application in the field of augmented reality simulation. CONCLUSIONS The proposed method is simple, inexpensive, does not require special calibration frames, and has potential applications not only in training systems but also in the operating room.
Collapse
Affiliation(s)
- Vasileios Lahanas
- Medical Physics Laboratory Simulation Centre, School of Medicine, University of Athens, Mikras Asias St. 75, 11527, Athens, Greece.
| | - Constantinos Loukas
- Medical Physics Laboratory Simulation Centre, School of Medicine, University of Athens, Mikras Asias St. 75, 11527, Athens, Greece
| | - Evangelos Georgiou
- Medical Physics Laboratory Simulation Centre, School of Medicine, University of Athens, Mikras Asias St. 75, 11527, Athens, Greece
| |
Collapse
|
43
|
Jarc AM, Nisky I. Robot-assisted surgery: an emerging platform for human neuroscience research. Front Hum Neurosci 2015; 9:315. [PMID: 26089785 PMCID: PMC4455232 DOI: 10.3389/fnhum.2015.00315] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2015] [Accepted: 05/18/2015] [Indexed: 12/26/2022] Open
Abstract
Classic studies in human sensorimotor control use simplified tasks to uncover fundamental control strategies employed by the nervous system. Such simple tasks are critical for isolating specific features of motor, sensory, or cognitive processes, and for inferring causality between these features and observed behavioral changes. However, it remains unclear how these theories translate to complex sensorimotor tasks or to natural behaviors. Part of the difficulty in performing such experiments has been the lack of appropriate tools for measuring complex motor skills in real-world contexts. Robot-assisted surgery (RAS) provides an opportunity to overcome these challenges by enabling unobtrusive measurements of user behavior. In addition, a continuum of tasks of varying complexity, from simple tasks such as those in classic studies to highly complex ones such as a surgical procedure, can be studied using RAS platforms. Finally, RAS draws on a diverse participant population, ranging from inexperienced users to expert surgeons. In this perspective, we illustrate how the characteristics of RAS systems make them compelling platforms for extending many theories in human neuroscience, as well as for developing new theories altogether.
Collapse
Affiliation(s)
- Anthony M Jarc
- Medical Research, Intuitive Surgical, Inc. Sunnyvale, CA, USA
| | - Ilana Nisky
- Biomedical Engineering, Ben-Gurion University of the Negev Beer Sheva, Israel
| |
Collapse
|
44
|
Zhang Q, Li B. Relative Hidden Markov Models for Video-Based Evaluation of Motion Skills in Surgical Training. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2015; 37:1206-1218. [PMID: 26357343 DOI: 10.1109/tpami.2014.2361121] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
A proper temporal model is essential to analysis tasks involving sequential data. In computer-assisted surgical training, which is the focus of this study, obtaining accurate temporal models is a key step towards automated skill rating. Conventional learning approaches can have only limited success in this domain due to an insufficient amount of accurately labeled data. We propose a novel formulation termed the Relative Hidden Markov Model and develop algorithms for obtaining a solution under this formulation. The method requires only relative rankings between input pairs, which are readily available from training sessions in the target application, hence alleviating the requirement for data labeling. The proposed algorithm learns a model from the training data so that the attribute under consideration is linked to the likelihood of the input, hence supporting comparison of new sequences. For evaluation, synthetic data are first used to assess the performance of the approach, and then we experiment with real videos from a widely adopted surgical training platform. Experimental results suggest that the proposed approach provides a promising solution to video-based motion skill evaluation. To further illustrate the potential of generalizing the method to other applications of temporal analysis, we also report experiments on applying our model to speech-based emotion recognition.
Collapse
|
45
|
Multi-perspective workflow modeling for online surgical situation models. J Biomed Inform 2015; 54:158-66. [PMID: 25752728 DOI: 10.1016/j.jbi.2015.02.005] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2014] [Revised: 02/17/2015] [Accepted: 02/17/2015] [Indexed: 11/24/2022]
Abstract
INTRODUCTION Surgical workflow management is expected to enable situation-aware adaptation and intelligent systems behavior in an integrated operating room (OR). The overall aim is to unburden the surgeon and OR staff from both manual maintenance and information-seeking tasks. A major step toward intelligent systems behavior is a stable classification of the surgical situation from multiple perspectives based on performed low-level tasks. MATERIAL AND METHODS The present work proposes a method for the classification of surgical situations based on multi-perspective workflow modeling. A model network that interconnects different types of surgical process models is described. Various aspects of a surgical situation description were considered: low-level tasks, high-level tasks, patient status, and the use of medical devices. A study of sixty neurosurgical interventions was conducted to evaluate the performance of our approach and its robustness against incomplete workflow recognition input. RESULTS A correct classification rate of over 90% was measured for high-level tasks and patient status. The device usage models for navigation and neurophysiology classified over 95% of the situations correctly, whereas ultrasound usage was more difficult to predict. Overall, the classification rate decreased with an increasing level of input distortion. DISCUSSION Autonomous adaptation of medical devices and intelligent systems behavior cannot currently rely solely on low-level tasks. Instead, they require a more general understanding of the surgical condition. The integration of various surgical process models in a network provided a comprehensive representation of the interventions and allowed for the generation of extensive situation descriptions. CONCLUSION Multi-perspective surgical workflow modeling and online situation models will be a significant prerequisite for reliable and intelligent systems behavior, and hence will contribute to a cooperative OR environment.
Collapse
|
46
|
Morineau T, Riffaud L, Morandi X, Villain J, Jannin P. Work domain constraints for modelling surgical performance. Int J Comput Assist Radiol Surg 2015; 10:1589-97. [PMID: 25735734 DOI: 10.1007/s11548-015-1166-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2014] [Accepted: 02/16/2015] [Indexed: 11/24/2022]
Abstract
PURPOSE Three main approaches can be identified for modelling surgical performance: a competency-based approach and a task-based approach, both widely explored in the literature, and a less well-known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. The model was evaluated through an analysis of junior and senior surgeons' actions. METHOD Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. Against a work domain-based model describing an optimal progression through anatomical structures, the degree of fit of each surgical procedure to a statistical polynomial function was assessed. RESULTS Each surgical procedure fit the model significantly, with regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fit the model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. CONCLUSION The work domain-based modelling approach can provide an overall statistical indicator of surgical performance; in particular, it can highlight specific anatomical structures that surgeons dwell on according to their level of expertise.
Collapse
Affiliation(s)
- Thierry Morineau
- Centre de Recherches en Psychologie, Cognition et Communication (CRPCC), EA1285, Université de Bretagne-Sud, Centre Yves Coppens, 56000, Vannes, France.
| | - Laurent Riffaud
- Department of Neurosurgery, Pontchaillou University Hospital, 35033, Rennes Cedex 9, France; Laboratoire de Traitement du Signal et de l'Image (LTSI), Inserm, UMR 1099, MediCIS Team, Université de Rennes 1, 35000, Rennes, France
| | - Xavier Morandi
- Department of Neurosurgery, Pontchaillou University Hospital, 35033, Rennes Cedex 9, France; Laboratoire de Traitement du Signal et de l'Image (LTSI), Inserm, UMR 1099, MediCIS Team, Université de Rennes 1, 35000, Rennes, France
| | - Jonathan Villain
- Laboratoire de Mathématique de Bretagne Atlantique (LMBA), UMR 6205, Université de Bretagne-Sud, 56000, Vannes, France
| | - Pierre Jannin
- Laboratoire de Traitement du Signal et de l'Image (LTSI), Inserm, UMR 1099, MediCIS Team, Université de Rennes 1, 35000, Rennes, France
| |
Collapse
|
47
|
D'Angelo ALD, Rutherford DN, Ray RD, Laufer S, Kwan C, Cohen ER, Mason A, Pugh CM. Idle time: an underdeveloped performance metric for assessing surgical skill. Am J Surg 2015; 209:645-51. [PMID: 25725505 DOI: 10.1016/j.amjsurg.2014.12.013] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2014] [Revised: 12/06/2014] [Accepted: 12/17/2014] [Indexed: 10/24/2022]
Abstract
BACKGROUND The aim of this study was to evaluate validity evidence for idle time as a performance measure in open surgical skills assessment. METHODS This pilot study tested the psychomotor planning skills of surgical attendings (n = 6), residents (n = 4), and medical students (n = 5) during suturing tasks of varying difficulty. Performance data were collected with a motion tracking system. Participants' hand movements were analyzed for idle time, total operative time, and path length. We hypothesized that idle times would be shorter for more experienced individuals and on the easier tasks. RESULTS A total of 365 idle periods were identified across all participants. Attendings had fewer idle periods during 3 specific procedure steps (P < .001). All participants had longer idle times on friable tissue (P < .005). CONCLUSIONS In an experimental model, idle time correlated with experience and motor planning when operating on increasingly difficult tissue types. Further work exploring idle time as a valid psychomotor measure is warranted.
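A minimal sketch of how idle periods might be extracted from motion-tracking data: flag samples whose hand speed falls below a threshold and keep runs longer than a minimum duration. The function, its thresholds, and its units are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def idle_periods(positions, dt, speed_thresh=5.0, min_duration=0.5):
    """Detect idle periods in a hand-motion trace.

    positions    -- (N, 3) array of tracked hand positions (mm)
    dt           -- sampling interval (s)
    speed_thresh -- speed (mm/s) below which the hand counts as idle
    min_duration -- minimum idle length (s) to count as a period

    Returns a list of (start_index, end_index) pairs over the speed
    samples. Threshold values here are illustrative only.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    idle = speed < speed_thresh
    periods, start = [], None
    for i, flag in enumerate(idle):
        if flag and start is None:
            start = i                      # idle run begins
        elif not flag and start is not None:
            if (i - start) * dt >= min_duration:
                periods.append((start, i))  # idle run long enough to keep
            start = None
    if start is not None and (len(idle) - start) * dt >= min_duration:
        periods.append((start, len(idle)))  # trace ends while idle
    return periods
```

Counting these periods per procedure step would give the kind of per-step idle statistic the abstract compares across experience levels.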
Collapse
Affiliation(s)
- Anne-Lise D D'Angelo
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA.
| | - Drew N Rutherford
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA; Department of Kinesiology, School of Education, University of Wisconsin - Madison, 2000 Observatory Drive, Madison, WI 53706, USA
| | - Rebecca D Ray
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| | - Shlomi Laufer
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA; Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin - Madison, 1415 Engineering Drive, Madison, WI 53706, USA
| | - Calvin Kwan
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| | - Elaine R Cohen
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| | - Andrea Mason
- Department of Kinesiology, School of Education, University of Wisconsin - Madison, 2000 Observatory Drive, Madison, WI 53706, USA
| | - Carla M Pugh
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| |
Collapse
|
48
|
Nisky I, Hsieh MH, Okamura AM. Uncontrolled manifold analysis of arm joint angle variability during robotic teleoperation and freehand movement of surgeons and novices. IEEE Trans Biomed Eng 2014; 61:2869-81. [PMID: 24967980 PMCID: PMC8085739 DOI: 10.1109/tbme.2014.2332359] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Teleoperated robot-assisted surgery (RAS) is used to perform a wide variety of minimally invasive procedures. However, current understanding of the effect of robotic manipulation on the motor coordination of surgeons is limited. Recent studies in human motor control suggest that we optimize hand movement stability and task performance while minimizing control effort and improving robustness to unpredicted disturbances. To achieve this, the variability of joint angles and muscle activations is structured to reduce task-relevant variability and increase task-irrelevant variability. In this study, we determine whether teleoperation of a da Vinci Si Surgical System in a nonclinical task of simple planar movements changes this structure of variability in experienced surgeons and novices. To answer this question, we employ uncontrolled manifold (UCM) analysis, which partitions users' joint angle variability into task-irrelevant and task-relevant manifolds. We show that experienced surgeons coordinate their joint angles to stabilize hand movements more than novices, and that the effect of teleoperation depends on experience: experts increase teleoperated stabilization relative to freehand movement, whereas novices decrease it. We suggest that examining users' exploitation of the task-irrelevant manifold for stabilization of hand movements may be applied to: (1) evaluation and optimization of teleoperator design and control parameters, and (2) skill assessment and optimization of training in RAS.
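The UCM partition described above can be sketched as follows: the null space of the task Jacobian is the task-irrelevant (uncontrolled) manifold, joint-angle deviations are split into components within it and orthogonal to it, and each variance is normalized per degree of freedom. This is a schematic illustration under a linearized Jacobian, not the paper's analysis code.

```python
import numpy as np

def ucm_variances(joint_angles, jacobian):
    """Partition joint-angle variability per the uncontrolled manifold
    (UCM) method.

    joint_angles -- (N, d) joint configurations across repeated trials
    jacobian     -- (k, d) task Jacobian at the mean configuration,
                    mapping joint deviations to task-space deviations

    Returns (v_ucm, v_orth): variance per degree of freedom within the
    task-irrelevant (null-space) manifold and orthogonal (task-relevant)
    to it. A ratio v_ucm / v_orth > 1 indicates task-stabilizing
    coordination of the joints.
    """
    deviations = joint_angles - joint_angles.mean(axis=0)
    # Orthonormal basis of the Jacobian's null space (the UCM).
    _, s, vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-10))
    ucm_basis = vt[rank:].T                    # d x (d - rank)
    within = deviations @ ucm_basis            # null-space components
    v_ucm = np.sum(within ** 2) / (within.shape[1] * len(deviations))
    orth = deviations - within @ ucm_basis.T   # task-relevant components
    v_orth = np.sum(orth ** 2) / (rank * len(deviations))
    return v_ucm, v_orth
```

For example, with a hypothetical two-joint arm and task x = q1 + q2, variability spread along (1, -1) leaves the task value unchanged, so v_ucm dominates v_orth.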
Collapse
|
49
|
Kowalewski TM, White LW, Lendvay TS, Jiang IS, Sweet R, Wright A, Hannaford B, Sinanan MN. Beyond task time: automated measurement augments fundamentals of laparoscopic skills methodology. J Surg Res 2014; 192:329-38. [DOI: 10.1016/j.jss.2014.05.077] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2014] [Revised: 05/12/2014] [Accepted: 05/27/2014] [Indexed: 01/22/2023]
|
50
|
Effects of robotic manipulators on movements of novices and surgeons. Surg Endosc 2014; 28:2145-58. [PMID: 24519031 DOI: 10.1007/s00464-014-3446-5] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2013] [Accepted: 01/10/2014] [Indexed: 01/22/2023]
Abstract
BACKGROUND Robot-assisted surgery is widely adopted for many procedures but has not yet realized its full potential. Based on human motor control theories, the authors hypothesized that the dynamics of the master manipulators impose challenges on the user's motor system and may impair performance and slow learning. Although studies have shown that robotic outcomes correlate with the surgeon's case experience, the relative contribution of cognitive versus motor skill is unknown. This study quantified the effects of da Vinci Si master manipulator dynamics on the movements of novice users and experienced surgeons and suggests possible implications for training and robot design. METHODS Six experienced robotic surgeons and ten novice nonmedical users performed movements under two conditions: teleoperation of a da Vinci Si Surgical System and freehand. A linear mixed model was applied to nine kinematic metrics (including endpoint error, movement time, peak speed, initial jerk, and deviation from a straight line) to assess the effects of teleoperation and expertise. To assess learning effects, t tests between the first and last movements of each type were used. RESULTS All users moved more slowly during teleoperation than during freehand movements (F(1,9343) = 345; p < 0.001). The experienced surgeons had smaller errors than the novices (F(1,14) = 36.8; p < 0.001). The straightness of movements depended on their direction (F(7,9343) = 117; p < 0.001). Learning effects were observed in all conditions: novice users first learned the task and then the dynamics of the manipulator. CONCLUSIONS The findings showed differences between the novices and the experienced surgeons even for extremely simple point-to-point movements. The study demonstrated that manipulator dynamics affect user movements, suggesting that these dynamics could be improved in future robot designs. The authors showed partial adaptation of novice users to the dynamics. Future studies are needed to evaluate whether it would be beneficial to include early training sessions dedicated to learning the dynamics of the manipulator.
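A few of the kinematic metrics named in the abstract above (movement time, endpoint error, peak speed, deviation from a straight line) can be computed from a sampled trajectory roughly as follows. This is an illustrative sketch, not the study's implementation; the function signature, sampling interval, and target are hypothetical inputs.

```python
import numpy as np

def kinematic_metrics(positions, dt, target):
    """Compute simple point-to-point movement metrics from a trajectory.

    positions -- (N, 2) or (N, 3) sampled hand/tool positions
    dt        -- sampling interval (s)
    target    -- intended endpoint of the movement
    """
    vel = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    start, end = positions[0], positions[-1]
    # Deviation from straightness: maximum perpendicular distance of
    # any sample from the straight start-to-end chord.
    chord = end - start
    chord_dir = chord / np.linalg.norm(chord)
    rel = positions - start
    along = np.outer(rel @ chord_dir, chord_dir)
    deviation = np.max(np.linalg.norm(rel - along, axis=1))
    return {
        "movement_time": (len(positions) - 1) * dt,
        "endpoint_error": float(np.linalg.norm(end - np.asarray(target))),
        "peak_speed": float(speed.max()),
        "straight_line_deviation": float(deviation),
    }
```

Per-movement metrics like these are the kind of dependent variables a linear mixed model, as in the study, can test for effects of teleoperation and expertise.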
Collapse
|