1. Kulkarni CS, Deng S, Wang T, Hartman-Kenzler J, Barnes LE, Parker SH, Safford SD, Lau N. Scene-dependent, feedforward eye gaze metrics can differentiate technical skill levels of trainees in laparoscopic surgery. Surg Endosc 2023; 37:1569-1580. PMID: 36123548; PMCID: PMC11062149; DOI: 10.1007/s00464-022-09582-3.
Abstract
INTRODUCTION: In laparoscopic surgery, looking at target areas is an indicator of proficiency. However, gaze behaviors revealing feedforward control (i.e., looking ahead) and their importance have been under-investigated in surgery. This study aims to establish the sensitivity and relative importance of different scene-dependent gaze and motion metrics for estimating trainee proficiency levels in surgical skills.
METHODS: Medical students performed the Fundamentals of Laparoscopic Surgery peg transfer task while their gaze on the monitor and tool activities inside the trainer box were recorded. Using computer vision and fixation algorithms, five scene-dependent gaze metrics and one tool speed metric were computed for 499 practice trials. Cluster analysis on the six metrics grouped the trials into clusters/proficiency levels, and ANOVAs tested differences between proficiency levels. A Random Forest model was trained to study metric importance at predicting proficiency levels.
RESULTS: Three clusters were identified, corresponding to three proficiency levels. The correspondence between the clusters and proficiency levels was confirmed by differences in completion times (F(2,488) = 38.94, p < .001). Further, ANOVAs revealed significant differences between the three levels for all six metrics. The Random Forest model predicted proficiency level with 99% out-of-bag accuracy and revealed that scene-dependent gaze metrics reflecting feedforward behaviors were more important for prediction than those reflecting feedback behaviors.
CONCLUSION: Scene-dependent gaze metrics differentiated trainee skill levels more finely than the expert-novice contrast suggested in the literature. Further, feedforward gaze metrics appeared more important than feedback ones for predicting proficiency.
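The cluster-then-classify pipeline described in this abstract (unsupervised grouping of six per-trial metrics, then a Random Forest scored by out-of-bag accuracy) can be sketched as follows; the synthetic data and the scikit-learn usage are illustrative assumptions, not the authors' code:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for the six per-trial metrics (499 trials in the study):
# five scene-dependent gaze metrics plus one tool speed metric.
X = rng.normal(size=(499, 6))

# Step 1: cluster trials into three groups, read as proficiency levels.
levels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: Random Forest scored by out-of-bag accuracy; feature importances
# indicate which metrics matter most for predicting the level.
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, levels)
print(rf.oob_score_)            # out-of-bag accuracy (0..1)
print(rf.feature_importances_)  # one relative importance value per metric
```

On real data, comparing the importances of feedforward versus feedback gaze metrics is what supports the paper's main conclusion; on this random stand-in data the importances are of course meaningless.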
Affiliation(s)
- Chaitanya S Kulkarni
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Shiyu Deng
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Tianzi Wang
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Laura E Barnes
- Environmental and Systems Engineering, University of Virginia, Charlottesville, VA, USA
- Shawn D Safford
- Division of Pediatric General and Thoracic Surgery, UPMC Children's Hospital of Pittsburgh, Harrisburg, PA, USA
- Nathan Lau
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
2. Spatiotemporal Modeling of Grip Forces Captures Proficiency in Manual Robot Control. Bioengineering (Basel) 2023; 10:59. PMID: 36671631; PMCID: PMC9854605; DOI: 10.3390/bioengineering10010059.
Abstract
New technologies for monitoring grip forces during hand and finger movements in non-standard task contexts have provided unprecedented functional insights into somatosensory cognition. Somatosensory cognition is the basis of our ability to manipulate and transform objects of the physical world and to grasp them with the right amount of force. In previous work, the wireless tracking of grip-force signals recorded from biosensors in the palm of the human hand has permitted us to unravel some of the functional synergies that underlie perceptual and motor learning under conditions of non-standard and essentially unreliable sensory input. This paper builds on this previous work and discusses further, functionally motivated, analyses of individual grip-force data in manual robot control. Grip forces were recorded from various loci in the dominant and non-dominant hands of individuals with wearable wireless sensor technology. Statistical analyses bring to the fore skill-specific temporal variations in thousands of grip forces of a complete novice and a highly proficient expert in manual robot control. A brain-inspired neural network model that uses the output metric of a self-organizing map with unsupervised winner-take-all learning was run on the sensor output from both hands of each user. The neural network metric expresses the difference between an input representation and its model representation at any given moment in time, and reliably captures the differences between novice and expert performance in terms of grip-force variability. Functionally motivated spatiotemporal analysis of individual average grip forces, computed for time windows of constant size in the output of a restricted set of task-relevant sensors in the dominant (preferred) hand, reveals finger-specific synergies reflecting robotic task skill. The analyses lead the way towards grip-force monitoring in real time. This will permit tracking the evolution of task skill in trainees, or identifying individual proficiency levels in human-robot interaction, which poses unprecedented challenges for perceptual and motor adaptation in environmental contexts of high sensory uncertainty. Cross-disciplinary insights from systems neuroscience and cognitive behavioral science, and the predictive modeling of operator skills using parsimonious Artificial Intelligence (AI), will contribute towards improving the outcome of new types of surgery, in particular single-port approaches such as NOTES (Natural Orifice Transluminal Endoscopic Surgery) and SILS (Single-Incision Laparoscopic Surgery).
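The SOM output metric described in this abstract, the distance between an input and its model representation, can be sketched with a minimal winner-take-all map. The grip-force samples below are synthetic stand-ins, and the tiny network is only an illustration of the idea:

```python
import numpy as np

def train_som(data, n_units=16, epochs=20, lr=0.5, seed=0):
    """Minimal self-organizing map with winner-take-all (best unit only) updates."""
    rng = np.random.default_rng(seed)
    # Initialize unit weights from randomly chosen data points.
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            w[bmu] += lr * (x - w[bmu])                     # pull winner toward input
        lr *= 0.9                                           # decay the learning rate
    return w

def quantization_error(data, w):
    """Mean distance between each input and its best-matching unit: the SOM
    output metric that separates stable from variable grip-force patterns."""
    return float(np.mean([np.linalg.norm(w - x, axis=1).min() for x in data]))

rng = np.random.default_rng(1)
steady = rng.normal(scale=0.1, size=(200, 3))   # expert-like: low force variability
erratic = rng.normal(scale=1.0, size=(200, 3))  # novice-like: high force variability
qe_steady = quantization_error(steady, train_som(steady))
qe_erratic = quantization_error(erratic, train_som(erratic))
```

A map trained on low-variability signals settles close to its inputs and yields a small quantization error, while high-variability signals leave a larger residual, which is the novice/expert contrast the metric is claimed to capture.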
3. Batmaz AU, Stuerzlinger W. Effective Throughput Analysis of Different Task Execution Strategies for Mid-Air Fitts' Tasks in Virtual Reality. IEEE Trans Vis Comput Graph 2022; 28:3939-3947. PMID: 36044498; DOI: 10.1109/TVCG.2022.3203105.
Abstract
Fitts' law and throughput based on effective measures are two mathematical models frequently used to analyze human motor performance in a standardized pointing task, e.g., to compare the performance of input and output devices. Even though pointing has been deeply studied in 2D, it is not well understood how different task execution strategies affect throughput in pointing in 3D virtual environments. In this work, we examine the effective throughput measure, claimed to be invariant to task execution strategies, in Virtual Reality (VR) systems with three such strategies, "as fast, as precise, and as fast and as precise as possible" for ray casting and virtual hand interaction, by re-analyzing data from a 3D pointing ISO 9241-411 study. Results show that effective throughput is not invariant for different task execution strategies in VR, which also matches a more recent 2D result. Normalized speed vs. accuracy curves also did not fit the data. We thus suggest that practitioners, developers, and researchers who use MacKenzie's effective throughput formulation should consider our findings when analyzing 3D user pointing performance in VR systems.
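MacKenzie's effective throughput, the measure re-analyzed in this study, divides an effective index of difficulty by movement time, with the effective width derived from endpoint scatter (standard 4.133 multiplier). A minimal sketch with made-up trial values:

```python
import math
import statistics

def effective_throughput(amplitudes, movement_times, endpoint_errors):
    """MacKenzie-style effective throughput (bits/s) for one condition.

    amplitudes: per-trial movement distances (m); movement_times: seconds;
    endpoint_errors: per-trial endpoint deviations along the task axis (m).
    """
    w_e = 4.133 * statistics.stdev(endpoint_errors)          # effective width
    id_e = math.log2(statistics.mean(amplitudes) / w_e + 1)  # effective ID, bits
    return id_e / statistics.mean(movement_times)

# Illustrative trial values for two execution strategies.
# "Fast": shorter movement times but more endpoint scatter.
tp_fast = effective_throughput([0.4] * 6, [0.45] * 6,
                               [-0.02, 0.03, -0.03, 0.02, -0.025, 0.025])
# "Precise": longer movement times but tighter endpoints.
tp_precise = effective_throughput([0.4] * 6, [0.80] * 6,
                                  [-0.005, 0.006, -0.006, 0.005, -0.004, 0.004])
```

Even in this toy example the two strategies yield clearly different throughput values, the kind of non-invariance the paper reports; real analyses compute the effective width per participant and condition from endpoints projected onto the task axis.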
4. Dresp-Langley B. Grip force as a functional window to somatosensory cognition. Front Psychol 2022; 13:1026439. DOI: 10.3389/fpsyg.2022.1026439.
Abstract
Analysis of grip force signals tailored to hand and finger movement evolution and changes in grip force control during task execution provide unprecedented functional insight into somatosensory cognition. Somatosensory cognition is the basis of our ability to act upon and to transform the physical world around us, to recognize objects on the basis of touch alone, and to grasp them with the right amount of force for lifting and manipulating them. Recent technology has permitted the wireless monitoring of grip force signals recorded from biosensors in the palm of the human hand to track and trace human grip forces deployed in cognitive tasks executed under conditions of variable sensory (visual, auditory) input. Non-invasive multi-finger grip force sensor technology can be exploited to explore functional interactions between somatosensory brain mechanisms and motor control, in particular during learning a cognitive task where the planning and strategic execution of hand movements is essential. Sensorial and cognitive processes underlying manual skills and/or hand-specific (dominant versus non-dominant hand) behaviors can be studied in a variety of contexts by probing selected measurement loci in the fingers and palm of the human hand. Thousands of sensor data recorded from multiple spatial locations can be approached statistically to breathe functional sense into the forces measured under specific task constraints. Grip force patterns in individual performance profiling may reveal the evolution of grip force control as a direct result of cognitive changes during task learning. Grip forces can be functionally mapped to from-global-to-local coding principles in brain networks governing somatosensory processes for motor control in cognitive tasks leading to a specific task expertise or skill. 
In the light of a comprehensive overview of recent discoveries on the functional significance of human grip force variations, perspectives for future studies in cognition are pointed out, in particular on the cognitive control of strategic and task-relevant hand movements in complex real-world precision tasks.
5.
Abstract
This selective review explores biologically inspired learning as a model for intelligent robot control and sensing technology on the basis of specific examples. Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence, as explained on the basis of examples from the highly plastic biological neural networks of invertebrates and vertebrates. Its potential for adaptive learning and control without supervision, the generation of functional complexity, and control architectures based on self-organization is brought forward. Learning without prior knowledge based on excitatory and inhibitory neural mechanisms accounts for the process through which survival-relevant or task-relevant representations are either reinforced or suppressed. The basic mechanisms of unsupervised biological learning drive synaptic plasticity and adaptation for behavioral success in living brains with different levels of complexity. The insights collected here point toward the Hebbian model as a choice solution for “intelligent” robotics and sensor systems.
6. Fuxjager MJ, Fusani L, Schlinger BA. Physiological innovation and the evolutionary elaboration of courtship behaviour. Anim Behav 2022. DOI: 10.1016/j.anbehav.2021.03.017.
7. Penalver-Andres J, Buetler KA, Koenig T, Müri RM, Marchal-Crespo L. Providing Task Instructions During Motor Training Enhances Performance and Modulates Attentional Brain Networks. Front Neurosci 2021; 15:755721. PMID: 34955719; PMCID: PMC8695982; DOI: 10.3389/fnins.2021.755721.
Abstract
Learning a new motor task is a complex cognitive and motor process. Especially early during motor learning, cognitive functions, such as attentional engagement, are essential, e.g., to discover relevant visual stimuli. Drawing participants' attention towards task-relevant stimuli, e.g., with task instructions using visual cues or explicit written information, is a common practice to support cognitive engagement during training and, hence, accelerate motor learning. However, there is little scientific evidence about how visually cued or written task instructions affect attentional brain networks during motor learning. In this experiment, we trained 36 healthy participants in a virtual motor task: surfing waves by steering a boat with a joystick. We measured the participants' motor performance and observed attentional brain networks using alpha-band electroencephalographic (EEG) activity before and after training. Participants received one of the following task instructions during training: (1) no explicit task instructions, letting participants surf freely (implicit training; IMP); (2) task instructions provided through explicit visual cues (explicit-implicit training; E-IMP); or (3) task instructions provided through explicit written commands (explicit training; E). We found that providing task instructions during training (E and E-IMP) resulted in lower post-training motor variability (linked to enhanced performance) compared to training without instructions (IMP). After training, participants trained with visual cues (E-IMP) showed enhanced alpha-band strength over parieto-occipital and frontal brain areas at wave onset. In contrast, participants who trained with explicit commands (E) showed decreased fronto-temporal alpha activity. Thus, providing task instructions in written form (E) or through visual cues (E-IMP) leads to similar motor performance improvements by enhancing activation in different attentional networks. While training with visual cues (E-IMP) may be associated with visuo-attentional processes, verbal-analytical processes may be more prominent when written explicit commands are provided (E). Together, we suggest that training parameters, such as task instructions, modulate the attentional networks observed during motor practice and may support participants' cognitive engagement, compared to training without instructions.
Affiliation(s)
- Joaquin Penalver-Andres
- Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Psychosomatic Medicine, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Karin A. Buetler
- Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Thomas Koenig
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- René Martin Müri
- Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Perception and Eye Movement Laboratory, Department of Neurology and BioMedical Research, University of Bern, Bern, Switzerland
- Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Laura Marchal-Crespo
- Motor Learning and Neurorehabilitation Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Cognitive Robotics, Delft University of Technology, Delft, Netherlands
8.
Abstract
Piéron’s and Chocholle’s seminal psychophysical work predicts that human response time to information relative to visual contrast and/or sound frequency decreases when contrast intensity or sound frequency increases. The goal of this study is to bring to the forefront the ability of individuals to use visual contrast intensity and sound frequency in combination for faster perceptual decisions of relative depth (“nearer”) in planar (2D) object configurations based on physical variations in luminance contrast. Computer-controlled images with two abstract patterns of varying contrast intensity, one on the left and one on the right, preceded or not by a pure tone of varying frequency, were shown to healthy young humans in controlled experimental sequences. Their task (two-alternative, forced-choice) was to decide as quickly as possible which of the two patterns, the left or the right one, in a given image appeared to “stand out as if it were nearer” in terms of apparent (subjective) visual depth. The results showed that the combinations of varying relative visual contrast with sounds of varying frequency exploited here produced an additive facilitation effect on choice response times: a stronger visual contrast combined with a higher sound frequency produced shorter forced-choice response times. This new effect is predicted by audio-visual probability summation.
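Piéron's law and the additive audio-visual facilitation reported in this abstract can be sketched with a toy response-time model; all parameter values below are illustrative assumptions, not fitted data:

```python
def pieron_rt(intensity, r0=0.25, k=0.10, beta=0.5):
    """Pieron's law sketch: RT = R0 + k * I**(-beta), so reaction time
    shortens as stimulus intensity I grows. Parameters are illustrative."""
    return r0 + k * intensity ** (-beta)

def audiovisual_rt(contrast, sound_freq, r0=0.25, kv=0.06, ka=0.04, beta=0.5):
    """Additive combination: the contrast term and the sound-frequency term
    each contribute an independent Pieron-like reduction to choice RT."""
    return r0 + kv * contrast ** (-beta) + ka * sound_freq ** (-beta)
```

In an additive model of this kind, the speed-up from raising contrast and frequency together equals the sum of the two individual speed-ups, which is the signature effect the study reports.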
9. Towards Expert-Based Speed–Precision Control in Early Simulator Training for Novice Surgeons. Information 2018; 9:316. DOI: 10.3390/info9120316.
Abstract
Simulator training for image-guided surgical interventions would benefit from intelligent systems that detect the evolution of task performance, and take control of individual speed–precision strategies by providing effective automatic performance feedback. At the earliest training stages, novices frequently focus on getting faster at the task. This may, as shown here, compromise the evolution of their precision scores, sometimes irreparably, if it is not controlled for as early as possible. Artificial intelligence could help make sure that a trainee reaches her/his optimal individual speed–accuracy trade-off by monitoring individual performance criteria, detecting critical trends at any given moment in time, and alerting the trainee as early as necessary when to slow down and focus on precision, or when to focus on getting faster. It is suggested that, for effective benchmarking, individual training statistics of novices are compared with the statistics of an expert surgeon. The speed–accuracy functions of novices trained in a large number of experimental sessions reveal differences in individual speed–precision strategies, and clarify why such strategies should be automatically detected and controlled for before further training on specific surgical task models, or clinical models, may be envisaged. How expert benchmark statistics may be exploited for automatic performance control is explained.
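The kind of automatic performance monitoring proposed in this abstract, alerting a trainee who is getting faster while losing precision, can be sketched with a simple trend rule; the window size and the rule itself are hypothetical, not the authors' criteria:

```python
import numpy as np

def speed_precision_alert(times, errors, window=10):
    """Hypothetical early-training rule: alert when, over the last `window`
    trials, completion time trends down (getting faster) while error trends
    up (losing precision), the risky pattern described in the abstract."""
    x = np.arange(window)
    t_slope = np.polyfit(x, np.asarray(times[-window:], dtype=float), 1)[0]
    e_slope = np.polyfit(x, np.asarray(errors[-window:], dtype=float), 1)[0]
    return bool(t_slope < 0 and e_slope > 0)
```

A real system would benchmark the slopes against expert statistics rather than against zero, and would also raise the converse alert (slow but not gaining precision), but the rolling-trend idea is the same.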
10. Oswald MS, Hansmann ML. 3D approach visualizing cellular networks in human lymph nodes. Acta Histochem 2018; 120:720-727. PMID: 30104013; DOI: 10.1016/j.acthis.2018.08.001.
Abstract
Lymph node diagnostics are essentially based on cutting thin sections of formalin-fixed tissues. After hematoxylin and eosin, Giemsa, and immunohistochemical staining of these tissues, the lymph node diagnosis is made with a light microscope, looking at two-dimensional pictures. Three-dimensional visualizations of lymph node tissue have not yet been used in lymphoma diagnostics. This article describes the three-dimensional visualization of lymphoid tissue using thick paraffin sections immunostained with monoclonal antibodies, confocal laser scanning, data processing with appropriate software, and the 3D printing process itself. The advantages and disadvantages of different printing techniques are discussed, as well as the application of 3D models in the diagnostics, teaching, and research of lymph nodes.
Affiliation(s)
- Marvin Siegfried Oswald
- Universitätsklinikum Frankfurt/Main, Dr. Senckenberg Institut für Pathologie, Theodor-Stern-Kai 7, Frankfurt/Main, 60590, Hessen, Germany
- Martin-Leo Hansmann
- Universitätsklinikum Frankfurt/Main, Dr. Senckenberg Institut für Pathologie, Theodor-Stern-Kai 7, Frankfurt/Main, 60590, Hessen, Germany; Johann Wolfgang Goethe-Universität Frankfurt am Main, Frankfurt Institute for Advanced Studies (FIAS), Ruth-Moufang-Straße 1, Frankfurt/Main, 60438, Hessen, Germany
11. Dresp-Langley B, Reeves A. Colour for Behavioural Success. Iperception 2018; 9:2041669518767171. PMID: 29770183; PMCID: PMC5946649; DOI: 10.1177/2041669518767171.
Abstract
Colour information not only helps sustain the survival of animal species by guiding sexual selection and foraging behaviour but also is an important factor in the cultural and technological development of our own species. This is illustrated by examples from the visual arts and from state-of-the-art imaging technology, where the strategic use of colour has become a powerful tool for guiding the planning and execution of interventional procedures. The functional role of colour information in terms of its potential benefits to behavioural success across the species is addressed in the introduction here to clarify why colour perception may have evolved to generate behavioural success. It is argued that evolutionary and environmental pressures influence not only colour trait production in the different species but also their ability to process and exploit colour information for goal-specific purposes. We then leap straight to the human primate with insight from current research on the facilitating role of colour cues on performance training with precision technology for image-guided surgical planning and intervention. It is shown that local colour cues in two-dimensional images generated by a surgical fisheye camera help individuals become more precise rapidly across a limited number of trial sets in simulator training for specific manual gestures with a tool. This facilitating effect of a local colour cue on performance evolution in a video-controlled simulator (pick-and-place) task can be explained in terms of colour-based figure-ground segregation facilitating attention to local image parts when more than two layers of subjective surface depth are present, as in all natural and surgical images.
Affiliation(s)
- Birgitta Dresp-Langley
- ICube UMR 7357, Centre National de la Recherche Scientifique, University of Strasbourg, France
- Adam Reeves
- Department of Psychology, Northeastern University, Boston, MA, USA
12. Batmaz AU, de Mathelin M, Dresp-Langley B. Effects of 2D and 3D image views on hand movement trajectories in the surgeon's peri-personal space in a computer controlled simulator environment. Cogent Medicine 2018. DOI: 10.1080/2331205x.2018.1426232.
Affiliation(s)
- Anil Ufuk Batmaz
- ICube Lab, CNRS and University of Strasbourg, UMR 7357, Strasbourg, France
- Michel de Mathelin
- ICube Lab, CNRS and University of Strasbourg, UMR 7357, Strasbourg, France
13. Batmaz AU, de Mathelin M, Dresp-Langley B. Seeing virtual while acting real: Visual display and strategy effects on the time and precision of eye-hand coordination. PLoS One 2017; 12:e0183789. PMID: 28859092; PMCID: PMC5578485; DOI: 10.1371/journal.pone.0183789.
Abstract
Effects of different visual displays on the time and precision of bare-handed or tool-mediated eye-hand coordination were investigated in a pick-and-place task with complete novices. All of them scored well above average in spatial perspective-taking ability and performed the task with their dominant hand. Two groups of novices, four men and four women in each group, had to place a small object in a precise order on the centre of five targets on a Real-world Action Field (RAF), as swiftly and as precisely as possible, using a tool or not (control). Each individual session consisted of four visual display conditions. The order of conditions was counterbalanced between individuals and sessions. Subjects looked at what their hands were doing (1) directly in front of them (“natural” top-down view), (2) in a top-down 2D fisheye view, (3) in a top-down undistorted 2D view, or (4) in a 3D stereoscopic top-down view (head-mounted Oculus DK2). It was made sure that object movements in all image conditions matched the real-world movements in time and space. One group was looking at the 2D images with the monitor positioned sideways (sub-optimal); the other group was looking at the monitor placed straight ahead of them (near-optimal). All image viewing conditions had significantly detrimental effects on the time (seconds) and precision (pixels) of task execution when compared with “natural” direct viewing. More importantly, we find significant trade-offs between time and precision between and within groups, and significant interactions between viewing conditions and manipulation conditions. The results shed new light on controversial findings relative to visual display effects on eye-hand coordination, and lead to the conclusion that differences in camera systems and in the adaptive strategies of novices are likely to explain them.
Affiliation(s)
- Anil U. Batmaz
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Michel de Mathelin
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Birgitta Dresp-Langley
- ICube Lab Cognitive Science Department, Centre National de la Recherche Scientifique, 1 Place de l'Hôpital, Strasbourg, France