1
Graffeo CS, Bhandarkar AR, Carlstrom LP, Perry A, Nguyen B, Daniels DJ, Link MJ, Morris JM. That which is unseen: 3D printing for pediatric cerebrovascular education. Childs Nerv Syst 2023;39:2449-2457. PMID: 37272936. DOI: 10.1007/s00381-023-05987-0.
Abstract
INTRODUCTION: Pediatric cerebrovascular lesions are very rare and include aneurysms, arteriovenous malformations (AVM), and vein of Galen malformations (VOGM). OBJECTIVE: To describe and disseminate a validated, reproducible set of 3D models for optimization of neurosurgical training with respect to pediatric cerebrovascular diseases. METHODS: All pediatric cerebrovascular lesions treated at our institution with adequate imaging studies during the study period 2015-2020 were reviewed by the study team. Three major diagnostic groups were identified: aneurysm, AVM, and VOGM. For each group, a case deemed highly illustrative of the core diagnostic and therapeutic principles was selected for printing by the lead and senior investigators (CSG/JM). Files for model reproduction and free distribution were prepared for inclusion as Supplemental Materials. RESULTS: Representative cases included a 7-month-old female with a giant left MCA aneurysm; a 3-day-old male with a large, complex, high-flow, choroidal-type VOGM, supplied from bilateral thalamic, choroidal, and pericallosal perforators, with drainage into a large prosencephalic vein; and a 7-year-old male with a left frontal AVM with one feeding arterial vessel from the anterior cerebral artery and a single draining vein into the superior sagittal sinus. CONCLUSION: Pediatric cerebrovascular lesions are representative of rare but important neurosurgical diseases that require creative approaches to training optimization. As these lesions are quite rare, 3D-printed models and open-source educational materials may provide a meaningful avenue for impactful clinical teaching with respect to a wide swath of uncommon or unusual neurosurgical diseases.
Affiliation(s)
- Christopher S Graffeo
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, USA
- Department of Neurologic Surgery, OU Health University of Oklahoma Medical Center, Oklahoma City, OK, USA
- Avital Perry
- Department of Neurosurgery, Sheba Hospital, Tel Aviv, Israel
- Bachtri Nguyen
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- David J Daniels
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, USA
- Michael J Link
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, USA
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN, USA
- Jonathan M Morris
- Department of Radiology, Mayo Clinic, Rochester, MN, USA.
- Department of Neurosurgery, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA.
2
Piller S, Senna I, Ernst MO. Visual experience shapes the Bouba-Kiki effect and the size-weight illusion upon sight restoration from congenital blindness. Sci Rep 2023;13:11435. PMID: 37454205. PMCID: PMC10349879. DOI: 10.1038/s41598-023-38486-y.
Abstract
The Bouba-Kiki effect is the systematic mapping between round/spiky shapes and speech sounds ("Bouba"/"Kiki"). In the size-weight illusion, participants judge the smaller of two equally-weighted objects as being heavier. Here we investigated the contribution of visual experience to the development of these phenomena. We compared three groups: early blind individuals (no visual experience), individuals treated for congenital cataracts years after birth (late visual experience), and typically sighted controls (visual experience from birth). We found that, in cataract-treated participants (tested visually/visuo-haptically), both phenomena are absent shortly after sight onset, just like in blind individuals (tested haptically). However, they emerge within months following surgery, becoming statistically indistinguishable from the sighted controls. This suggests a pivotal role of visual experience and refutes the existence of an early sensitive period: a short period of experience, even when gained only years after birth, is sufficient for participants to visually pick up regularities in the environment, contributing to the development of these phenomena.
Affiliation(s)
- Sophia Piller
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany.
- Transfer Center for Neuroscience and Education (ZNL), Ulm University, Ulm, Germany.
- Irene Senna
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Department of Psychology, Liverpool Hope University, Liverpool, UK
- Marc O Ernst
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
3
Keenan ID, Green E, Haagensen E, Hancock R, Scotcher KS, Swainson H, Swamy M, Walker S, Woodhouse L. Pandemic-Era Digital Education: Insights from an Undergraduate Medical Programme. Adv Exp Med Biol 2023;1397:1-19. DOI: 10.1007/978-3-031-17135-2_1.
4
Sveistrup MA, Langlois J, Wilson TD. Do our hands see what our eyes see? Investigating spatial and haptic abilities. Anat Sci Educ 2022. PMID: 36565014. DOI: 10.1002/ase.2247.
Abstract
Spatial abilities (SAs) are cognitive resources used to mentally manipulate representations of objects to solve problems. Haptic abilities (HAs) represent tactile interactions with real-world objects, transforming somatic information into mental representations. Both are proposed to be factors in anatomy education, yet relationships between SAs and HAs remain unknown. The objective of the current study was to explore SA-HA interactions. A haptic ability test (HAT) was developed based on the mental rotations test (MRT) with three-dimensional (3D) objects. The HAT was undertaken in three sensory conditions: (1) sighted, (2) sighted with haptics, and (3) haptics. Participants (n = 22; 13 females, 9 males) completed the MRT and were categorized into high spatial abilities (HSAs) (n = 12, mean ± standard deviation: 13.7 ± 3.0) and low spatial abilities (LSAs) (n = 10, 5.6 ± 2.0) based on score distributions about the overall mean. Each SA group's HAT scores were compared across the three sensory conditions. Spearman's correlation coefficients between MRT and HAT scores indicated a statistically significant correlation in the sighted condition (r = 0.553, p = 0.015) but no significant correlation in the sighted with haptics (r = 0.078, p = 0.212) or haptics (r = 0.043, p = 0.279) conditions. These data suggest that HAs appear unrelated to SAs. Haptic exploration compensated for LSA participants' HAT scores; comparing HSA with LSA: sighted with haptics [median (lower and upper quartiles): 12 (12, 13) vs. 12 (11, 13), p = 0.254] and haptics [12 (11, 13) vs. 12 (10, 12), p = 0.381] conditions. Migrations to online anatomy teaching may unwittingly remove important sensory modalities from the learner. Understanding learner behaviors and performance when haptic inputs are removed from the learning environment represents valuable insight informing future anatomy curriculum and resource development.
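The MRT-HAT relationship above rests on Spearman's rank correlation. As an illustrative sketch with invented scores (not the study's data), the coefficient can be computed from ranks alone:

```python
from statistics import mean

def rank(xs):
    # Average ranks: tied values share the mean of the positions they occupy
    s = sorted(xs)
    return [s.index(x) + (s.count(x) + 1) / 2 for x in xs]

def spearman(x, y):
    # Pearson correlation applied to the rank-transformed scores
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

mrt = [13, 15, 9, 5, 11, 17]    # hypothetical mental-rotation scores
hat = [12, 12, 11, 10, 12, 13]  # hypothetical haptic-ability-test scores
print(f"Spearman r = {spearman(mrt, hat):.3f}")
```

In practice a library routine such as scipy.stats.spearmanr would also return the p-values quoted in the abstract.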
Affiliation(s)
- Michelle A Sveistrup
- The Corps for Research of Instructional and Perceptual Technologies (CRIPT) Laboratory, Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Jean Langlois
- Department of Emergency Medicine, CIUSSS de l'Estrie-Centre hospitalier universitaire de Sherbrooke, Sherbrooke, Quebec, Canada
- Timothy D Wilson
- The Corps for Research of Instructional and Perceptual Technologies (CRIPT) Laboratory, Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
5
Hashemi Y, Taghizadeh G, Azad A, Behzadipour S. The effects of supervised and non-supervised upper limb virtual reality exercises on upper limb sensory-motor functions in patients with idiopathic Parkinson's disease. Hum Mov Sci 2022;85:102977. PMID: 35932518. DOI: 10.1016/j.humov.2022.102977.
Abstract
BACKGROUND: Impairments of upper limb (UL) sensory-motor functions are common in Parkinson's disease (PD). Virtual reality exercises may improve sensory-motor functions in a safe environment and can be used in tele-rehabilitation. This study aimed to investigate the effects of supervised and non-supervised UL virtual reality exercises (ULVRE) on UL sensory-motor functions in patients with idiopathic PD. METHODS: In this clinical trial, 45 patients with idiopathic PD (29 male) with a mean ± SD age of 58.64 ± 8.69 years were randomly allocated to either the control group (conventional rehabilitation exercises), supervised ULVRE, or non-supervised ULVRE. Interventions comprised 24 sessions, 3 sessions/week. All assessments were performed before and after the interventions and at follow-up. The Hand Active Sensation Test (HAST) and Wrist Position Sense Test were used to assess UL sensory function. Gross and fine manual dexterity were assessed by the Box and Block Test and Nine-Hole Peg Test, respectively. Grip and pinch strength were evaluated by a dynamometer and pinch gauge, respectively. RESULTS: The results showed significant improvement in discriminative sensory function (HAST-weight and HAST-total), wrist proprioception, gross manual dexterity, and grip strength of both the less and more affected hands, as well as fine manual dexterity of the more affected hand, in all three groups (P < 0.05). CONCLUSION: The results of this study indicated that both supervised and non-supervised ULVRE using the Kinect device might potentially improve some aspects of UL sensory-motor functions in patients with PD. Therefore, ULVRE using the Kinect device can be used in tele-rehabilitation, especially given the current limitations induced by the COVID-19 pandemic, for improving UL functions in patients with PD.
Affiliation(s)
- Yazdan Hashemi
- Rehabilitation Research Center, Department of Occupational Therapy, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran
- Ghorban Taghizadeh
- Rehabilitation Research Center, Department of Occupational Therapy, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran.
- Akram Azad
- Rehabilitation Research Center, Department of Occupational Therapy, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran.
- Saeed Behzadipour
- Mechanical Engineering Department, Sharif University of Technology, Tehran, Iran; Djavad Mowafaghian Research Center for Intelligent Neuro-rehabilitation Technologies, Tehran, Iran.
6
Gardner EP, Putrino DF, Chen Van Daele J. Neural representation in M1 and S1 cortex of bilateral hand actions during prehension. J Neurophysiol 2022;127:1007-1025. PMID: 35294304. PMCID: PMC8993539. DOI: 10.1152/jn.00374.2021.
Abstract
Bimanual movements that require coordinated actions of the two hands may be coordinated by synchronous bilateral activation of somatosensory and motor cortical areas in both hemispheres, by enhanced activation of individual neurons specialized for bimanual actions, or by both mechanisms. To investigate cortical neural mechanisms that mediate unimanual and bimanual prehension, we compared actions of the left and right hands in a reach to grasp-and-pull instructed-delay task. Spike trains were recorded with multiple electrode arrays placed in the hand area of primary motor (M1) and somatosensory (S1) cortex of the right hemisphere in macaques, allowing us to measure and compare the relative timing, amplitude, and synchronization of cortical activity in these areas as animals grasped and manipulated objects that differed in shape and location. We report that neurons in the right hemisphere show common task-related firing patterns for the two hands but actions of the ipsilateral hand elicited weaker and shorter-duration responses than those of the contralateral hand. We report significant bimanual activation of neurons in M1 but not in S1 cortex when animals have free choice of hand use in prehension tasks. Population ensemble responses in M1 thereby provide an accurate depiction of hand actions during skilled manual tasks. These studies also demonstrate that somatosensory cortical areas serve important cognitive and motor functions in skilled hand actions. Bilateral representation of hand actions may serve an important role in "motor equivalence" when the same movements are performed by either hand and in transfer of skill learning between the hands.
NEW & NOTEWORTHY: Humans can manipulate small objects with the right or left hand but typically select the dominant hand to handle them. We trained monkeys to grasp and manipulate objects with either hand, while recording neural activity in primary motor (M1) and somatosensory (S1) cortex. Actions of both hands activate M1 neurons, but S1 neurons respond only to the contralateral hand. Bilateral sensitivity in M1 may aid skill transfer between hands after stroke or head injury.
Affiliation(s)
- Esther P Gardner
- Department of Neuroscience and Physiology and NYU Neuroscience Institute, New York University Grossman School of Medicine Public Health Research Institute, New York, New York
- David F Putrino
- Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, New York
7
Thompson B, Green E, Scotcher K, Keenan ID. A Novel Cadaveric Embalming Technique for Enhancing Visualisation of Human Anatomy. Adv Exp Med Biol 2022;1356:299-317. PMID: 35146627. DOI: 10.1007/978-3-030-87779-8_13.
Abstract
Within the discipline of anatomical education, the use of donated human cadavers in laboratory-based learning activities is often described as the 'gold standard' resource for supporting student understanding of anatomy. Due to both historical and educational factors, cadaveric dissection has traditionally been the approach against which other anatomy learning modalities and resources have been judged. To prepare human donors for teaching purposes, bodies must be embalmed with fixative agents to preserve the tissues. Embalmed cadavers can then be dissected by students or can be prosected or plastinated to produce teaching resources. Here, we describe the history of cadaveric preservation in anatomy education and review the practical strengths and limitations of current approaches for the embalming of human bodies. Furthermore, we investigate the pedagogic benefits of a range of established modern embalming techniques. We describe relevant cadaveric attributes and their impacts on learning, including the importance of colour, texture, smell, and joint mobility. We also explore the emotional and humanistic elements of the use of human donors in anatomy education, and the relative impact of these factors when alternative types of embalming process are performed. Based on these underpinnings, we provide a technical description of our modern Newcastle-WhitWell embalming process. In doing so, we aim to inform anatomy educators and technical staff seeking to embalm human donors rapidly and safely and at reduced costs, while enhancing visual and haptic tissue characteristics. We propose that our technique has logistical and pedagogic implications, both for the development of embalming techniques and for student visualisation and learning.
8
Visual and Tactile Sensory Systems Share Common Features in Object Recognition. eNeuro 2021;8:ENEURO.0101-21.2021. PMID: 34544756. PMCID: PMC8493885. DOI: 10.1523/eneuro.0101-21.2021.
Abstract
Although we use our visual and tactile sensory systems interchangeably for object recognition on a daily basis, little is known about the mechanism underlying this ability. This study examined how 3D shape features of objects form two congruent and interchangeable visual and tactile perceptual spaces in healthy male and female participants. Since active exploration plays an important role in shape processing, a virtual reality environment was used to visually explore 3D objects called digital embryos without using the tactile sense. In addition, during the tactile procedure, blindfolded participants actively palpated a 3D-printed version of the same objects with both hands. We first demonstrated that the visual and tactile perceptual spaces were highly similar. We then extracted a series of 3D shape features to investigate how visual and tactile exploration can lead to the correct identification of the relationships between objects. The results indicate that both modalities share the same shape features to form highly similar veridical spaces. This finding suggests that visual and tactile systems might apply similar cognitive processes to sensory inputs that enable humans to rely merely on one modality in the absence of another to recognize surrounding objects.
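A common way to quantify how "highly similar" two perceptual spaces are is to correlate their pairwise dissimilarity structures. The numbers below are invented for illustration and do not reproduce the study's analysis pipeline:

```python
# Upper-triangle pairwise dissimilarities among four hypothetical objects,
# one set derived from visual judgments and one from haptic judgments.
visual  = [0.20, 0.70, 0.90, 0.60, 0.80, 0.30]
tactile = [0.25, 0.65, 0.85, 0.55, 0.90, 0.35]

def pearson(x, y):
    # Plain Pearson correlation between two equal-length vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A correlation near 1 indicates that the two modalities organise the
# objects into nearly the same perceptual space.
print(f"visual-tactile agreement: {pearson(visual, tactile):.3f}")
```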
9
Rodgers CC, Nogueira R, Pil BC, Greeman EA, Park JM, Hong YK, Fusi S, Bruno RM. Sensorimotor strategies and neuronal representations for shape discrimination. Neuron 2021;109:2308-2325.e10. PMID: 34133944. PMCID: PMC8298290. DOI: 10.1016/j.neuron.2021.05.019.
Abstract
Humans and other animals can identify objects by active touch, requiring the coordination of exploratory motion and tactile sensation. Both the motor strategies and neural representations employed could depend on the subject's goals. We developed a shape discrimination task that challenged head-fixed mice to discriminate concave from convex shapes. Behavioral decoding revealed that mice did this by comparing contacts across whiskers. In contrast, a separate group of mice performing a shape detection task simply summed up contacts over whiskers. We recorded populations of neurons in the barrel cortex, which processes whisker input, and found that individual neurons across the cortical layers encoded touch, whisker motion, and task-related signals. Sensory representations were task-specific: during shape discrimination, but not detection, neurons responded most to behaviorally relevant whiskers, overriding somatotopy. Thus, sensory cortex employs task-specific representations compatible with behaviorally relevant computations.
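The two readout strategies described above (summing contacts for detection versus comparing contacts across whiskers for discrimination) can be caricatured as simple decision rules. This is a toy sketch with invented contact counts, not the paper's decoding model:

```python
# Per-trial contact counts for three whiskers (C1, C2, C3); invented data.

def detect(contacts, threshold=2):
    # Detection-style readout: only the summed contact count matters
    return "stimulus" if sum(contacts) >= threshold else "no stimulus"

def discriminate(contacts):
    # Discrimination-style readout: compare contacts across whiskers.
    # Here a concave shape is assumed to funnel touches onto the central
    # whisker relative to its neighbours.
    c1, c2, c3 = contacts
    return "concave" if c2 > (c1 + c3) / 2 else "convex"

print(detect([1, 2, 1]), discriminate([1, 4, 0]))
print(detect([0, 0, 0]), discriminate([3, 1, 3]))
```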
Affiliation(s)
- Chris C Rodgers
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
- Ramon Nogueira
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
- B Christina Pil
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
- Esther A Greeman
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
- Jung M Park
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
- Y Kate Hong
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
- Stefano Fusi
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
- Randy M Bruno
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
10
Wilson TD. Visualisation technologies-I can see clearly now but the feel is gone: Commentary on: Stereoscopic three-dimensional visualisation technology in anatomy learning: A meta-analysis, Bogomolova et al. Med Educ 2021;55:285-288. PMID: 33386616. DOI: 10.1111/medu.14448.
Affiliation(s)
- Timothy D Wilson
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
11
Abstract
SUMMARY: Interaction between a robot and its environment requires perception of the environment, which helps the robot make a clear decision about the object type and its location, after which the end effector is brought to the object's location for grasping. Many research studies address the reaching and grasping of objects using different techniques and mechanisms for increasing accuracy and robustness during grasping and reaching tasks. This paper therefore presents an extensive review of research directions and topics of different approaches, such as sensing, learning, and gripping, that have been implemented within the past five years.
12
Different activation signatures in the primary sensorimotor and higher-level regions for haptic three-dimensional curved surface exploration. Neuroimage 2021;231:117754. PMID: 33454415. DOI: 10.1016/j.neuroimage.2021.117754.
Abstract
Haptic object perception begins with continuous exploratory contact, and the human brain needs to accumulate sensory information continuously over time. However, it is still unclear how the primary sensorimotor cortex (PSC) interacts with higher-level regions during haptic exploration over time. This functional magnetic resonance imaging (fMRI) study investigates time-dependent haptic object processing by examining brain activity during haptic 3D curve and roughness estimations. For this experiment, we designed sixteen haptic stimuli (4 kinds of curves × 4 varieties of roughness) for the haptic curve and roughness estimation tasks. Twenty participants were asked to move their right index and middle fingers along the surface twice and to estimate one of the two features (roughness or curvature), depending on the task instruction. We found that brain activity in several higher-level regions (e.g., the bilateral posterior parietal cortex) increased linearly as the number of curves increased during the haptic exploration phase. Surprisingly, we found that the contralateral PSC was parametrically modulated by the number of curves only during the late exploration phase, not during the early exploration phase. In contrast, we found no similar parametric modulation activity patterns during the haptic roughness estimation task, in either the contralateral PSC or higher-level regions. Thus, our findings suggest that haptic 3D object perception is processed across the cortical hierarchy, whereas the contralateral PSC interacts with other higher-level regions across time in a manner that is dependent upon the features of the object.
13
Branson TM, Shapiro L, Venter RG. Observation of Patients' 3D Printed Anatomical Features and 3D Visualisation Technologies Improve Spatial Awareness for Surgical Planning and in-Theatre Performance. Adv Exp Med Biol 2021;1334:23-37. PMID: 34476743. DOI: 10.1007/978-3-030-76951-2_2.
Abstract
Improved spatial awareness is vital in anatomy education as well as in many areas of medical practice. Many healthcare professionals struggle with the extrapolation of 2D data to its locus within the 3D volume of the anatomy. In this chapter, we outline the use of touch as an important sensory modality in the observation of 3D forms, including anatomical parts, with the specific neuroscientific underpinnings in this regard being described. We explore how improved spatial awareness is directly linked to improved spatial skill. The reader is offered two practical exercises that lead to improved spatial awareness for application in exploring external 3D anatomy volume as well as internal 3D anatomy volume. These exercises are derived from the Haptico-visual observation and drawing (HVOD) method. The resulting cognitive improvement in spatial awareness that these exercises engender can be of benefit to students in their study of anatomy and for application by healthcare professionals in many aspects of their medical practice. The use of autostereoscopic visualisation technology (AS3D) to view the anatomy from DICOM data, in combination with the haptic exploration of a 3D print (3Dp) of the same stereoscopic on-screen image, is recommended as a practice for improved understanding of any anatomical part or feature. We describe a surgical innovation that relies on the haptic perception of patients' 3D printed (3Dp) anatomical features from patient DICOM data, for improved surgical planning and in-theatre surgical performance. Throughout the chapter, underlying neuroscientific correlates to haptic and visual observation, memory, working memory, and cognitive load are provided.
Affiliation(s)
- Toby M Branson
- Department of Health and Medical Sciences, Adelaide Medical School, The University of Adelaide, Adelaide, SA, Australia
- Leonard Shapiro
- Division of Clinical Anatomy and Biological Anthropology, Department of Human Biology, University of Cape Town, Cape Town, South Africa.
- Rudolph G Venter
- Faculty of Medicine and Health Science, Division of Orthopaedic Surgery, Stellenbosch University, Stellenbosch, South Africa
14
Herman AM, Palmer C, Azevedo RT, Tsakiris M. Neural divergence and convergence for attention to and detection of interoceptive and somatosensory stimuli. Cortex 2020;135:186-206. PMID: 33385747. DOI: 10.1016/j.cortex.2020.11.019.
Abstract
Body awareness is constructed by signals originating from within and outside the body. How do these apparently divergent signals converge? We developed a signal detection task to study the neural convergence and divergence of interoceptive and somatosensory signals. Participants focused on either cardiac or tactile events and reported their presence or absence. Beyond some evidence of divergence, we observed a robust overlap in the pattern of activation evoked across both conditions in frontal areas including the insular cortex, as well as parietal and occipital areas, and for both attention and detection of these signals. Psycho-physiological interaction analysis revealed that right insular cortex connectivity was modulated by the conscious detection of cardiac compared to somatosensory sensations, with greater connectivity to occipito-parietal regions when attending to cardiac signals. Our findings speak in favour of the inherent convergence of bodily-related signals and move beyond the apparent antagonism between exteroception and interoception.
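Performance in a detection task like this is conventionally summarised with signal-detection sensitivity (d′). A minimal sketch, assuming the standard equal-variance Gaussian model and hypothetical hit and false-alarm rates (not the study's data):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

def d_prime(hit_rate, false_alarm_rate):
    # Separation, in z-units, between the signal and noise distributions
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for the two attention conditions
print(f"cardiac d' = {d_prime(0.70, 0.30):.2f}")
print(f"tactile d' = {d_prime(0.85, 0.15):.2f}")
```

Rates of exactly 0 or 1 would make the probit diverge; in practice they are corrected (e.g., with a log-linear adjustment) before computing d′.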
Affiliation(s)
- Aleksandra M Herman
- Lab of Action and Body, Department of Psychology, Royal Holloway, University of London, UK.
- Clare Palmer
- ABCD Coordinating Center, Center for Human Development (CHD), University of California, San Diego, USA
- Manos Tsakiris
- Lab of Action and Body, Department of Psychology, Royal Holloway, University of London, UK; The Warburg Institute, School of Advanced Study, University of London, UK; Department of Behavioural and Cognitive Sciences, Faculty of Humanities, Education and Social Sciences, University of Luxembourg, Luxembourg
15
Neuromorphic approach to tactile edge orientation estimation using spatiotemporal similarity. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.04.131.
16
Bogomolova K, van der Ham IJM, Dankbaar MEW, van den Broek WW, Hovius SER, van der Hage JA, Hierck BP. The Effect of Stereoscopic Augmented Reality Visualization on Learning Anatomy and the Modifying Effect of Visual-Spatial Abilities: A Double-Center Randomized Controlled Trial. Anat Sci Educ 2020;13:558-567. PMID: 31887792. DOI: 10.1002/ase.1941.
Abstract
Monoscopically projected three-dimensional (3D) visualization technology may have significant disadvantages for students with lower visual-spatial abilities despite its overall effectiveness in teaching anatomy. Previous research suggests that stereopsis may facilitate a better comprehension of anatomical knowledge. This study evaluated the educational effectiveness of stereoscopic augmented reality (AR) visualization and the modifying effect of visual-spatial abilities on learning. In a double-center randomized controlled trial, first- and second-year (bio)medical undergraduates studied lower limb anatomy with stereoscopic 3D AR model (n = 20), monoscopic 3D desktop model (n = 20), or two-dimensional (2D) anatomical atlas (n = 18). Visual-spatial abilities were tested with Mental Rotation Test (MRT), Paper Folding Test (PFT), and Mechanical Reasoning (MR) Test. Anatomical knowledge was assessed by the validated 30-item paper posttest. The overall posttest scores in the stereoscopic 3D AR group (47.8%) were similar to those in the monoscopic 3D desktop group (38.5%; P = 0.240) and the 2D anatomical atlas group (50.9%; P = 1.00). When stratified by visual-spatial abilities test scores, students with lower MRT scores achieved higher posttest scores in the stereoscopic 3D AR group (49.2%) as compared to the monoscopic 3D desktop group (33.4%; P = 0.015) and similar to the scores in the 2D group (46.4%; P = 0.99). Participants with higher MRT scores performed equally well in all conditions. It is instrumental to consider an aptitude-treatment interaction caused by visual-spatial abilities when designing research into 3D learning. Further research is needed to identify contributing features and the most effective way of introducing this technology into current educational programs.
Affiliation(s)
- Katerina Bogomolova
- Department of Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Mary E W Dankbaar
- Institute for Medical Education Research Rotterdam, Rotterdam Erasmus University Medical Center, Rotterdam, The Netherlands
- Walter W van den Broek
- Institute for Medical Education Research Rotterdam, Rotterdam Erasmus University Medical Center, Rotterdam, The Netherlands
- Steven E R Hovius
- Department of Plastic and Reconstructive Surgery and Hand Surgery, Rotterdam Erasmus University Medical Center, Rotterdam, The Netherlands
- Department of Plastic and Reconstructive Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Jos A van der Hage
- Department of Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Beerend P Hierck
- Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Department of Anatomy and Embryology, Leiden University Medical Center, Leiden, The Netherlands
- Centre for Innovation, Leiden University, The Hague, The Netherlands
- Leiden Teachers' Academy, Leiden University, Leiden, The Netherlands
17
Shapiro L, Bell K, Dhas K, Branson T, Louw G, Keenan ID. Focused Multisensory Anatomy Observation and Drawing for Enhancing Social Learning and Three-Dimensional Spatial Understanding. Anat Sci Educ 2020; 13:488-503. [PMID: 31705741] [DOI: 10.1002/ase.1929]
Abstract
The concept that multisensory observation and drawing can be effective for enhancing anatomy learning is supported by pedagogic research and theory, and theories of drawing. A haptico-visual observation and drawing (HVOD) process has been previously introduced to support understanding of the three-dimensional (3D) spatial form of anatomical structures. The HVOD process involves exploration of 3D anatomy with the combined use of touch and sight, and the simultaneous act of making graphite marks on paper which correspond to the anatomy under observation. Findings from a previous study suggest that HVOD can increase perceptual understanding of anatomy through memorization and recall of the 3D form of observed structures. Here, additional pedagogic and cognitive underpinnings are presented to further demonstrate how and why HVOD can be effective for anatomy learning. Delivery of a HVOD workshop is described as a detailed guide for instructors, and themes arising from a phenomenological study of educator experiences of the HVOD process are presented. Findings indicate that HVOD can provide an engaging approach for the spatial exploration of anatomy within a supportive social learning environment, but also requires modification for effective curricular integration. Consequently, based on the most effective research-informed, theoretical, and logistical elements of art-based approaches in anatomy learning, including the framework provided by the observe-reflect-draw-edit-repeat (ORDER) method, an optimized "ORDER Touch" observation and drawing process has been developed. This is with the aim of providing a widely accessible resource for supporting social learning and 3D spatial understanding of anatomy, in addition to improving specific anatomical knowledge.
Affiliation(s)
- Leonard Shapiro
- Department of Human Biology, University of Cape Town, Cape Town, Republic of South Africa
- Kathryn Bell
- School of Medical Education, Newcastle University, Newcastle upon Tyne, United Kingdom
- Acute Medical Unit, James Cook University Hospital, Middlesbrough, United Kingdom
- Kallpana Dhas
- School of Medical Education, Newcastle University, Newcastle upon Tyne, United Kingdom
- Toby Branson
- Department of Health and Medical Sciences, Adelaide Medical School, University of Adelaide, Adelaide, South Australia, Australia
- Graham Louw
- Department of Human Biology, University of Cape Town, Cape Town, Republic of South Africa
- Iain D Keenan
- School of Medical Education, Newcastle University, Newcastle upon Tyne, United Kingdom
18
Hutmacher F. Why Is There So Much More Research on Vision Than on Any Other Sensory Modality? Front Psychol 2019; 10:2246. [PMID: 31636589] [PMCID: PMC6787282] [DOI: 10.3389/fpsyg.2019.02246]
Abstract
Why is there so much more research on vision than on any other sensory modality? There is a seemingly easy answer to this question: It is because vision is our most important and most complex sense. Although there are arguments in favor of this explanation, it can be challenged in two ways: by showing that the arguments regarding the importance and complexity of vision are debatable and by demonstrating that there are other aspects that need to be taken into account. Here, I argue that the explanation is debatable, as there are various ways of defining “importance” and “complexity” and, as there is no clear consensus that vision is indeed the most important and most complex of our senses. Hence, I propose two additional explanations: According to the methodological-structural explanation, there is more research on vision because the available, present-day technology is better suited for studying vision than for studying other modalities – an advantage which most likely is the result of an initial bias toward vision, which reinforces itself. Possible reasons for such an initial bias are discussed. The cultural explanation emphasizes that the dominance of the visual is not an unchangeable constant, but rather the result of the way our societies are designed and thus heavily influenced by human decision-making. As it turns out, there is no universal hierarchy of the senses, but great historical and cross-cultural variation. Realizing that the dominance of the visual is socially and culturally reinforced and not simply a law of nature, gives us the opportunity to take a step back and to think about the kind of sensory environments we want to create and about the kinds of theories that need to be developed in research.
Affiliation(s)
- Fabian Hutmacher
- Department of Psychology, University of Regensburg, Regensburg, Germany
19
Liu Y(A), Jiang Z(J), Chan HC. Touching Products Virtually: Facilitating Consumer Mental Imagery with Gesture Control and Visual Presentation. J Manage Inform Syst 2019. [DOI: 10.1080/07421222.2019.1628901]
20
Reid S, Shapiro L, Louw G. How Haptics and Drawing Enhance the Learning of Anatomy. Anat Sci Educ 2019; 12:164-172. [PMID: 30107081] [DOI: 10.1002/ase.1807]
Abstract
Students' engagement with two-dimensional (2D) representations as opposed to three-dimensional (3D) representations of anatomy such as in dissection, is significant in terms of the depth of their comprehension. This qualitative study aimed to understand how students learned anatomy using observational and drawing activities that included touch, called haptics. Five volunteer second year medical students at the University of Cape Town participated in a six-day educational intervention in which a novel "haptico-visual observation and drawing" (HVOD) method was employed. Data were collected through individual interviews as well as a focus group discussion. The HVOD method was successfully applied by all the participants, who reported an improvement of their cognitive understanding and memorization of the 3D form of the anatomical part. All the five participants described the development of a "mental picture" of the object as being central to "deep learning." The use of the haptic senses coupled with the simultaneous act of drawing enrolled sources of information that were reported by the participants to have enabled better memorization. We postulate that the more sources of information about an object, the greater degree of complexity could be appreciated, and therefore the more clearly it could be captured and memorized. The inclusion of haptics has implications for cadaveric dissection versus non-cadaveric forms of learning. This study was limited by its sample size as well as the bias and position of the researchers, but the sample of five produced a sufficient amount of data to generate a conceptual model and hypothesis.
Affiliation(s)
- Stephen Reid
- Primary Health Care Directorate, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa
- Leonard Shapiro
- Primary Health Care Directorate, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa
- Graham Louw
- Division of Clinical Anatomy and Biological Anthropology, Department of Human Biology, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa
21
Tactile Perception for Stroke Induce Changes in Electroencephalography. Hong Kong J Occup Ther 2016; 28:1-6. [PMID: 30186061] [PMCID: PMC6091988] [DOI: 10.1016/j.hkjot.2016.10.001]
Abstract
Objective/Background: Tactile perception is a basic way to obtain and evaluate information about an object. The purpose of this study was to examine the effects of tactile perception on brain activation using two different tactile explorations, passive and active touches, in individuals with chronic hemiparetic stroke. Methods: Twenty patients who were diagnosed with stroke (8 right brain damaged, 12 left brain damaged) participated in this study. The tactile perception was conducted using passive and active explorations in a sitting position. To determine the neurological changes in the brain, this study measured the brain waves of the participants using electroencephalography (EEG). Results: The relative power of the sensory motor rhythm on the right prefrontal lobe and right parietal lobe was significantly greater during the active tactile exploration compared to the relative power during the passive exploration in the left damaged hemisphere. Most of the measured brain areas showed nonsignificantly higher relative power of the sensory motor rhythm during the active tactile exploration, regardless of which hemisphere was damaged. Conclusion: The results of this study provided neurophysiological evidence on tactile perception in individuals with chronic stroke. Occupational therapists should consider active tactile exploration as a useful modality for occupational performance in rehabilitation training.
22
Papale P, Chiesi L, Rampinini AC, Pietrini P, Ricciardi E. When Neuroscience 'Touches' Architecture: From Hapticity to a Supramodal Functioning of the Human Brain. Front Psychol 2016; 7:866. [PMID: 27375542] [PMCID: PMC4899444] [DOI: 10.3389/fpsyg.2016.00866]
Abstract
In the last decades, the rapid growth of functional brain imaging methodologies allowed cognitive neuroscience to address open questions in philosophy and social sciences. At the same time, novel insights from cognitive neuroscience research have begun to influence various disciplines, leading to a turn to cognition and emotion in the fields of planning and architectural design. Since 2003, the Academy of Neuroscience for Architecture has been supporting 'neuro-architecture' as a way to connect neuroscience and the study of behavioral responses to the built environment. Among the many topics related to multisensory perceptual integration and embodiment, the concept of hapticity was recently introduced, suggesting a pivotal role of tactile perception and haptic imagery in architectural appraisal. Arguments have thus arisen in favor of the existence of shared cognitive foundations between hapticity and the supramodal functional architecture of the human brain. Precisely, supramodality refers to the functional feature of defined brain regions to process and represent specific information content in a more abstract way, independently of the sensory modality conveying such information to the brain. Here, we highlight some commonalities and differences between the concepts of hapticity and supramodality according to the distinctive perspectives of architecture and cognitive neuroscience. This comparison and connection between these two different approaches may lead to novel observations in regard to people-environment relationships, and even provide empirical foundations for a renewed evidence-based design theory.
Affiliation(s)
- Paolo Papale
- Department of Engineering and Architecture, University of Trieste, Trieste, Italy
- Leonardo Chiesi
- Citylab – Laboratory of Social Research on Design, Architecture and Beyond, Department of Political and Social Sciences, School of Architecture, University of Florence, Florence, Italy
- Alessandra C. Rampinini
- Department of Surgical, Medical, Molecular Pathology and Critical Area, University of Pisa, Pisa, Italy
- Emiliano Ricciardi
- Department of Surgical, Medical, Molecular Pathology and Critical Area, University of Pisa, Pisa, Italy
23
Yau JM, Kim SS, Thakur PH, Bensmaia SJ. Feeling form: the neural basis of haptic shape perception. J Neurophysiol 2016; 115:631-42. [PMID: 26581869] [PMCID: PMC4752307] [DOI: 10.1152/jn.00598.2015]
Abstract
The tactile perception of the shape of objects critically guides our ability to interact with them. In this review, we describe how shape information is processed as it ascends the somatosensory neuraxis of primates. At the somatosensory periphery, spatial form is represented in the spatial patterns of activation evoked across populations of mechanoreceptive afferents. In the cerebral cortex, neurons respond selectively to particular spatial features, like orientation and curvature. While feature selectivity of neurons in the earlier processing stages can be understood in terms of linear receptive field models, higher order somatosensory neurons exhibit nonlinear response properties that result in tuning for more complex geometrical features. In fact, tactile shape processing bears remarkable analogies to its visual counterpart and the two may rely on shared neural circuitry. Furthermore, one of the unique aspects of primate somatosensation is that it contains a deformable sensory sheet. Because the relative positions of cutaneous mechanoreceptors depend on the conformation of the hand, the haptic perception of three-dimensional objects requires the integration of cutaneous and proprioceptive signals, an integration that is observed throughout somatosensory cortex.
Affiliation(s)
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas;
- Sung Soo Kim
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia
- Sliman J Bensmaia
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois
24
Occelli V, Lacey S, Stephens C, John T, Sathian K. Haptic Object Recognition is View-Independent in Early Blind but not Sighted People. Perception 2015; 45:337-45. [PMID: 26562881] [DOI: 10.1177/0301006615614489]
Abstract
Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition, that is, recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared with the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar three-dimensional objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- Careese Stephens
- Department of Neurology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
- Thomas John
- Department of Neurology, Emory University, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
25
Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts. Atten Percept Psychophys 2014; 76:541-58. [PMID: 24197503] [DOI: 10.3758/s13414-013-0559-1]
Abstract
The limits of generalization of our 3-D shape recognition system to identifying objects by touch was investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9 % vs. 47 % errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32 % errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
26
Nurzaman SG, Culha U, Brodbeck L, Wang L, Iida F. Active sensing system with in situ adjustable sensor morphology. PLoS One 2013; 8:e84090. [PMID: 24416094] [PMCID: PMC3887119] [DOI: 10.1371/journal.pone.0084090]
Abstract
Background: Despite the widespread use of sensors in engineering systems like robots and automation systems, the common paradigm is to have fixed sensor morphology tailored to fulfill a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. In order to cope with the challenge, it is worthy of note that biological systems show the importance of suitable sensor morphology and active sensing capability to handle different kinds of sensing tasks with particular requirements. Methodology: This paper presents a robotics active sensing system which is able to adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use thermoplastic adhesive material, i.e. Hot Melt Adhesive (HMA). It will be shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach and detach mechanical structures with a variety of shape and size to the robot end effector for sensing purposes. Via active sensing capability, the robotic system utilizes the structure to physically probe an unknown target object with suitable motion and transduce the arising physical stimuli into information usable by a camera as its only built-in sensor. Conclusions/Significance: The efficacy of the proposed system is verified based on two results. Firstly, it is confirmed that suitable sensor morphology and active sensing capability enables the system to sense different physical quantities, i.e. softness and temperature, with desirable sensing characteristics. Secondly, given tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to autonomously accomplish them. The way the results motivate new research directions which focus on in situ adjustment of sensor morphology will also be discussed.
Affiliation(s)
- Surya G Nurzaman
- Bio-Inspired Robotics Laboratory, Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland
- Utku Culha
- Bio-Inspired Robotics Laboratory, Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland
- Luzius Brodbeck
- Bio-Inspired Robotics Laboratory, Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland
- Liyu Wang
- Bio-Inspired Robotics Laboratory, Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland
- Fumiya Iida
- Bio-Inspired Robotics Laboratory, Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland
27
Horev G, Saig A, Knutsen PM, Pietr M, Yu C, Ahissar E. Motor-sensory convergence in object localization: a comparative study in rats and humans. Philos Trans R Soc Lond B Biol Sci 2011; 366:3070-6. [PMID: 21969688] [DOI: 10.1098/rstb.2011.0157]
Abstract
In order to identify basic aspects in the process of tactile perception, we trained rats and humans in similar object localization tasks and compared the strategies used by the two species. We found that rats integrated temporally related sensory inputs ('temporal inputs') from early whisk cycles with spatially related inputs ('spatial inputs') to align their whiskers with the objects; their perceptual reports appeared to be based primarily on this spatial alignment. In a similar manner, human subjects also integrated temporal and spatial inputs, but relied mainly on temporal inputs for object localization. These results suggest that during tactile object localization, an iterative motor-sensory process gradually converges on a stable percept of object location in both species.
Affiliation(s)
- Guy Horev
- The Department of Neurobiology, The Weizmann Institute of Science, Rehovot, Israel
28
Abstract
Active sensing systems are purposive and information-seeking sensory systems. Active sensing usually entails sensor movement, but more fundamentally, it involves control of the sensor apparatus, in whatever manner best suits the task, so as to maximize information gain. In animals, active sensing is perhaps most evident in the modality of touch. In this theme issue, we look at active touch across a broad range of species from insects, terrestrial and marine mammals, through to humans. In addition to analysing natural touch, we also consider how engineering is beginning to exploit physical analogues of these biological systems so as to endow robots with rich tactile sensing capabilities. The different contributions show not only the varieties of active touch--antennae, whiskers and fingertips--but also their commonalities. They explore how active touch sensing has evolved in different animal lineages, how it serves to provide rapid and reliable cues for controlling ongoing behaviour, and even how it can disintegrate when our brains begin to fail. They demonstrate that research on active touch offers a means both to understand this essential and primary sensory modality, and to investigate how animals, including man, combine movement with sensing so as to make sense of, and act effectively in, the world.
Affiliation(s)
- Tony J Prescott
- University of Sheffield-Psychology, Western Bank, Sheffield S10 2TN, UK.
29
Brecht M, Naumann R, Anjum F, Wolfe J, Munz M, Mende C, Roth-Alpermann C. The neurobiology of Etruscan shrew active touch. Philos Trans R Soc Lond B Biol Sci 2011; 366:3026-36. [PMID: 21969684] [PMCID: PMC3172601] [DOI: 10.1098/rstb.2011.0160]
Abstract
The Etruscan shrew, Suncus etruscus, is not only the smallest terrestrial mammal, but also one of the fastest and most tactile hunters described to date. The shrew's skeletal muscle consists entirely of fast-twitch types and lacks slow fibres. Etruscan shrews detect, overwhelm, and kill insect prey in large numbers in darkness. The cricket prey is exquisitely mechanosensitive and fast-moving, and is as big as the shrew itself. Experiments with prey replica show that shape cues are both necessary and sufficient for evoking attacks. Shrew attacks are whisker guided by motion- and size-invariant Gestalt-like prey representations. Shrews often attack their prey prior to any signs of evasive manoeuvres. Shrews whisk at frequencies of approximately 14 Hz and can react with latencies as short as 25-30 ms to prey movement. The speed of attacks suggests that shrews identify and classify prey with a single touch. Large parts of the shrew's brain respond to vibrissal touch, which is represented in at least four cortical areas comprising collectively about a third of the cortical volume. Etruscan shrews can enter a torpid state and reduce their body temperature; we observed that cortical response latencies become two to three times longer when body temperature drops from 36°C to 24°C, suggesting that endothermy contributes to the animal's high-speed sensorimotor performance. We argue that small size, high-speed behaviour and extreme dependence on touch are not coincidental, but reflect an evolutionary strategy, in which the metabolic costs of small body size are outweighed by the advantages of being a short-range high-speed touch and kill predator.
Affiliation(s)
- Michael Brecht
- BCCN, Humboldt University Berlin, Philippstrasse 13, House 6, 10115 Berlin, Germany.
30
Bicchi A, Gabiccini M, Santello M. Modelling natural and artificial hands with synergies. Philos Trans R Soc Lond B Biol Sci 2011; 366:3153-61. [PMID: 21969697] [PMCID: PMC3172595] [DOI: 10.1098/rstb.2011.0152]
Abstract
We report on recent work in modelling the process of grasping and active touch by natural and artificial hands. Starting from observations made in human hands about the correlation of degrees of freedom in patterns of more frequent use (postural synergies), we consider the implications of a geometrical model accounting for such data, which is applicable to the pre-grasping phase occurring when shaping the hand before actual contact with the grasped object. To extend applicability of the synergy model to study force distribution in the actual grasp, we introduce a modified model including the mechanical compliance of the hand's musculotendinous system. Numerical results obtained by this model indicate that the same principal synergies observed from pre-grasp postural data are also fundamental in achieving proper grasp force distribution. To illustrate the concept of synergies in the dual domain of haptic sensing, we provide a review of models of how the complexity and heterogeneity of sensory information from touch can be harnessed in simplified, tractable abstractions. These abstractions are amenable to fast processing to enable quick reflexes as well as elaboration of high-level percepts. Applications of the synergy model to the design and control of artificial hands and tactile sensors are illustrated.
Affiliation(s)
- Antonio Bicchi
- Interdepartmental Research Center E. Piaggio, University of Pisa, Via Diotisalvi, 2, 56126 Pisa, Italy.