1
Szubielska M, Szewczyk M, Augustynowicz P, Kędziora W, Möhring W. Effects of scaling direction on adults' spatial scaling in different perceptual domains. Sci Rep 2023; 13:14690. PMID: 37673909; PMCID: PMC10482972; DOI: 10.1038/s41598-023-41533-3. Received March 20, 2023; accepted August 28, 2023. Open access.
Abstract
The current study investigated adults' strategies of spatial scaling from memory in three perceptual conditions (visual, haptic, and visuo-haptic) when scaling up and down. Following previous research, we predicted the use of mental transformation strategies. In all conditions, participants (N = 90, aged 19-28 years) were presented with tactile, colored graphics that allowed them to explore spatial information both visually and haptically. Participants were first asked to encode a map including a target. They were then instructed to place a response object at the same place on an empty, constant-sized referent space. Maps came in five sizes, resulting in five scaling factors (3:1, 2:1, 1:1, 1:2, 1:3). This manipulation also allowed us to assess whether effects of scaling direction on adults' responses were symmetric. Response times and absolute errors served as dependent variables. In line with our hypotheses, changes in these dependent variables were best explained by a quadratic function, suggesting the use of mental transformation strategies for spatial scaling. There were no differences between perceptual conditions concerning the influence of scaling factor on the dependent variables. Results revealed symmetric effects of scaling direction on participants' accuracy, whereas there were small differences in response times. Our findings highlight the use of mental transformation strategies in adults' spatial scaling, irrespective of perceptual modality and scaling direction.
Affiliation(s)
- Magdalena Szubielska
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland.
- Marta Szewczyk
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland.
- Paweł Augustynowicz
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland.
- Wenke Möhring
- Faculty of Psychology, University of Basel, Basel, Switzerland.
- Department of Educational and Health Psychology, University of Education Schwäbisch Gmünd, Schwäbisch Gmünd, Germany.
2
Macklin AS, Yau JM, Fischer-Baum S, O'Malley MK. Representational Similarity Analysis for Tracking Neural Correlates of Haptic Learning on a Multimodal Device. IEEE Transactions on Haptics 2023; 16:424-435. PMID: 37556331; PMCID: PMC10605963; DOI: 10.1109/toh.2023.3303838.
Abstract
A goal of wearable haptic devices has been to enable haptic communication, where individuals learn to map information typically processed visually or aurally onto haptic cues via a process of cross-modal associative learning. Neural correlates have been used to evaluate haptic perception and may provide a more objective approach to assessing association performance than the more commonly used behavioral measures. In this article, we examine Representational Similarity Analysis (RSA) of electroencephalography (EEG) as a framework for evaluating how the neural representation of multifeatured haptic cues changes with association training. We focus on the first phase of cross-modal associative learning: perception of multimodal cues. A participant learned to map phonemes to multimodal haptic cues, and EEG data were acquired before and after training to create neural representational spaces that were compared to theoretical models. Our perceptual model correlated better with the neural representational space before training, whereas the feature-based model correlated better with the post-training data. These results suggest that training may lead to a sharpening of the sensory response to haptic cues. Our results show promise that an EEG-RSA approach can capture a shift in the representational space of cues as a means of tracking haptic learning.
3
Schott N, Haibach-Beach P, Knöpfle I, Neuberger V. The effects of visual impairment on motor imagery in children and adolescents. Research in Developmental Disabilities 2021; 109:103835. PMID: 33477083; DOI: 10.1016/j.ridd.2020.103835. Received August 28, 2020; revised December 3, 2020; accepted December 13, 2020.
Abstract
BACKGROUND: While the development of motor imagery (MI) has been extensively studied in sighted children, it is not clear how children with different severities of visual impairment (VI) represent motor actions using the motor representations constructed through the remaining intact senses, especially touch. AIMS: Mental chronometry and the generation/manipulation of MI were examined in children with and without VI. METHODS AND PROCEDURES: Participants were 64 youth with and without VI (33 without visual impairments, 14 with moderate-to-severe impairments, and 17 blind). Mental chronometry was assessed with the imagined Timed-Up-and-Go Test (iTUG), and generation/manipulation of MI with the Controllability-of-Motor-Imagery Test (CMI). In addition, the effects of working memory performance (Letter-Number Sequencing) and physical activity on MI were evaluated. RESULTS: Mental durations for the iTUG were significantly shorter than the active durations. Results also provided evidence of better haptic representation than motor representation in all participants; however, controls outperformed children with visual impairments and blindness (CVIB) only in the CMI-regeneration condition. Exercise and working memory performance contributed significantly to only a few MI tests. CONCLUSION AND IMPLICATIONS: Our results suggest a possible relationship between motor performance, body representation deficits, and visual impairment, which needs to be addressed in the evaluation and treatment of CVIB. New rehabilitation interventions that focus on strengthening adequate body perception and representation should be designed and tested to promote motor development in CVIB.
Affiliation(s)
- Nadja Schott
- Department of Sport Psychology & Human Movement Science, Institute for Sport and Exercise Science, University of Stuttgart, Germany.
- Pamela Haibach-Beach
- Department of Kinesiology, Sport Studies, and Physical Education, The College at Brockport-State University of New York, USA.
- Insa Knöpfle
- Department of Sport Psychology & Human Movement Science, Institute for Sport and Exercise Science, University of Stuttgart, Germany.
- Verena Neuberger
- Department of Sport Psychology & Human Movement Science, Institute for Sport and Exercise Science, University of Stuttgart, Germany.
4
Ferreira CD, Gadelha MJN, Fonsêca ÉKGD, Silva JSCD, Torro N, Fernández-Calvo B. Long-term memory of haptic and visual information in older adults. Aging, Neuropsychology, and Cognition 2020; 28:65-77. PMID: 31891286; DOI: 10.1080/13825585.2019.1710450.
Abstract
The present study examined haptic and visual memory capacity for familiar objects using an intentional free-recall task with three time intervals in a sample of 78 healthy older adults without cognitive impairment. A wooden box and a turntable were used to present the haptic and visual stimuli, respectively. The procedure consisted of two phases: a study phase, in which the stimuli were presented, and a test phase (free-recall task) performed after one hour, one day, or one week. An analysis of covariance (ANCOVA) indicated a main effect only for time interval (F(2,71) = 12.511, p = .001, η² = 0.261), with a lower recall index for the one-week interval compared to the other intervals. We conclude that memory capacity is similar between the haptic and visual systems for long retrieval intervals (hours to days).
Affiliation(s)
- Cyntia Diógenes Ferreira
- Laboratory of Cognitive Science and Perception, Department of Psychology, Federal University of Paraiba, João Pessoa, Brazil.
- Nelson Torro
- Laboratory of Cognitive Science and Perception, Department of Psychology, Federal University of Paraiba, João Pessoa, Brazil.
- Bernardino Fernández-Calvo
- Laboratory of Aging and Neuropsychological Disorder, Department of Psychology, Federal University of Paraiba, João Pessoa, Brazil.
5
Abstract
Regularities like symmetry (mirror reflection) and repetition (translation) play an important role in both visual and haptic (active touch) shape perception. Altering figure-ground factors to change what is perceived as an object influences regularity detection. For vision, symmetry is usually easier to detect within one object, whereas repetition is easier to detect across two objects. For haptics, we have not found this interaction between regularity type and objectness (Cecchetto & Lawson, Journal of Experimental Psychology: Human Perception and Performance, 43, 103-125, 2017; Lawson, Ajvani, & Cecchetto, Experimental Psychology, 63, 197-214, 2016). However, our studies used repetition stimuli with mismatched concavities, convexities, and luminance, and so had mismatched contour polarities. Such stimuli may be processed differently to stimuli with matching contour polarities. We investigated this possibility. For haptics, speeded symmetry and repetition detection for novel, planar shapes was similar. Performance deteriorated strikingly if contour polarity mismatched (keeping objectness constant), whilst there was a modest disadvantage for between-2objects:facing-sides compared to within-1object:outer-sides comparisons (keeping contour polarity constant). For the same task for vision, symmetry detection was similar to haptics (strong costs for mismatched contour polarity, weaker costs for between-2objects:facing-sides comparisons), but repetition detection was very different (weak costs for mismatched contour polarity, strong benefits for between-2objects:facing-sides comparisons). Thus, objectness was less influential than contour polarity for both haptic and visual symmetry detection, and for haptic repetition detection. However, for visual repetition detection, objectness effects reversed direction (within-1object:outer-sides comparisons were harder) and were stronger than contour polarity effects. This pattern of results suggests that regularity detection reflects information extraction as well as regularity distributions in the physical world.
6
Cecchetto S, Lawson R. Simultaneous Sketching Aids the Haptic Identification of Raised Line Drawings. Perception 2015; 44:743-54. PMID: 26541052; DOI: 10.1177/0301006615594695.
Abstract
Haptically identifying raised line drawings is difficult. We investigated whether a major component of this difficulty lies in acquiring, integrating, and maintaining shape information from touch. Wijntjes, van Lienen, Verstijnen, and Kappers reported that drawings which participants had failed to identify by touch alone could often subsequently be named if they were sketched. Thus, people sometimes needed to externalize haptically acquired information by making a sketch in order to be able to use it. We extended Wijntjes et al.'s task and found that sketching while touching improved drawing identification even more than sketching after touching, but only if people could see their sketches. Our results suggest that the slow, serial nature of information acquisition seriously hampers the haptic identification of raised line drawings relative to visually identifying line drawings. Simultaneous sketching may aid identification by reducing the burden on working memory and by helping to guide haptic exploration. This conclusion is consistent with the finding reported by Lawson and Bracken that 3-D objects are much easier to identify haptically than raised line drawings since, unlike for vision, simultaneously extracting global shape information is much easier haptically for 3-D stimuli than for line drawings.
7
Inferring the depth of 3-D objects from tactile spatial information. Atten Percept Psychophys 2015; 77:1411-22. PMID: 25762304; DOI: 10.3758/s13414-015-0878-5.
Abstract
Four psychophysical experiments were conducted to examine the relation between tactile spatial information and the estimated depth of partially touched 3-D objects. Human participants touched unseen, tactile grating patterns with their hand while keeping the hand shape flat. Experiment 1, by means of a production task, showed that the estimated depth of the concave part below the touched grating was well correlated with the separation between the elements of the grating, but not with the overall size of the grating, nor with the local structure of the touched parts. Experiments 2 and 3, by means of a haptic working memory task, showed that the remembered depth of a target surface was biased toward the estimated bottom position of a tactile grating distractor. Experiment 4, by means of a discrimination task, revealed that tactile grating patterns influenced speeded judgments about visual 3-D shapes. These results suggest that the haptic system uses heuristics based on spatial information to infer the depth of an untouched part of a 3-D object.
8
Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts. Atten Percept Psychophys 2014; 76:541-58. PMID: 24197503; DOI: 10.3758/s13414-013-0559-1.
Abstract
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and with untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9 % vs. 47 % errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32 % errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
9
Abstract
We investigated whether the relative position of objects and the body would influence haptic recognition. People felt objects on the right or left side of their body midline, using their right hand. Their head was turned towards or away from the object, and they could not see their hands or the object. People were better at naming 2-D raised line drawings and 3-D small-scale models of objects and also real, everyday objects when they looked towards them. However, this head-towards benefit was reliable only when their right hand crossed their body midline to feel objects on their left side. Thus, haptic object recognition was influenced by people's head position, although vision of their hand and the object was blocked. This benefit of turning the head towards the object being explored suggests that proprioceptive and haptic inputs are remapped into an external coordinate system and that this remapping is harder when the body is in an unusual position (with the hand crossing the body midline and the head turned away from the hand). The results indicate that haptic processes align sensory inputs from the hand and head even though either hand-centered or object-centered coordinate systems should suffice for haptic object recognition.
10
Martinovic J, Lawson R, Craddock M. Time course of information processing in visual and haptic object classification. Front Hum Neurosci 2012; 6:49. PMID: 22470327; PMCID: PMC3311268; DOI: 10.3389/fnhum.2012.00049. Received November 24, 2011; accepted February 24, 2012. Open access.
Abstract
Vision identifies objects rapidly and efficiently. In contrast, object recognition by touch is much slower. Furthermore, haptics usually serially accumulates information from different parts of objects, whereas vision typically processes object information in parallel. Is haptic object identification slower simply due to sequential information acquisition and the resulting memory load or due to more fundamental processing differences between the senses? To compare the time course of visual and haptic object recognition, we slowed visual processing using a novel, restricted viewing technique. In an electroencephalographic (EEG) experiment, participants discriminated familiar, nameable from unfamiliar, unnamable objects both visually and haptically. Analyses focused on the evoked and total fronto-central theta-band (5-7 Hz; a marker of working memory) and the occipital upper alpha-band (10-12 Hz; a marker of perceptual processing) locked to the onset of classification. Decreases in total upper alpha-band activity for haptic identification of objects indicate a likely processing role of multisensory extrastriate areas. Long-latency modulations of alpha-band activity differentiated between familiar and unfamiliar objects in haptics but not in vision. In contrast, theta-band activity showed a general increase over time for the slowed-down visual recognition task only. We conclude that haptic object recognition relies on common representations with vision but also that there are fundamental differences between the senses that do not merely arise from differences in their speed of processing.
Affiliation(s)
- Rebecca Lawson
- School of Psychology, University of Liverpool, Liverpool, UK.
- Matt Craddock
- School of Psychology, University of Liverpool, Liverpool, UK.
- Institut für Psychologie, Universität Leipzig, Leipzig, Germany.
11
Craddock M, Martinovic J, Lawson R. An advantage for active versus passive aperture-viewing in visual object recognition. Perception 2012; 40:1154-63. PMID: 22308886; DOI: 10.1068/p6974.
Abstract
In aperture viewing, the field of view is restricted so that only a small part of an image is visible, enforcing serial exploration of different regions of an object in order to recognise it successfully. Previous studies have used either active control or passive observation of the viewing aperture, but have not contrasted the two modes. Active viewing has previously been shown to confer an advantage in visual object recognition. We displayed objects through a small moveable aperture and tested whether people's ability to identify the images as familiar or novel objects was influenced by how the window location was controlled. Participants recognised objects faster when they actively controlled the window using their finger on a touch-screen than when they passively observed the moving window. There was no difference between passively re-viewing one's own window movements, as generated in a previous block of trials, and viewing window movements that had been generated by other participants. These results contrast with those from comparable studies of haptic object recognition, which have found a benefit for passive over active stimulus exploration, but accord with findings of an advantage of active viewing in visual object recognition.
Affiliation(s)
- Matt Craddock
- Institut für Psychologie I, Universität Leipzig, Seeburgstrasse 14-20, 04103 Leipzig, Germany.