1
Chennaz L, Mascle C, Baltenneck N, Baudouin JY, Picard D, Gentaz E, Valente D. Recognition of facial expressions of emotions in tactile drawings by blind children, children with low vision and sighted children. Acta Psychol (Amst) 2024; 247:104330. [PMID: 38852319] [DOI: 10.1016/j.actpsy.2024.104330]
Abstract
In the context of blindness, studies on the recognition of facial expressions of emotions by touch are essential for defining compensatory touch abilities and for creating adapted tools about emotions. This study is the first to examine the effect of visual experience on the recognition of tactile drawings of facial expressions of emotions by children with different visual experiences. To this end, we compared the recognition rates of tactile drawings of emotions between blind children, children with low vision and sighted children aged 6-12 years. Results revealed no effect of visual experience on recognition rates. However, an effect of emotions and an interaction effect between emotions and visual experience were found: while all children had a low average recognition rate, the drawings of fear, anger and disgust were particularly poorly recognized. Moreover, sighted children were significantly better at recognizing the drawings of surprise and sadness than the blind children, who showed a high recognition rate only for joy. The results of this study support the importance of developing emotion tools that can be understood by children with different visual experiences.
Affiliation(s)
- Lola Chennaz
- Laboratory of Sensory-motor Affective and Social Development (SMAS), Faculty of Psychology and Educational Sciences (FAPSE), University of Geneva, Switzerland.
- Carolane Mascle
- Inter-university Laboratory for Education and Communication Sciences (LISEC), University of Strasbourg, France.
- Nicolas Baltenneck
- Laboratory of Development, Individual, Process, Disability, Education (UR DIPHE), University Lumière Lyon 2, France.
- Jean-Yves Baudouin
- Laboratory of Development, Individual, Process, Disability, Education (UR DIPHE), University Lumière Lyon 2, France.
- Edouard Gentaz
- Laboratory of Sensory-motor Affective and Social Development (SMAS), Faculty of Psychology and Educational Sciences (FAPSE), University of Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Switzerland.
- Dannyelle Valente
- Laboratory of Sensory-motor Affective and Social Development (SMAS), Faculty of Psychology and Educational Sciences (FAPSE), University of Geneva, Switzerland; Laboratory of Development, Individual, Process, Disability, Education (UR DIPHE), University Lumière Lyon 2, France; Swiss Center for Affective Sciences, University of Geneva, Switzerland.
2
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. [PMID: 38394708] [PMCID: PMC10899073] [DOI: 10.1016/j.dcn.2024.101360]
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose the connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience and behavioral needs. Infant cortex is pluripotent; depending on experience, the same anatomical constraints develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China.
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
3
Diel A, Sato W, Hsu CT, Minato T. Differences in configural processing for human versus android dynamic facial expressions. Sci Rep 2023; 13:16952. [PMID: 37805572] [PMCID: PMC10560218] [DOI: 10.1038/s41598-023-44140-4]
Abstract
Humanlike androids can function as social agents in social situations and in experimental research. While some androids can imitate facial emotion expressions, it is unclear whether their expressions tap the same processing mechanisms used for human expressions, such as configural processing. In this study, the effects of global inversion and of asynchrony between facial features, as configuration manipulations, were compared across android and human dynamic emotion expressions. Seventy-five participants provided (1) angry and happy emotion recognition judgments and (2) arousal and valence ratings for upright or inverted, synchronous or asynchronous, android or human dynamic emotion expressions. Asynchrony significantly decreased all ratings (except valence in angry expressions) for all human expressions, but did not affect android expressions. Inversion did not affect any measure regardless of agent type. These results suggest that dynamic facial expressions are processed in a synchrony-based configural manner for humans, but not for androids.
Affiliation(s)
- Alexander Diel
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan.
- School of Psychology, Cardiff University, Cardiff, UK.
- Wataru Sato
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan.
- Chun-Ting Hsu
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan.
- Takashi Minato
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan.
4
Abstract
Facial expressions of emotion are nonverbal behaviors that allow us to interact efficiently in social life and respond to events affecting our welfare. This article reviews 21 studies, published between 1932 and 2015, examining the production of facial expressions of emotion by blind people. It particularly discusses the impact of visual experience on the development of this behavior from birth to adulthood. After discussing three methodological considerations, the review reveals that blind subjects demonstrate differing capacities for producing spontaneous expressions and voluntarily posed expressions. Seventeen studies provided evidence that blind and sighted people spontaneously produce the same pattern of facial expressions, even if some variations can be found, reflecting facial and body movements specific to blindness or differences in the intensity and control of emotions in some specific contexts. This suggests that lack of visual experience does not seem to have a major impact when this behavior is generated spontaneously in real emotional contexts. In contrast, eight studies examining voluntary expressions indicate that blind individuals have difficulty posing emotional expressions; the opportunity for prior visual observation seems to affect performance in this case. Finally, we discuss three new directions for research to provide additional and strong evidence for the debate regarding the innate or culture-constant learning character of the production of emotional facial expressions by blind individuals: the link between perception and production of facial expressions, the impact of display rules in the absence of vision, and the role of other channels in the expression of emotions in the context of blindness.
5
Klatzky RL, Lederman SJ. Haptic object perception: spatial dimensionality and relation to vision. Philos Trans R Soc Lond B Biol Sci 2012; 366:3097-105. [PMID: 21969691] [DOI: 10.1098/rstb.2011.0153]
Abstract
Enabled by the remarkable dexterity of the human hand, specialized haptic exploration is a hallmark of object perception by touch. Haptic exploration normally takes place in a spatial world that is three-dimensional; nevertheless, stimuli of reduced spatial dimensionality are also used to display spatial information. This paper examines the consequences of full (three-dimensional) versus reduced (two-dimensional) spatial dimensionality for object processing by touch, particularly in comparison with vision. We begin with perceptual recognition of common human-made artefacts, then extend our discussion of spatial dimensionality in touch and vision to include faces, drawing from research on haptic recognition of facial identity and emotional expressions. Faces have often been characterized as constituting a specialized input for human perception. We find that contrary to vision, haptic processing of common objects is impaired by reduced spatial dimensionality, whereas haptic face processing is not. We interpret these results in terms of fundamental differences in object perception across the modalities, particularly the special role of manual exploration in extracting a three-dimensional structure.
Affiliation(s)
- Roberta L Klatzky
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
6
Picard D, Jouffrais C, Lebaz S. Haptic recognition of emotions in raised-line drawings by congenitally blind and sighted adults. IEEE Trans Haptics 2011; 4:67-71. [PMID: 26962956] [DOI: 10.1109/toh.2010.58]
Abstract
Fifteen sighted and 15 congenitally blind adults were asked to classify raised-line pictures of emotional faces through haptics. Whereas accuracy did not vary significantly between the two groups, the blind adults were faster at the task. These results suggest that raised-line pictures of emotional faces are intelligible to blind adults.
7
Irrelevant visual faces influence haptic identification of facial expressions of emotion. Atten Percept Psychophys 2010; 73:521-30. [DOI: 10.3758/s13414-010-0038-x]
8
McGregor TA, Klatzky RL, Hamilton C, Lederman SJ. Haptic classification of facial identity in 2D displays: Configural versus feature-based processing. IEEE Trans Haptics 2010; 3:48-55. [PMID: 27788089] [DOI: 10.1109/toh.2009.49]
Abstract
Participants learned through feedback to haptically classify the identity of upright versus inverted versus scrambled faces depicted in simple 2D raised-line displays. We investigated whether identity classification would make use of a configural face representation, as is evidenced for vision and 3D haptic facial displays. Upright and scrambled faces produced equivalent accuracy, and both were identified more accurately than inverted faces. The mean magnitude of the haptic inversion effect for 2D facial identity was a sizable 26 percent, indicating that the upright orientation was 'privileged' in the haptic representations of facial identity in these 2D displays, as with other facial modalities. However, given the effect of scrambling, we conclude that configural processing was not employed; rather, only local information about the features was used, the features being treated as oriented objects within a body-centered frame of reference. The results indicate a fundamental difference between haptic identification of 2D facial depictions and 3D faces, paralleling a corresponding difference in recognition of nonface objects.
9
Abramowicz A, Klatzky RL, Lederman SJ. Learning and generalization in haptic classification of 2-D raised-line drawings of facial expressions of emotion by sighted and adventitiously blind observers. Perception 2010; 39:1261-75. [DOI: 10.1068/p6686]
Abstract
Sighted blindfolded individuals can successfully classify basic facial expressions of emotion (FEEs) by manually exploring simple 2-D raised-line drawings (Lederman et al 2008, IEEE Transactions on Haptics 1:27-38). The effect of training on classification accuracy was assessed in sixty sighted blindfolded participants (experiment 1) and three adventitiously blind participants (experiment 2). We further investigated whether the underlying learning process(es) constituted token-specific learning and/or generalization. A hybrid learning paradigm comprising pre/post and old/new test comparisons was used. For both participant groups, classification accuracy for old (ie trained) drawings markedly increased over study trials (mean improvement = 76% and 88%, respectively). Additionally, RT decreased by a mean of 30% for the sighted and 31% for the adventitiously blind. Learning was mostly token-specific, but some generalization was also observed in both groups. The sighted classified novel drawings of all six FEEs faster with training (mean RT decrease = 20%). Accuracy also improved significantly (mean improvement = 20%), but this improvement was restricted to two FEEs (anger and sadness). Two of three adventitiously blind participants classified new drawings more accurately (mean improvement = 30%); however, RTs for this group did not reflect generalization. Based on a limited number of blind subjects, our results tentatively suggest that adventitiously blind individuals learn to haptically classify FEEs as well as, or even better than, sighted persons.
Affiliation(s)
- Roberta L Klatzky
- Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA.