1
Xu S, Su YJ, Ehrlich JR, Song Q. Associations Between Self-Reported Visual Difficulty, Age of Onset, and Cognitive Function Trajectories Among Chinese Older Adults. J Aging Health 2025; 37:305-316. PMID: 38621713. DOI: 10.1177/08982643241247251.
Abstract
Objectives: This study examined the association between self-reported visual difficulty and age-related cognitive decline among older Chinese adults, and how the timing of visual difficulty onset shapes cognitive trajectories. Methods: Data were drawn from the 2011-2018 waves of the China Health and Retirement Longitudinal Study, involving 9974 respondents aged 60 years or older (mean age 65.44 years, range 60-101 years). Results: At baseline, 14.16% of respondents had self-reported visual difficulty. Growth curve models showed that older Chinese adults with visual difficulty experienced a faster decline in cognitive function than those without visual difficulty (β = -0.02, p < .01). Older adults whose visual difficulty began between 61 and 75 years of age had steeper cognitive declines than those with earlier or later onset (β = -0.05, p < .01). Discussion: Older adults with self-reported visual difficulty experience faster rates of cognitive decline. Future research should explore the factors that underlie the association between onset timing of visual difficulty and cognitive function.
Affiliation(s)
- Shu Xu
- Department of Gerontology, University of Massachusetts Boston, Boston, MA, USA
- Yan-Jhu Su
- Department of Gerontology, University of Massachusetts Boston, Boston, MA, USA
- Joshua R Ehrlich
- Department of Ophthalmology and Visual Sciences and Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
- Qian Song
- Department of Gerontology, University of Massachusetts Boston, Boston, MA, USA
2
Ben-David BM, Chebat DR, Icht M. "Love looks not with the eyes": supranormal processing of emotional speech in individuals with late-blindness versus preserved processing in individuals with congenital-blindness. Cogn Emot 2024; 38:1354-1367. PMID: 38785380. DOI: 10.1080/02699931.2024.2357656.
Abstract
Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or its lack thereof) in processing emotional speech. To test the effect of vision and early visual experience on the processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on identification and selective attention of semantic and prosodic spoken emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working memory): it was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for the processing of emotional speech, but early visual experience can improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of the deficiencies/enhancements of blindness.
Affiliation(s)
- Boaz M Ben-David
- Communication, Aging, and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), The Department of Psychology, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center (NARCA), Ariel University, Ariel, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
3
Coelho LA, Gonzalez CLR, Tammurello C, Campus C, Gori M. Hand and foot overestimation in visually impaired human adults. Neuroscience 2024; 563:74-83. PMID: 39521320. DOI: 10.1016/j.neuroscience.2024.10.055.
Abstract
Previous research has shown that visual impairment results in reduced auditory, tactile, and proprioceptive ability. One hypothesis is that these issues arise from inaccurate body representations, yet few studies have investigated metric body representations in a visually impaired population. We designed an ecologically valid behavioural task in which visually impaired adults haptically explored gloves or shoes of various sizes and were asked to indicate whether they perceived each clothing item as bigger than their own hand or foot. In post-hoc analyses we fit psychometric curves to the data to extract the point of subjective equality, and then compared the results to age- and sex-matched controls. We hypothesized that blind participants' body representations would be more distorted. Because previous research has shown that females are more likely to overestimate body size, we predicted sex differences in the sighted participants; because blind adults have no exposure to visual ideals of body size, we predicted no such differences among them. Our results showed that blind participants overestimated their hands and feet to a similar degree, whereas sighted controls overestimated their hands significantly more than their feet. Taken together, our results partially support our hypothesis and suggest that visual deprivation, even for short periods, results in hand size overestimation.
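The psychometric-curve step described in this abstract (recovering a point of subjective equality, PSE, from binary "bigger than my hand" judgments) can be illustrated with a toy sketch. This is not the authors' analysis code: the glove widths, response proportions, and grid ranges below are invented for illustration.

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function: P("bigger") for item size x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse(sizes, p_bigger):
    """Least-squares grid fit; returns (mu, sigma), where mu is the PSE:
    the size judged "bigger" half the time."""
    best = None
    for mu in [m / 10.0 for m in range(150, 251)]:      # candidate PSEs, 15-25 cm
        for sigma in [s / 10.0 for s in range(5, 51)]:  # candidate slopes
            err = sum((cum_gauss(x, mu, sigma) - p) ** 2
                      for x, p in zip(sizes, p_bigger))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]

# Invented data: glove widths (cm) and proportion of "bigger than my hand" responses.
sizes = [16, 17, 18, 19, 20, 21, 22]
p_bigger = [0.03, 0.12, 0.30, 0.55, 0.79, 0.93, 0.98]

pse, slope = fit_pse(sizes, p_bigger)
# If the participant's actual hand width were 18 cm, a PSE above 18 would indicate
# overestimation: a glove must exceed the real hand size before it feels bigger.
```

On these made-up data the fitted PSE lands just below 19 cm, so relative to a hypothetical 18 cm hand the observer would be overestimating, which is the direction of effect the study reports.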
Affiliation(s)
- Lara A Coelho
- Unit for visually impaired (UVIP), Italian Institute of Technology, Genova, Italy
- Claudia L R Gonzalez
- The Brain in Action Laboratory, Faculty of Kinesiology, University of Lethbridge, Canada
- Carolina Tammurello
- Unit for visually impaired (UVIP), Italian Institute of Technology, Genova, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genova, Genova, Italy
- Claudio Campus
- Unit for visually impaired (UVIP), Italian Institute of Technology, Genova, Italy
- Monica Gori
- Unit for visually impaired (UVIP), Italian Institute of Technology, Genova, Italy
4
Depianti JRB, Pimentel TGP, Pessanha FB, de Moraes JRMM, Cabral IE. Guides or guidelines for interacting and playing with medical complex children: a qualitative documentary research. Rev Lat Am Enfermagem 2024; 32:e4146. PMID: 38985041. PMCID: PMC11251688. DOI: 10.1590/1518-8345.6691.4146.
Abstract
OBJECTIVES: to identify content on play and interaction with children with special health care needs recommended in clinical guidelines; to analyze play and interaction activities applicable to children with special health care needs and complex care requirements. METHOD: qualitative documentary research based on guides, protocols, or guidelines on playing and interacting with children with special health care needs living with complex care requirements. Searches used terms in English (guidelines, playing OR play, complex needs OR chronic disease) and in Portuguese (guia, brincar ou brincadeiras, condições crônicas) on the first ten pages of Google Search®. Thematic analysis was applied to the information extracted from the documents. RESULTS: a total of nine documents with similar content were grouped into units of analysis, keeping only the interaction and play activities applicable to children with special health care needs and living with complex care requirements, namely stimulation of potential, stimulation of adult-child interaction, and stimulation of the senses (touch, sight, and hearing), to be carried out by health professionals and family caregivers in the different care contexts. CONCLUSION: interaction and play are potential promoters of adult-child interaction, with application in stimulating and delivering complex care for children.
Affiliation(s)
- Jéssica Renata Bastos Depianti
- Universidade Federal do Rio de Janeiro, Escola de Enfermagem Anna Nery, Rio de Janeiro, RJ, Brazil
- Scholarship holder at the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil
- Thaís Guilherme Pereira Pimentel
- Universidade Federal do Rio de Janeiro, Escola de Enfermagem Anna Nery, Rio de Janeiro, RJ, Brazil
- Scholarship holder at the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Brazil
- Fernanda Borges Pessanha
- Universidade Federal do Rio de Janeiro, Escola de Enfermagem Anna Nery, Rio de Janeiro, RJ, Brazil
- Scholarship holder at the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Brazil
- Ivone Evangelista Cabral
- Universidade Federal do Rio de Janeiro, Escola de Enfermagem Anna Nery, Rio de Janeiro, RJ, Brazil
- Scholarship holder at the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil
- Universidade do Estado do Rio de Janeiro, Faculdade de Enfermagem, Rio de Janeiro, RJ, Brazil
5
Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024; 56:2842-2858. PMID: 37730934. PMCID: PMC11133123. DOI: 10.3758/s13428-023-02227-w.
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains - the classic hallmark of cue combination - is important for characterising perceptual systems, their development and functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast, or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has only been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate for a careful comparator selection and task design to ensure that cue combination is tested with maximum power, while reducing the inflation of false positives.
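The comparator logic this abstract argues for can be made concrete with a small numerical sketch (illustrative values only; not the authors' code). Under maximum-likelihood integration of two independent Gaussian cues, the combined estimate's standard deviation is sqrt(σ₁²σ₂² / (σ₁² + σ₂²)), which always beats the better single cue:

```python
import math

def combined_sigma(sigma_a, sigma_b):
    """SD of the combined estimate under optimal (MLE) integration of two
    independent Gaussian cues; always below the better single cue's SD."""
    return math.sqrt((sigma_a ** 2 * sigma_b ** 2) / (sigma_a ** 2 + sigma_b ** 2))

# Two hypothetical observers with identical group-mean single-cue precision
# (mean visual SD = mean tactile SD = 3.0) but opposite best cues:
observers = [{"vision": 2.0, "touch": 4.0},
             {"vision": 4.0, "touch": 2.0}]

for obs in observers:
    best_single = min(obs.values())                     # per-observer best cue
    pred = combined_sigma(obs["vision"], obs["touch"])  # ~1.79 here
    # The valid test for integration: combined precision must beat the
    # observer's OWN best single cue.
    assert pred < best_single
    # By contrast, the group-level comparator (group-mean single-cue SD = 3.0)
    # is already beaten by an observer who simply uses their best cue (SD 2.0),
    # which is how the false-positive inflation the paper describes arises.
    assert best_single < 3.0
```

This is why the paper recommends determining the best single-cue precision for each observer individually rather than at the group level.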
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
6
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. PMID: 38123404. DOI: 10.1016/j.cortex.2023.11.005.
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
7
Beck J, Dzięgiel-Fivet G, Jednoróg K. Similarities and differences in the neural correlates of letter and speech sound integration in blind and sighted readers. Neuroimage 2023; 278:120296. PMID: 37495199. DOI: 10.1016/j.neuroimage.2023.120296.
Abstract
Learning letter and speech sound (LS) associations is a major step in reading acquisition common for all alphabetic scripts, including Braille used by blind readers. The left superior temporal cortex (STC) plays an important role in audiovisual LS integration in sighted people, but it is still unknown what neural mechanisms are responsible for audiotactile LS integration in blind individuals. Here, we investigated the similarities and differences between LS integration in blind Braille (N = 42, age range: 9-60 y.o.) and sighted print (N = 47, age range: 9-60 y.o.) readers who acquired reading using different sensory modalities. In both groups, the STC responded to both isolated letters and isolated speech sounds, showed enhanced activation when they were presented together, and distinguished between congruent and incongruent letter and speech sound pairs. However, the direction of the congruency effect was different between the groups. Sighted subjects showed higher activity for incongruent LS pairs in the bilateral STC, similarly to previously studied typical readers of transparent orthographies. In the blind, congruent pairs resulted in an increased response in the right STC. These differences may be related to more sequential processing of Braille as compared to print reading. At the same time, behavioral efficiency in LS discrimination decisions and the congruency effect were found to be related to age and reading skill only in sighted participants, suggesting potential differences in the developmental trajectories of LS integration between blind and sighted readers.
Affiliation(s)
- Joanna Beck
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
- Gabriela Dzięgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
8
Stanley BM, Chen YC, Maurer D, Lewis TL, Shore DI. Developmental changes in audiotactile event perception. J Exp Child Psychol 2023; 230:105629. PMID: 36731280. DOI: 10.1016/j.jecp.2023.105629.
Abstract
The fission and fusion illusions provide measures of multisensory integration. The sound-induced tap fission illusion occurs when a tap is paired with two distractor sounds, resulting in the perception of two taps; the sound-induced tap fusion illusion occurs when two taps are paired with a single sound, resulting in the perception of a single tap. Using these illusions, we measured integration in three groups of children (9-, 11-, and 13-year-olds) and compared them with a group of adults. Based on accuracy, we derived a measure of magnitude of illusion and used a signal detection analysis to estimate perceptual discriminability and decisional criterion. All age groups showed a significant fission illusion, whereas only the three groups of children showed a significant fusion illusion. When compared with adults, the 9-year-olds showed larger fission and fusion illusions (i.e., reduced discriminability and greater bias), whereas the 11-year-olds were adult-like for fission but showed some differences for fusion: significantly worse discriminability and marginally greater magnitude and criterion. The 13-year-olds were adult-like on all measures. Based on the pattern of data, we speculate that the developmental trajectories for fission and fusion differ. We discuss these developmental results in the context of three non-mutually exclusive theoretical frameworks: sensory dominance, maximum likelihood estimation, and causal inference.
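The signal detection analysis mentioned in this abstract can be sketched with the standard equal-variance formulas, d′ = z(hits) − z(false alarms) and c = −(z(hits) + z(FA))/2. The hit and false-alarm rates below are invented for illustration, and the trial labels are an assumption rather than the authors' exact coding:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian signal detection theory: returns
    discriminability d' = z(H) - z(FA) and criterion c = -(z(H) + z(FA)) / 2.
    Rates must lie strictly between 0 and 1 (clip extreme rates first)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented rates for a fission task: "hit" = reporting one tap on genuine
# one-tap (two-sound) trials, "false alarm" = reporting one tap on two-tap trials.
adult_d, adult_c = dprime_criterion(0.90, 0.10)  # adult-like: high discriminability
child_d, child_c = dprime_criterion(0.60, 0.30)  # 9-year-old-like: larger illusion
```

A larger illusion shows up as lower d′ (reduced perceptual discriminability) and a shifted c (greater decisional bias), which is the decomposition the study uses to compare age groups.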
Affiliation(s)
- Brendan M Stanley
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Yi-Chuan Chen
- Department of Medicine, Mackay Medical College, New Taipei City 252, Taiwan
- Daphne Maurer
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Terri L Lewis
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- David I Shore
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada; Multisensory Perception Laboratory, Division of Multisensory Mind Inc., Hamilton, Ontario L8S 4K1, Canada
9
Lenschow C, Mendes ARP, Lima SQ. Hearing, touching, and multisensory integration during mate choice. Front Neural Circuits 2022; 16:943888. PMID: 36247731. PMCID: PMC9559228. DOI: 10.3389/fncir.2022.943888.
Abstract
Mate choice is a potent generator of diversity and a fundamental pillar for sexual selection and evolution. Mate choice is a multistage affair, where complex sensory information and elaborate actions are used to identify, scrutinize, and evaluate potential mating partners. While widely accepted that communication during mate assessment relies on multimodal cues, most studies investigating the mechanisms controlling this fundamental behavior have restricted their focus to the dominant sensory modality used by the species under examination, such as vision in humans and smell in rodents. However, despite their undeniable importance for the initial recognition, attraction, and approach towards a potential mate, other modalities gain relevance as the interaction progresses, amongst which are touch and audition. In this review, we will: (1) focus on recent findings of how touch and audition can contribute to the evaluation and choice of mating partners, and (2) outline our current knowledge regarding the neuronal circuits processing touch and audition (amongst others) in the context of mate choice and ask (3) how these neural circuits are connected to areas that have been studied in the light of multisensory integration.
Affiliation(s)
- Constanze Lenschow
- Champalimaud Foundation, Champalimaud Research, Neuroscience Program, Lisbon, Portugal
- Ana Rita P Mendes
- Champalimaud Foundation, Champalimaud Research, Neuroscience Program, Lisbon, Portugal
- Susana Q Lima
- Champalimaud Foundation, Champalimaud Research, Neuroscience Program, Lisbon, Portugal
10
Maimon A, Yizhar O, Buchs G, Heimler B, Amedi A. A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution. Neuropsychologia 2022; 173:108305. PMID: 35752268. PMCID: PMC9297294. DOI: 10.1016/j.neuropsychologia.2022.108305.
Abstract
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent both an Argus II retinal prosthesis implant and training, and extensive training on the EyeMusic visual to auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account into what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the onset. Yet following the extensive training program with the EyeMusic sensory substitution device, our subject reports that the sensory substitution device allowed him to experience a richer, more complex perceptual experience, that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the EyeMusic SSD are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit to the combination of both devices on the user's subjective phenomenological visual experience.
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Max Planck Institute for Human Development, Research Group Adaptive Memory and Decision Making, Berlin, Germany; Max Planck Institute for Human Development, Max Planck Dahlem Campus of Cognition (MPDCC), Berlin, Germany
- Galit Buchs
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
11
Setti W, Cuturi LF, Cocchi E, Gori M. Spatial Memory and Blindness: The Role of Visual Loss on the Exploration and Memorization of Spatialized Sounds. Front Psychol 2022; 13:784188. PMID: 35686077. PMCID: PMC9171105. DOI: 10.3389/fpsyg.2022.784188.
Abstract
Spatial memory relies on encoding, storing, and retrieval of knowledge about objects’ positions in their surrounding environment. Blind people have to rely on sensory modalities other than vision to memorize items that are spatially displaced, however, to date, very little is known about the influence of early visual deprivation on a person’s ability to remember and process sound locations. To fill this gap, we tested sighted and congenitally blind adults and adolescents in an audio-spatial memory task inspired by the classical card game “Memory.” In this research, subjects (blind, n = 12; sighted, n = 12) had to find pairs among sounds (i.e., animal calls) displaced on an audio-tactile device composed of loudspeakers covered by tactile sensors. To accomplish this task, participants had to remember the spatialized sounds’ position and develop a proper mental spatial representation of their locations. The test was divided into two experimental conditions of increasing difficulty dependent on the number of sounds to be remembered (8 vs. 24). Results showed that sighted participants outperformed blind participants in both conditions. Findings were discussed considering the crucial role of visual experience in properly manipulating auditory spatial representations, particularly in relation to the ability to explore complex acoustic configurations.
Affiliation(s)
- Walter Setti
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Correspondence: Walter Setti
- Luigi F. Cuturi
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
12
Verhaar E, Medendorp WP, Hunnius S, Stapel JC. Bayesian causal inference in visuotactile integration in children and adults. Dev Sci 2022; 25:e13184. PMID: 34698430. PMCID: PMC9285718. DOI: 10.1111/desc.13184.
Abstract
If cues from different sensory modalities share the same cause, their information can be integrated to improve perceptual precision. While it is well established that adults exploit sensory redundancy by integrating cues in a Bayes optimal fashion, whether children under 8 years of age combine sensory information in a similar fashion is still under debate. If children differ from adults in the way they infer causality between cues, this may explain mixed findings on the development of cue integration in earlier studies. Here we investigated the role of causal inference in the development of cue integration, by means of a visuotactile localization task. Young children (6-8 years), older children (9.5-12.5 years) and adults had to localize a tactile stimulus, which was presented to the forearm simultaneously with a visual stimulus at either the same or a different location. In all age groups, responses were systematically biased toward the position of the visual stimulus, but relatively more so when the distance between the visual and tactile stimulus was small rather than large. This pattern of results was better captured by a Bayesian causal inference model than by alternative models of forced fusion or full segregation of the two stimuli. Our results suggest that already from a young age the brain implicitly infers the probability that a tactile and a visual cue share the same cause and uses this probability as a weighting factor in visuotactile localization.
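The Bayesian causal inference model favored by these data can be sketched in a few lines (a model-averaging toy under a flat spatial prior; every parameter value, including the `span` constant, is an assumption for illustration, not a value fitted in the paper):

```python
import math

def bci_tactile_estimate(x_v, x_t, sigma_v=1.0, sigma_t=2.0,
                         p_common=0.5, span=30.0):
    """Toy Bayesian causal inference for visuotactile localization.
    Model averaging with a flat spatial prior over `span` cm; all
    parameters are assumed values. Returns the predicted tactile response."""
    # P(measurements | common cause): the visuotactile discrepancy is Gaussian
    # with variance sigma_v^2 + sigma_t^2, scaled by the flat prior (1/span).
    sd_diff = math.sqrt(sigma_v ** 2 + sigma_t ** 2)
    like_c1 = (math.exp(-0.5 * ((x_v - x_t) / sd_diff) ** 2)
               / (sd_diff * math.sqrt(2 * math.pi)) / span)
    # P(measurements | separate causes): each position independently uniform.
    like_c2 = 1.0 / span ** 2
    post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    # Common cause -> reliability-weighted fusion; separate causes -> touch alone.
    w_v = (1 / sigma_v ** 2) / (1 / sigma_v ** 2 + 1 / sigma_t ** 2)
    fused = w_v * x_v + (1 - w_v) * x_t
    return post_c1 * fused + (1 - post_c1) * x_t

near = bci_tactile_estimate(x_v=1.0, x_t=0.0)   # small visuotactile conflict
far = bci_tactile_estimate(x_v=10.0, x_t=0.0)   # large visuotactile conflict
```

Running this with a 1 cm versus a 10 cm conflict reproduces the qualitative pattern in the abstract: the tactile estimate is pulled strongly toward the visual stimulus when the discrepancy is small, and barely at all when it is large, because the inferred probability of a common cause collapses with distance.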
Affiliation(s)
- Erik Verhaar
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Sabine Hunnius
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
13
Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022; 12:3206. PMID: 35217676. PMCID: PMC8881456. DOI: 10.1038/s41598-022-06855-8.
Abstract
Understanding speech in background noise is challenging. Face masks, made ubiquitous by the COVID-19 pandemic, make it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD), that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, we found comparable mean group improvements of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions, i.e., when participants were asked to repeat sentences from hearing alone and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete either type of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (i.e., harder) than for the auditory training (23.9 ± 11.8), which indicates a potential facilitating effect of the added vibrations. In addition, both before and after training, most participants (70-80%) showed better performance (by a mean of 4-6 dB) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also the best in both groups (SRT ~ 2 dB). The smallest effect of both training types was found in the third test condition, i.e., when participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also the poorest after training. The results indicate that both types of training may remove some of the difficulty of sound perception, which might enable more effective use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as in healthy individuals in suboptimal acoustic situations.
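The loudness rule of thumb invoked in this abstract is easy to check: under Stevens' common approximation for moderate sound levels, perceived loudness roughly doubles for every 10 dB increase in level. A one-line sketch (illustrative function name, not from the paper):

```python
def loudness_ratio(delta_db):
    """Approximate perceived-loudness ratio for a level change of delta_db
    decibels (Stevens' rule of thumb: +10 dB is about twice as loud)."""
    return 2.0 ** (delta_db / 10.0)
```

By this approximation, the reported 14-16 dB SRT improvement corresponds to a factor of roughly 2.6-3 in perceived loudness.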
Affiliation(s)
- K Cieśla
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- T Wolak
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Lorens
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- M Mentzel
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
- H Skarżyński
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
14
Kim YH, Schrode KM, Engel J, Vicencio-Jimenez S, Rodriguez G, Lee HK, Lauer AM. Auditory Behavior in Adult-Blinded Mice. J Assoc Res Otolaryngol 2022; 23:225-239. [PMID: 35084628] [PMCID: PMC8964904] [DOI: 10.1007/s10162-022-00835-5]
Abstract
Cross-modal plasticity occurs when the function of the remaining senses is enhanced following deprivation or loss of a sensory modality. Auditory neural responses are enhanced in the auditory cortex, including increased sensitivity and frequency selectivity, following short-term visual deprivation in adult mice (Petrus et al. Neuron 81:664-673, 2014). Whether these visual deprivation-induced neural changes translate into improved auditory perception and performance remains unclear. As an initial investigation of the effects of adult visual deprivation on auditory behaviors, CBA/CaJ mice underwent binocular enucleation at 3-4 weeks of age and were tested on a battery of learned behavioral tasks, acoustic startle response (ASR) tests, and prepulse inhibition (PPI) tests beginning at least 2 weeks after the enucleation procedure. Auditory brainstem responses (ABRs) were also measured to screen for potential effects of visual deprivation on non-behavioral hearing function. Control and enucleated mice showed similar tone detection sensitivity and frequency discrimination in a conditioned lick suppression test. Both groups showed normal reactivity to sound, as measured by ASR in a quiet background. However, when startle-eliciting stimuli were presented in noise, enucleated mice showed decreased ASR amplitude relative to controls. Control and enucleated mice displayed no significant differences in ASR habituation, PPI, ABR thresholds, or ABR wave morphology. Our findings suggest that while adult-onset visual deprivation induces cross-modal plasticity at the synaptic and circuit levels, it does not substantially influence simple auditory behavioral performance.
Affiliation(s)
- Ye-Hyun Kim
- Department of Otolaryngology-Head and Neck Surgery and Center for Hearing and Balance, Johns Hopkins University, Baltimore, MD, 21205, USA
- Katrina M Schrode
- Department of Otolaryngology-Head and Neck Surgery and Center for Hearing and Balance, Johns Hopkins University, Baltimore, MD, 21205, USA
- James Engel
- Department of Otolaryngology-Head and Neck Surgery and Center for Hearing and Balance, Johns Hopkins University, Baltimore, MD, 21205, USA
- Sergio Vicencio-Jimenez
- Department of Otolaryngology-Head and Neck Surgery and Center for Hearing and Balance, Johns Hopkins University, Baltimore, MD, 21205, USA
- Gabriela Rodriguez
- Cell, Molecular, Developmental Biology, and Biophysics (CMDB) Graduate Program, Johns Hopkins University, Baltimore, MD, USA
- Hey-Kyoung Lee
- Cell, Molecular, Developmental Biology, and Biophysics (CMDB) Graduate Program, Johns Hopkins University, Baltimore, MD, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA; Zanvyl-Krieger Mind/Brain Institute and Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Amanda M Lauer
- Department of Otolaryngology-Head and Neck Surgery and Center for Hearing and Balance, Johns Hopkins University, Baltimore, MD, 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
15
Development of multisensory integration following prolonged early-onset visual deprivation. Curr Biol 2021; 31:4879-4885.e6. [PMID: 34534443] [DOI: 10.1016/j.cub.2021.08.060]
Abstract
Adult humans make effortless use of multisensory signals and typically integrate them in an optimal fashion. This remarkable ability takes many years for normally sighted children to develop. Would individuals born blind or with extremely low vision still be able to develop multisensory integration later in life when surgically treated for sight restoration? Late acquisition of such a capability would be a vivid example of the brain's ability to retain high levels of plasticity. We studied the development of multisensory integration in individuals suffering from congenital dense bilateral cataract, surgically treated years after birth. We assessed cataract-treated individuals' reliance on their restored visual abilities when estimating the size of an object simultaneously explored by touch. Within weeks to months after surgery, when combining information from vision and touch, they developed a multisensory weighting behavior similar to that of matched typically sighted controls. Next, we tested whether cataract-treated individuals benefited from integrating vision with touch by increasing the precision of size estimates, as occurs when integrating signals in a statistically optimal fashion. For participants retested multiple times, such a benefit developed within months after surgery to levels of precision indistinguishable from optimal behavior. To summarize, the development of multisensory integration does not merely depend on age, but requires extensive multisensory experience with the world, rendered possible by the improved post-surgical visual acuity. We conclude that early exposure to multisensory signals is not essential for the development of multisensory integration, which can still be acquired even after many years of visual deprivation.
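The "statistically optimal" integration this abstract refers to is the standard inverse-variance (maximum-likelihood) weighting of two cues, as in Ernst and Banks (2002). A minimal sketch, with illustrative names and values not drawn from the paper:

```python
import math

def fuse_estimates(x_vision, x_touch, sigma_vision, sigma_touch):
    """Maximum-likelihood fusion of a visual and a haptic size estimate.

    Each cue is weighted by its relative reliability (inverse variance);
    the fused estimate is always at least as precise as the better cue.
    Returns (combined estimate, combined standard deviation)."""
    w_vision = sigma_touch ** 2 / (sigma_vision ** 2 + sigma_touch ** 2)
    estimate = w_vision * x_vision + (1 - w_vision) * x_touch
    sigma_combined = math.sqrt(sigma_vision ** 2 * sigma_touch ** 2
                               / (sigma_vision ** 2 + sigma_touch ** 2))
    return estimate, sigma_combined
```

The precision benefit tested in the retest sessions corresponds to the combined standard deviation falling below the smaller of the two single-cue standard deviations.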
16
Cuturi LF, Cappagli G, Tonelli A, Cocchi E, Gori M. Perceiving size through sound in sighted and visually impaired children. Cogn Dev 2021. [DOI: 10.1016/j.cogdev.2021.101125]
17
Peter MG, Mårtensson G, Postma EM, Engström Nordin L, Westman E, Boesveldt S, Lundström JN. Seeing Beyond Your Nose? The Effects of Lifelong Olfactory Sensory Deprivation on Cerebral Audio-visual Integration. Neuroscience 2021; 472:1-10. [PMID: 34311017] [DOI: 10.1016/j.neuroscience.2021.07.017]
Abstract
Lifelong auditory and visual sensory deprivation have been demonstrated to alter both perceptual acuity and the neural processing of remaining senses. Recently, it was demonstrated that individuals with anosmia, i.e. complete olfactory sensory deprivation, displayed enhanced multisensory integration performance. Whether this ability is due to a reorganization of olfactory processing regions to focus on cross-modal multisensory information or whether it is due to enhanced processing within multisensory integration regions is not known. To dissociate these two outcomes, we investigated the neural processing of dynamic audio-visual stimuli in individuals with congenital anosmia and matched controls (both groups, n = 33) using functional magnetic resonance imaging. Specifically, we assessed whether the previously demonstrated multisensory enhancement is related to cross-modal processing of multisensory stimuli in olfactory associated regions, the piriform and olfactory orbitofrontal cortices, or enhanced multisensory processing in established multisensory integration regions, the superior temporal and intraparietal sulci. No significant group differences were found in the a priori hypothesized regions using region of interest analyses. However, exploratory whole-brain analysis suggested higher activation related to multisensory integration within the posterior superior temporal sulcus, in close proximity to the multisensory region of interest, in individuals with congenital anosmia. No group differences were demonstrated in olfactory associated regions. Although results were outside our hypothesized regions, combined, they tentatively suggest that enhanced processing of audio-visual stimuli in individuals with congenital anosmia may be mediated by multisensory, and not primary sensory, cerebral regions.
Affiliation(s)
- Moa G Peter
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Gustav Mårtensson
- Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Elbrich M Postma
- Division of Human Nutrition and Health, Wageningen University, Wageningen, the Netherlands; Smell and Taste Centre, Hospital Gelderse Vallei, Ede, the Netherlands
- Love Engström Nordin
- Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Department of Diagnostic Medical Physics, Karolinska University Hospital Solna, Stockholm, Sweden
- Eric Westman
- Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Department of Neuroimaging, Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Sanne Boesveldt
- Division of Human Nutrition and Health, Wageningen University, Wageningen, the Netherlands
- Johan N Lundström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Monell Chemical Senses Center, Philadelphia, PA, United States; Department of Psychology, University of Pennsylvania, Philadelphia, United States; Stockholm University Brain Imaging Centre, Stockholm University, Stockholm, Sweden
18
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020; 5:37. [PMID: 32770416] [PMCID: PMC7415050] [DOI: 10.1186/s41235-020-00240-7]
Abstract
Sensory substitution techniques exploit perceptual and cognitive phenomena to represent information from one sensory modality through an alternative one. Current applications of sensory substitution techniques typically focus on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. Yet despite their evident success in scientific research and in furthering theory development in cognition, sensory substitution techniques have not gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new, inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that can be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK