1
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708; PMCID: PMC10899073; DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent; the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
2
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. PMID: 38123404; DOI: 10.1016/j.cortex.2023.11.005.
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-onset blindness. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
3
Damera SR, Malone PS, Stevens BW, Klein R, Eberhardt SP, Auer ET, Bernstein LE, Riesenhuber M. Metamodal Coupling of Vibrotactile and Auditory Speech Processing Systems through Matched Stimulus Representations. J Neurosci 2023; 43:4984-4996. PMID: 37197979; PMCID: PMC10324991; DOI: 10.1523/jneurosci.1710-22.2023.
Abstract
It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech, while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain.
SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks.
This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
Affiliation(s)
- Srikanth R Damera
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Patrick S Malone
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Benson W Stevens
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Richard Klein
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Silvio P Eberhardt
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Edward T Auer
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Lynne E Bernstein
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
4
Pang W, Zhou W, Ruan Y, Zhang L, Shu H, Zhang Y, Zhang Y. Visual Deprivation Alters Functional Connectivity of Neural Networks for Voice Recognition: A Resting-State fMRI Study. Brain Sci 2023; 13:636. PMID: 37190601; DOI: 10.3390/brainsci13040636.
Abstract
Humans recognize one another by identifying their voices and faces. For sighted people, the integration of voice and face signals in corresponding brain networks plays an important role in facilitating the process. However, individuals with vision loss primarily resort to voice cues to recognize a person's identity. It remains unclear how the neural systems for voice recognition reorganize in the blind. In the present study, we collected behavioral and resting-state fMRI data from 20 early blind (5 females; mean age = 22.6 years) and 22 sighted control (7 females; mean age = 23.7 years) individuals. We aimed to investigate the alterations in the resting-state functional connectivity (FC) among the voice- and face-sensitive areas in blind subjects in comparison with controls. We found that the intranetwork connections among voice-sensitive areas, including amygdala-posterior "temporal voice areas" (TVAp), amygdala-anterior "temporal voice areas" (TVAa), and amygdala-inferior frontal gyrus (IFG) were enhanced in the early blind. The blind group also showed increased FCs of "fusiform face area" (FFA)-IFG and "occipital face area" (OFA)-IFG but decreased FCs between the face-sensitive areas (i.e., FFA and OFA) and TVAa. Moreover, the voice-recognition accuracy was positively related to the strength of TVAp-FFA in the sighted, and the strength of amygdala-FFA in the blind. These findings indicate that visual deprivation shapes functional connectivity by increasing the intranetwork connections among voice-sensitive areas while decreasing the internetwork connections between the voice- and face-sensitive areas. Moreover, the face-sensitive areas are still involved in the voice-recognition process in blind individuals through pathways such as the subcortical-occipital or occipitofrontal connections, which may benefit the visually impaired greatly during voice processing.
Affiliation(s)
- Wenbin Pang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Wei Zhou
- Beijing Key Lab of Learning and Cognition, School of Psychology, Capital Normal University, Beijing 100048, China
- Yufang Ruan
- School of Communication Sciences and Disorders, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC H3A 1G1, Canada
- Centre for Research on Brain, Language and Music, Montréal, QC H3A 1G1, Canada
- Linjun Zhang
- School of Chinese as a Second Language, Peking University, Beijing 100871, China
- Hua Shu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, The University of Minnesota, Minneapolis, MN 55455, USA
- Yumei Zhang
- China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Department of Rehabilitation, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
5
Sabourin CJ, Merrikhi Y, Lomber SG. Do blind people hear better? Trends Cogn Sci 2022; 26:999-1012. PMID: 36207258; DOI: 10.1016/j.tics.2022.08.016.
Abstract
For centuries, anecdotal evidence such as the perfect pitch of the blind piano tuner or blind musician has supported the notion that individuals who have lost their sight early in life have superior hearing abilities compared with sighted people. Recently, auditory psychophysical and functional imaging studies have identified that specific auditory enhancements in the early blind can be linked to activation in extrastriate visual cortex, suggesting crossmodal plasticity. Furthermore, the nature of the sensory reorganization in occipital cortex supports the concept of a task-based functional cartography for the cerebral cortex rather than a sensory-based organization. In total, studies of early-blind individuals provide valuable insights into mechanisms of cortical plasticity and principles of cerebral organization.
Affiliation(s)
- Carina J Sabourin
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Yaser Merrikhi
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Stephen G Lomber
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Psychology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3G 1Y6, Canada
6
The Time Course of Emotional Authenticity Detection in Nonverbal Vocalizations. Cortex 2022; 151:116-132. DOI: 10.1016/j.cortex.2022.02.016.
7
OUP accepted manuscript. Cereb Cortex 2022; 32:4913-4933. DOI: 10.1093/cercor/bhab524.
8
Arioli M, Ricciardi E, Cattaneo Z. Social cognition in the blind brain: A coordinate-based meta-analysis. Hum Brain Mapp 2020; 42:1243-1256. PMID: 33320395; PMCID: PMC7927293; DOI: 10.1002/hbm.25289.
Abstract
Social cognition skills are typically acquired on the basis of visual information (e.g., the observation of gaze, facial expressions, gestures). In light of this, a critical issue is whether and how the lack of visual experience affects the neurocognitive mechanisms underlying social skills. This issue has been largely neglected in the literature on blindness, even though difficulties in social interactions may be particularly salient in the lives of blind individuals (especially children). Here we provide a meta-analysis of neuroimaging studies reporting brain activations associated with the representation of the self and others in early blind individuals and in sighted controls. Our results indicate that early blindness does not critically impact the development of the "social brain," with social tasks performed on the basis of auditory or tactile information driving consistent activations in nodes of the action observation network, which is typically active during the actual observation of others in sighted individuals. Interestingly, though, activations along this network appeared more left-lateralized in blind than in sighted participants. These results may have important implications for the development of specific training programs to improve social skills in blind children and young adults.
Affiliation(s)
- Maria Arioli
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan, Italy; IRCCS Mondino Foundation, Pavia, Italy
9
Ratan Murty NA, Teng S, Beeler D, Mynick A, Oliva A, Kanwisher N. Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus. Proc Natl Acad Sci U S A 2020; 117:23011-23020. PMID: 32839334; PMCID: PMC7502773; DOI: 10.1073/pnas.2004607117.
Abstract
The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
Affiliation(s)
- N Apurva Ratan Murty
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
- Santani Teng
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
- David Beeler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Anna Mynick
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
10
Chebat DR, Schneider FC, Ptito M. Spatial Competence and Brain Plasticity in Congenital Blindness via Sensory Substitution Devices. Front Neurosci 2020; 14:815. PMID: 32848575; PMCID: PMC7406645; DOI: 10.3389/fnins.2020.00815.
Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment. This spatial competence takes longer to achieve but is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. The comparison of performance between CB and sighted people using several different sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information, even during adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their access to both real and virtual environments.
Affiliation(s)
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
- Fabien C. Schneider
- Department of Radiology, University of Lyon, Saint-Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
- Maurice Ptito
- BRAIN Lab, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC, Canada
11
Topalidis P, Zinchenko A, Gädeke JC, Föcker J. The role of spatial selective attention in the processing of affective prosodies in congenitally blind adults: An ERP study. Brain Res 2020; 1739:146819. PMID: 32251662; DOI: 10.1016/j.brainres.2020.146819.
Abstract
Whether spatial selective attention is necessary for processing vocal affective prosody has been debated in sighted individuals: whereas some studies argue that attention is required to process emotions, others conclude that vocal prosody can be processed even outside the focus of spatial selective attention. Here, we asked whether spatial selective attention is necessary for the processing of affective prosodies after visual deprivation from birth. For this purpose, pseudowords spoken in happy, neutral, fearful or threatening prosodies were presented at the left or right loudspeaker. Congenitally blind individuals (N = 8) and sighted controls (N = 13) had to attend to one of the loudspeakers and detect rare pseudowords presented at the attended loudspeaker during EEG recording. The emotional prosody of the pseudowords was task-irrelevant. Blind individuals outperformed sighted controls, being more efficient at detecting deviant pseudowords at the attended loudspeaker. A higher auditory N1 amplitude was observed in blind individuals compared to sighted controls. Additionally, sighted controls showed enhanced attention-related ERP amplitudes in response to fearful and threatening voices during the time range of the N1. By contrast, blind individuals revealed enhanced ERP amplitudes at attended relative to unattended locations irrespective of affective valence in all time windows (110-350 ms). These effects were mainly observed at posterior electrodes. The results provide evidence for "emotion-general" auditory spatial selective attention effects in congenitally blind individuals and suggest a potential reorganization of the voice-processing brain system following visual deprivation from birth.
Affiliation(s)
- Pavlos Topalidis
- Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
- Artyom Zinchenko
- Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
- Julia C Gädeke
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Julia Föcker
- Biological Psychology and Neuropsychology, University of Hamburg, Germany; School of Social Sciences, University of Lincoln, United Kingdom
12
Behrmann M, Plaut DC. Hemispheric Organization for Visual Object Recognition: A Theoretical Account and Empirical Evidence. Perception 2020; 49:373-404. PMID: 31980013; PMCID: PMC9944149; DOI: 10.1177/0301006619899049.
Abstract
Despite their structural similarity, the hemispheres of the human brain have somewhat different functions. A traditional view of hemispheric organization asserts that there are independent and largely lateralized domain-specific regions in ventral occipitotemporal cortex (VOTC), specialized for the recognition of distinct classes of objects. Here, we offer an alternative account of the organization of the hemispheres, with a specific focus on face and word recognition. This alternative account relies on three computational principles: distributed representations and knowledge, cooperation and competition between representations, and topography and proximity. The crux is that visual recognition results from a network of regions with graded functional specialization that is distributed across both hemispheres. Specifically, the claim is that face recognition, which is acquired relatively early in life, is processed by VOTC regions in both hemispheres. Once literacy is acquired, word recognition, which is co-lateralized with language areas, primarily engages the left VOTC and, consequently, face recognition is primarily, albeit not exclusively, mediated by the right VOTC. We review psychological and neural evidence from a range of studies conducted with normal and brain-damaged adults and children and consider findings which challenge this account. Last, we offer suggestions for future investigations whose findings may further refine this account.
Affiliation(s)
- Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- David C. Plaut
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
13
Connectivity at the origins of domain specificity in the cortical face and place networks. Proc Natl Acad Sci U S A 2020; 117:6163-6169. PMID: 32123077; DOI: 10.1073/pnas.1911359117.
Abstract
It is well established that the adult brain contains a mosaic of domain-specific networks. But how do these domain-specific networks develop? Here we tested the hypothesis that the brain comes prewired with connections that precede the development of domain-specific function. Using resting-state fMRI in the youngest sample of newborn humans tested to date, we indeed found that cortical networks that will later develop strong face selectivity (including the "proto" occipital face area and fusiform face area) and scene selectivity (including the "proto" parahippocampal place area and retrosplenial complex) by adulthood, already show domain-specific patterns of functional connectivity as early as 27 d of age (beginning as early as 6 d of age). Furthermore, we asked how these networks are functionally connected to early visual cortex and found that the proto face network shows biased functional connectivity with foveal V1, while the proto scene network shows biased functional connectivity with peripheral V1. Given that faces are almost always experienced at the fovea, while scenes always extend across the entire periphery, these differential inputs may serve to facilitate domain-specific processing in each network after that function develops, or even guide the development of domain-specific function in each network in the first place. Taken together, these findings reveal domain-specific and eccentricity-biased connectivity in the earliest days of life, placing new constraints on our understanding of the origins of domain-specific cortical networks.
14
Thorat S, Proklova D, Peelen MV. The nature of the animacy organization in human ventral temporal cortex. eLife 2019; 8:e47142. PMID: 31496518; PMCID: PMC6733573; DOI: 10.7554/elife.47142.
Abstract
The principles underlying the animacy organization of the ventral temporal cortex (VTC) remain hotly debated, with recent evidence pointing to an animacy continuum rather than a dichotomy. What drives this continuum? According to the visual categorization hypothesis, the continuum reflects the degree to which animals contain animal-diagnostic features. By contrast, the agency hypothesis posits that the continuum reflects the degree to which animals are perceived as (social) agents. Here, we tested both hypotheses with a stimulus set in which visual categorizability and agency were dissociated based on representations in convolutional neural networks and behavioral experiments. Using fMRI, we found that visual categorizability and agency explained independent components of the animacy continuum in VTC. Modeled together, they fully explained the animacy continuum. Finally, clusters explained by visual categorizability were localized posterior to clusters explained by agency. These results show that multiple organizing principles, including agency, underlie the animacy continuum in VTC.
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Daria Proklova
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
15
Op de Beeck HP, Pillet I, Ritchie JB. Factors Determining Where Category-Selective Areas Emerge in Visual Cortex. Trends Cogn Sci 2019; 23:784-797. [PMID: 31327671 DOI: 10.1016/j.tics.2019.06.006] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Received: 03/22/2019] [Revised: 06/21/2019] [Accepted: 06/21/2019] [Indexed: 11/26/2022]
Abstract
A hallmark of functional localization in the human brain is the presence of areas in visual cortex specialized for representing particular categories such as faces and words. Why do these areas appear where they do during development? Recent findings highlight several general factors to consider when answering this question. Experience-driven category selectivity arises in regions that: (i) have pre-existing selectivity for properties of the stimulus, (ii) are appropriately placed in the computational hierarchy of the visual system, and (iii) exhibit domain-specific patterns of connectivity to nonvisual regions. In other words, the cortical location of category selectivity is constrained by what category will be represented, how it will be represented, and why the representation will be used.
Affiliation(s)
- Hans P Op de Beeck
- Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
- Ineke Pillet
- Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
- J Brendan Ritchie
- Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
16
Halley AC, Krubitzer L. Not all cortical expansions are the same: the coevolution of the neocortex and the dorsal thalamus in mammals. Curr Opin Neurobiol 2019; 56:78-86. [PMID: 30658218 DOI: 10.1016/j.conb.2018.12.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Received: 10/18/2018] [Revised: 11/18/2018] [Accepted: 12/09/2018] [Indexed: 02/06/2023]
Abstract
A central question in comparative neurobiology concerns how evolution has produced brains with expanded neocortices, composed of more areas with unique connectivity and functional properties. Some mammalian lineages, such as primates, exhibit exceptionally large cortices relative to the amount of sensory inputs from the dorsal thalamus, and this expansion is associated with a larger number of distinct cortical areas, composing a larger proportion of the cortical sheet. We propose a link between the organization of the neocortex and its expansion relative to the size of the dorsal thalamus, based on a combination of work in comparative neuroanatomy and experimental research.
Affiliation(s)
- Andrew C Halley
- Center for Neuroscience, University of California, Davis, CA, United States
- Leah Krubitzer
- Center for Neuroscience, University of California, Davis, CA, United States; Department of Psychology, University of California, Davis, CA, United States
17
Powell LJ, Kosakowski HL, Saxe R. Social Origins of Cortical Face Areas. Trends Cogn Sci 2018; 22:752-763. [PMID: 30041864 PMCID: PMC6098735 DOI: 10.1016/j.tics.2018.06.009] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Received: 03/17/2018] [Revised: 05/08/2018] [Accepted: 06/28/2018] [Indexed: 01/10/2023]
Abstract
Recently acquired fMRI data from human and macaque infants provide novel insights into the origins of cortical networks specialized for perceiving faces. Data from both species converge: cortical regions responding preferentially to faces are present and spatially organized early in infancy, although fully selective face areas emerge much later. What explains the earliest cortical responses to faces? We review two proposed mechanisms: proto-organization for simple shapes in visual cortex, and an innate subcortical schematic face template. In addition, we propose a third mechanism: infants choose to look at faces to engage in positively valenced, contingent social interactions. Activity in medial prefrontal cortex during social interactions may, directly or indirectly, guide the organization of cortical face areas.
Affiliation(s)
- Lindsey J Powell
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Heather L Kosakowski
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rebecca Saxe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA