1. Maguinness C, Schall S, Mathias B, Schoemann M, von Kriegstein K. Prior multisensory learning can facilitate auditory-only voice-identity and speech recognition in noise. Q J Exp Psychol (Hove) 2024:17470218241278649. [PMID: 39164830] [DOI: 10.1177/17470218241278649]
Abstract
Seeing the visual articulatory movements of a speaker, while hearing their voice, helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement also occurs even in auditory-only conditions: auditory-only speech and voice-identity recognition are superior for speakers previously learned with their face, compared to control learning; an effect termed the "face-benefit." Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their dynamic face or control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) in increasing levels of auditory noise. For speech recognition, 14 of 30 participants (47%) showed a face-benefit; for voice-identity recognition, 19 of 25 participants (76%) did. For those participants who demonstrated a face-benefit, the face-benefit increased with auditory noise levels. Taken together, the results support an audio-visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.
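As an illustrative aside (not from the article itself), the face-benefit described above is simply the difference between recognition accuracy for face-learned and control-learned speakers, computed per participant and per noise level; a minimal sketch with invented accuracy values:

```python
# Hypothetical sketch: quantify the face-benefit as the accuracy difference
# between face-learned and control-learned speakers, per participant and noise level.
# All numbers below are invented for illustration.
import numpy as np

# rows = participants, columns = increasing noise levels
acc_face_learned = np.array([[0.92, 0.81, 0.64],
                             [0.88, 0.76, 0.59]])
acc_control_learned = np.array([[0.90, 0.74, 0.52],
                                [0.87, 0.75, 0.58]])

face_benefit = acc_face_learned - acc_control_learned   # positive = face-benefit
print(face_benefit)                  # benefit per participant and noise level
print(face_benefit.mean(axis=0))     # mean benefit at each noise level
```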
Affiliation(s)
- Corrina Maguinness: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sonja Schall: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Brian Mathias: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Martin Schoemann: Chair of Psychological Methods and Cognitive Modelling, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Katharina von Kriegstein: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
2. Stevenage SV, Edey R, Keay R, Morrison R, Robertson DJ. Familiarity Is Key: Exploring the Effect of Familiarity on the Face-Voice Correlation. Brain Sci 2024; 14:112. [PMID: 38391687] [PMCID: PMC10887171] [DOI: 10.3390/brainsci14020112]
Abstract
Recent research has examined the extent to which face and voice processing are associated by virtue of the fact that both tap into a common person perception system. However, existing findings do not yet fully clarify the role of familiarity in this association. Given this, two experiments are presented that examine face-voice correlations for unfamiliar stimuli (Experiment 1) and for familiar stimuli (Experiment 2). With care being taken to use tasks that avoid floor and ceiling effects and that use realistic speech-based voice clips, the results suggested a significant positive but small-sized correlation between face and voice processing when recognizing unfamiliar individuals. In contrast, the correlation when matching familiar individuals was significant and positive, but much larger. The results supported the existing literature suggesting that face and voice processing are aligned as constituents of an overarching person perception system. However, the difference in magnitude of their association here reinforced the view that familiar and unfamiliar stimuli are processed in different ways. This likely reflects the importance of a pre-existing mental representation and cross-talk within the neural architectures when processing familiar faces and voices, and yet the reliance on more superficial stimulus-based and modality-specific analysis when processing unfamiliar faces and voices.
Affiliation(s)
- Sarah V Stevenage: School of Psychology, University of Southampton, Southampton SO17 1BJ, UK
- Rebecca Edey: School of Psychology, University of Southampton, Southampton SO17 1BJ, UK
- Rebecca Keay: School of Psychology, University of Southampton, Southampton SO17 1BJ, UK
- Rebecca Morrison: School of Psychology, University of Southampton, Southampton SO17 1BJ, UK
- David J Robertson: Department of Psychological Sciences and Health, University of Strathclyde, Glasgow G1 1QE, UK
3. Schroeger A, Kaufmann JM, Zäske R, Kovács G, Klos T, Schweinberger SR. Atypical prosopagnosia following right hemispheric stroke: A 23-year follow-up study with M.T. Cogn Neuropsychol 2022; 39:196-207. [PMID: 36202621] [DOI: 10.1080/02643294.2022.2119838]
Abstract
Most findings on prosopagnosia to date suggest preserved voice recognition in prosopagnosia (except in cases with bilateral lesions). Here we report a follow-up examination on M.T., suffering from acquired prosopagnosia following a large unilateral right-hemispheric lesion in frontal, parietal, and anterior temporal areas excluding core ventral occipitotemporal face areas. Twenty-three years after initial testing we reassessed face and object recognition skills [Henke, K., Schweinberger, S. R., Grigo, A., Klos, T., & Sommer, W. (1998). Specificity of face recognition: Recognition of exemplars of non-face objects in prosopagnosia. Cortex, 34(2), 289-296]; [Schweinberger, S. R., Klos, T., & Sommer, W. (1995). Covert face recognition in prosopagnosia - A dissociable function? Cortex, 31(3), 517-529] and additionally studied voice recognition. Confirming the persistence of deficits, M.T. exhibited substantial impairments in famous face recognition and memory for learned faces, but preserved face matching and object recognition skills. Critically, he showed substantially impaired voice recognition skills. These findings are congruent with the ideas that (i) prosopagnosia after right anterior temporal lesions can persist over long periods > 20 years, and that (ii) such lesions can be associated with both facial and vocal deficits in person recognition.
Affiliation(s)
- Anna Schroeger: Department of Psychology, Faculty of Psychology and Sports Science, Justus Liebig University, Giessen, Germany; Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany; Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University, Jena, Germany
- Jürgen M Kaufmann: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany; DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany
- Romi Zäske: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany; DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany
- Gyula Kovács: DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany; Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich Schiller University, Jena, Germany
- Stefan R Schweinberger: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany; DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany
4. Unimodal and cross-modal identity judgements using an audio-visual sorting task: Evidence for independent processing of faces and voices. Mem Cognit 2021; 50:216-231. [PMID: 34254274] [PMCID: PMC8763756] [DOI: 10.3758/s13421-021-01198-7]
Abstract
Unimodal and cross-modal information provided by faces and voices contribute to identity percepts. To examine how these sources of information interact, we devised a novel audio-visual sorting task in which participants were required to group video-only and audio-only clips into two identities. In a series of three experiments, we show that unimodal face and voice sorting were more accurate than cross-modal sorting: While face sorting was consistently most accurate followed by voice sorting, cross-modal sorting was at chance level or below. In Experiment 1, we compared performance in our novel audio-visual sorting task to a traditional identity matching task, showing that unimodal and cross-modal identity perception were overall moderately more accurate than the traditional identity matching task. In Experiment 2, separating unimodal from cross-modal sorting led to small improvements in accuracy for unimodal sorting, but no change in cross-modal sorting performance. In Experiment 3, we explored the effect of minimal audio-visual training: Participants were shown a clip of the two identities in conversation prior to completing the sorting task. This led to small, nonsignificant improvements in accuracy for unimodal and cross-modal sorting. Our results indicate that unfamiliar face and voice perception operate relatively independently with no evidence of mutual benefit, suggesting that extracting reliable cross-modal identity information is challenging.
5. Maguinness C, von Kriegstein K. Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level. Hum Brain Mapp 2021; 42:3963-3982. [PMID: 34043249] [PMCID: PMC8288083] [DOI: 10.1002/hbm.25532]
Abstract
Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so‐called ‘face‐benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face‐benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face‐sensitive regions while participants recognised the identity of auditory‐only speakers (previously learned by face) in high (SNR −4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face‐benefit in both noise levels, for most participants (16 of 21). In high‐noise, the recognition of face‐learned speakers engaged the right posterior superior temporal sulcus motion‐sensitive face area (pSTS‐mFA), a region implicated in the processing of dynamic facial cues. The face‐benefit in high‐noise also correlated positively with increased functional connectivity between this region and voice‐sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face‐benefit. In low‐noise, the face‐benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS‐mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice‐identity recognition in auditory‐only listening conditions.
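As a hedged illustration of the noise manipulation (this is not the authors' stimulus code), speech can be mixed with noise at a target signal-to-noise ratio such as the -4 dB (high noise) and +4 dB (low noise) levels mentioned above:

```python
# Minimal sketch, assuming single-channel NumPy signals: scale noise so that the
# speech-to-noise power ratio equals a target SNR in dB, then add it to the speech.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = noise[: len(speech)]                          # match lengths
    p_speech = np.mean(speech ** 2)                       # speech power
    p_noise = np.mean(noise ** 2)                         # noise power
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / p_noise)

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)      # placeholder 1-s "speech" at 16 kHz
noise = rng.standard_normal(16000)
high_noise_mix = mix_at_snr(speech, noise, snr_db=-4.0)   # SNR -4 dB
low_noise_mix = mix_at_snr(speech, noise, snr_db=+4.0)    # SNR +4 dB
```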
Affiliation(s)
- Corrina Maguinness: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
6. Tsantani M, Cook R. Normal recognition of famous voices in developmental prosopagnosia. Sci Rep 2020; 10:19757. [PMID: 33184411] [PMCID: PMC7661722] [DOI: 10.1038/s41598-020-76819-3]
Abstract
Developmental prosopagnosia (DP) is a condition characterised by lifelong face recognition difficulties. Recent neuroimaging findings suggest that DP may be associated with aberrant structure and function in multimodal regions of cortex implicated in the processing of both facial and vocal identity. These findings suggest that both facial and vocal recognition may be impaired in DP. To test this possibility, we compared the performance of 22 DPs and a group of typical controls, on closely matched tasks that assessed famous face and famous voice recognition ability. As expected, the DPs showed severe impairment on the face recognition task, relative to typical controls. In contrast, however, the DPs and controls identified a similar number of voices. Despite evidence of interactions between facial and vocal processing, these findings suggest some degree of dissociation between the two processing pathways, whereby one can be impaired while the other develops typically. A possible explanation for this dissociation in DP could be that the deficit originates in the early perceptual encoding of face structure, rather than at later, post-perceptual stages of face identity processing, which may be more likely to involve interactions with other modalities.
Affiliation(s)
- Maria Tsantani: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
7. Young AW, Frühholz S, Schweinberger SR. Face and Voice Perception: Understanding Commonalities and Differences. Trends Cogn Sci 2020; 24:398-410. [DOI: 10.1016/j.tics.2020.02.001]
8. Behrmann M, Plaut DC. Hemispheric Organization for Visual Object Recognition: A Theoretical Account and Empirical Evidence. Perception 2020; 49:373-404. [PMID: 31980013] [PMCID: PMC9944149] [DOI: 10.1177/0301006619899049]
Abstract
Despite the similarity in structure, the hemispheres of the human brain have somewhat different functions. A traditional view of hemispheric organization asserts that there are independent and largely lateralized domain-specific regions in ventral occipitotemporal cortex (VOTC), specialized for the recognition of distinct classes of objects. Here, we offer an alternative account of the organization of the hemispheres, with a specific focus on face and word recognition. This alternative account relies on three computational principles: distributed representations and knowledge, cooperation and competition between representations, and topography and proximity. The crux is that visual recognition results from a network of regions with graded functional specialization that is distributed across both hemispheres. Specifically, the claim is that face recognition, which is acquired relatively early in life, is processed by VOTC regions in both hemispheres. Once literacy is acquired, word recognition, which is co-lateralized with language areas, primarily engages the left VOTC and, consequently, face recognition is primarily, albeit not exclusively, mediated by the right VOTC. We review psychological and neural evidence from a range of studies conducted with normal and brain-damaged adults and children and consider findings which challenge this account. Last, we offer suggestions for future investigations whose findings may further refine this account.
Affiliation(s)
- Marlene Behrmann: Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- David C. Plaut: Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
9. Faces and voices in the brain: A modality-general person-identity representation in superior temporal sulcus. Neuroimage 2019; 201:116004. [DOI: 10.1016/j.neuroimage.2019.07.017]
10. Multifaceted Integration: Memory for Faces Is Subserved by Widespread Connections between Visual, Memory, Auditory, and Social Networks. J Neurosci 2019; 39:4976-4985. [PMID: 31036762] [DOI: 10.1523/jneurosci.0217-19.2019]
Abstract
Our ability to recognize others by their facial features is at the core of human social interaction, yet this ability varies widely within the general population, ranging from developmental prosopagnosia to "super-recognizers". Previous work has focused mainly on the contribution of neural activity within the well described face network to this variance. However, given the nature of face memory in everyday life, and the social context in which it takes place, we were interested in exploring how the collaboration between different networks outside the face network in humans (measured through resting state connectivity) affects face memory performance. Fifty participants (men and women) were scanned with fMRI. Our data revealed that although the nodes of the face-processing network were tightly coupled at rest, the strength of these connections did not predict face memory performance. Instead, face recognition memory was dependent on multiple connections between these face patches and regions of the medial temporal lobe memory system (including the hippocampus), and the social processing system. Moreover, this network was selective for memory for faces, and did not predict memory for other visual objects (cars). These findings suggest that in the general population, variability in face memory is dependent on how well the face processing system interacts with other processing networks, with interaction among the face patches themselves accounting for little of the variance in memory ability. SIGNIFICANCE STATEMENT: Our ability to recognize and remember faces is one of the pillars of human social interaction. Face recognition however is a very complex skill, requiring specialized neural resources in visual cortex, as well as memory, identity, and social processing, all of which are inherent in our real-world experience of faces. Yet in the general population, people vary greatly in their face memory abilities. Here we show that in the neural domain this variability is underpinned by the integration of visual, memory and social circuits, with the strength of the connections between these circuits directly linked to face recognition ability.
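As an illustrative aside (simulated data, not the study's analysis pipeline), the reported brain-behaviour link amounts to correlating a connectivity strength with face memory scores across participants:

```python
# Hypothetical sketch: correlate resting-state connectivity strength (e.g., between
# face patches and the hippocampus) with face recognition memory across participants.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 50                                               # participants, as in the study
connectivity = rng.normal(0.3, 0.1, n)               # invented connectivity values
face_memory = 0.6 * connectivity + rng.normal(0.0, 0.05, n)  # invented scores

r, p = pearsonr(connectivity, face_memory)
print(f"r = {r:.2f}, p = {p:.3g}")                   # positive r mirrors the reported link
```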
11. Perception of musical pitch in developmental prosopagnosia. Neuropsychologia 2019; 124:87-97. [PMID: 30625291] [DOI: 10.1016/j.neuropsychologia.2018.12.022]
Abstract
Studies of developmental prosopagnosia have often shown that developmental prosopagnosia differentially affects human face processing over non-face object processing. However, little consideration has been given to whether this condition is associated with perceptual or sensorimotor impairments in other modalities. Comorbidities have played a role in theories of other developmental disorders such as dyslexia, but studies of developmental prosopagnosia have often focused on the nature of the visual recognition impairment despite evidence for widespread neural anomalies that might affect other sensorimotor systems. We studied 12 subjects with developmental prosopagnosia with a battery of auditory tests evaluating pitch and rhythm processing as well as voice perception and recognition. Overall, three subjects were impaired in fine pitch discrimination, a prevalence of 25% that is higher than the estimated 4% prevalence of congenital amusia in the general population. This was a selective deficit, as rhythm perception was unaffected in all 12 subjects. Furthermore, two of the three prosopagnosic subjects who were impaired in pitch discrimination had intact voice perception and recognition, while two of the remaining nine subjects had impaired voice recognition but intact pitch perception. These results indicate that, in some subjects with developmental prosopagnosia, the face recognition deficit is not an isolated impairment but is associated with deficits in other domains, such as auditory perception. These deficits may form part of a broader syndrome which could be due to distributed microstructural anomalies in various brain networks, possibly with a common theme of right hemispheric predominance.
12. Maguinness C, Roswandowitz C, von Kriegstein K. Understanding the mechanisms of familiar voice-identity recognition in the human brain. Neuropsychologia 2018; 116:179-193. [DOI: 10.1016/j.neuropsychologia.2018.03.039]
13. Stevenage SV. Drawing a distinction between familiar and unfamiliar voice processing: A review of neuropsychological, clinical and empirical findings. Neuropsychologia 2017; 116:162-178. [PMID: 28694095] [DOI: 10.1016/j.neuropsychologia.2017.07.005]
Abstract
Thirty years on from their initial observation that familiar voice recognition is not the same as unfamiliar voice discrimination (van Lancker and Kreiman, 1987), the current paper reviews available evidence in support of a distinction between familiar and unfamiliar voice processing. Here, an extensive review of the literature is provided, drawing on evidence from four domains of interest: the neuropsychological study of healthy individuals, neuropsychological investigation of brain-damaged individuals, the exploration of voice recognition deficits in less commonly studied clinical conditions, and finally empirical data from healthy individuals. All evidence is assessed in terms of its contribution to the question of interest - is familiar voice processing distinct from unfamiliar voice processing. In this regard, the evidence provides compelling support for van Lancker and Kreiman's early observation. Two considerations result: First, the limits of research based on one or other type of voice stimulus are more clearly appreciated. Second, given the demonstration of a distinction between unfamiliar and familiar voice processing, a new wave of research is encouraged which examines the transition involved as a voice is learned.
Affiliation(s)
- Sarah V Stevenage: Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ, UK
14. Roswandowitz C, Schelinski S, von Kriegstein K. Developmental phonagnosia: Linking neural mechanisms with the behavioural phenotype. Neuroimage 2017; 155:97-112. [DOI: 10.1016/j.neuroimage.2017.02.064]
15. Maguinness C, von Kriegstein K. Cross-modal processing of voices and faces in developmental prosopagnosia and developmental phonagnosia. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1313347]
Affiliation(s)
- Corrina Maguinness: Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein: Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, Humboldt University of Berlin, Berlin, Germany
16. Bülthoff I, Newell FN. Crossmodal priming of unfamiliar faces supports early interactions between voices and faces in person perception. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1290729]
Affiliation(s)
- Fiona N. Newell: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
17. Pendl SL, Salzwedel AP, Goldman BD, Barrett LF, Lin W, Gilmore JH, Gao W. Emergence of a hierarchical brain during infancy reflected by stepwise functional connectivity. Hum Brain Mapp 2017; 38:2666-2682. [PMID: 28263011] [DOI: 10.1002/hbm.23552]
Abstract
The hierarchical nature of the brain's functional organization has long been recognized, but when and how this architecture emerges during development remains largely unknown. Here the development of the brain's hierarchical organization was characterized using a modified stepwise functional connectivity approach based on resting-state fMRI in a fully longitudinal sample of infants (N = 28, with scans after birth, and at 1 and 2 years) and adults. Results obtained by placing seeds in early sensory cortices revealed novel hierarchical patterns of adult brain organization ultimately converging in limbic, paralimbic, basal ganglia, and frontoparietal brain regions. These findings are remarkably consistent with predictive coding accounts of neural processing that place these regions at the top of predictive coding hierarchies. Infants gradually developed toward this architecture in a region- and step-dependent manner, and displayed many of the same regions as adults in top hierarchical positions, starting from 1 year of age. The findings further revealed patterns of inter-sensory connectivity likely reflecting the emergence and development of multisensory processing strategies during infancy, the strengths of which were correlated with early cognitive development scores.
Affiliation(s)
- Suzanne L Pendl: Department of Biomedical Sciences and Imaging, Cedars-Sinai Medical Center, Biomedical Imaging Research Institute, Los Angeles, California, 90048
- Andrew P Salzwedel: Department of Biomedical Sciences and Imaging, Cedars-Sinai Medical Center, Biomedical Imaging Research Institute, Los Angeles, California, 90048
- Barbara D Goldman: Department of Psychology and Neuroscience, University of North Carolina Chapel Hill, and FPG Child Development Institute, Chapel Hill, North Carolina, 27599
- Lisa F Barrett: Department of Psychology, Northeastern University, Boston, Massachusetts, 02115; Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, 02129
- Weili Lin: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina Chapel Hill, Chapel Hill, North Carolina, 27599
- John H Gilmore: Department of Psychiatry, University of North Carolina Chapel Hill, Chapel Hill, North Carolina, 27599
- Wei Gao: Department of Biomedical Sciences and Imaging, Cedars-Sinai Medical Center, Biomedical Imaging Research Institute, Los Angeles, California, 90048
18. Awwad Shiekh Hasan B, Valdes-Sosa M, Gross J, Belin P. "Hearing faces and seeing voices": Amodal coding of person identity in the human brain. Sci Rep 2016; 6:37494. [PMID: 27881866] [PMCID: PMC5121604] [DOI: 10.1038/srep37494]
Abstract
Recognizing familiar individuals is achieved by the brain by combining cues from several sensory modalities, including the face of a person and her voice. Here we used functional magnetic resonance imaging (fMRI) and a whole-brain, searchlight multi-voxel pattern analysis (MVPA) to search for areas in which local fMRI patterns could result in identity classification as a function of sensory modality. We found several areas supporting face or voice stimulus classification based on fMRI responses, consistent with previous reports; the classification maps overlapped across modalities in a single area of right posterior superior temporal sulcus (pSTS). Remarkably, we also found several cortical areas, mostly located along the middle temporal gyrus, in which local fMRI patterns resulted in identity "cross-classification": vocal identity could be classified based on fMRI responses to the faces, or the reverse, or both. These findings are suggestive of a series of cortical identity representations increasingly abstracted from the input modality.
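The cross-classification logic can be sketched as follows (a toy simulation under the assumption of a linear classifier; not the published pipeline): train an identity decoder on patterns evoked by faces and test it on patterns evoked by the same identities' voices.

```python
# Toy sketch of cross-modal identity classification with simulated "fMRI" patterns:
# a classifier trained on face-evoked patterns is tested on voice-evoked patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_trials, n_voxels = 40, 100
identities = rng.integers(0, 2, size=n_trials)             # two identities

# Shared identity signal plus modality-specific noise (purely invented data)
identity_pattern = rng.standard_normal((1, n_voxels))
signal = identities[:, None] * identity_pattern
face_patterns = signal + 0.5 * rng.standard_normal((n_trials, n_voxels))
voice_patterns = signal + 0.5 * rng.standard_normal((n_trials, n_voxels))

clf = LinearSVC().fit(face_patterns, identities)            # train on faces
pred = clf.predict(voice_patterns)                          # test on voices
print("cross-modal accuracy:", accuracy_score(identities, pred))
```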
Affiliation(s)
- Bashar Awwad Shiekh Hasan: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom; Institute of Neuroscience, Newcastle University, Newcastle, United Kingdom
- Joachim Gross: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Pascal Belin: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom; Département de Psychologie, Université de Montréal, Montréal, Québec, Canada; Institut de Neurosciences de la Timone, UMR 7289, CNRS and Aix-Marseille Université, Marseille, France
19. The Glasgow Voice Memory Test: Assessing the ability to memorize and recognize unfamiliar voices. Behav Res Methods 2016; 49:97-110. [DOI: 10.3758/s13428-015-0689-6]
20. Liu RR, Corrow SL, Pancaroglu R, Duchaine B, Barton JJS. The processing of voice identity in developmental prosopagnosia. Cortex 2015; 71:390-7. [PMID: 26321070] [DOI: 10.1016/j.cortex.2015.07.030]
Abstract
Background: Developmental prosopagnosia is a disorder of face recognition that is believed to reflect impairments of visual mechanisms. However, voice recognition has rarely been evaluated in developmental prosopagnosia to clarify if it is modality-specific or part of a multi-modal person recognition syndrome. Objective: Our goal was to examine whether voice discrimination and/or recognition are impaired in subjects with developmental prosopagnosia. Design/Methods: 73 healthy controls and 12 subjects with developmental prosopagnosia performed a match-to-sample test of voice discrimination and a test of short-term voice familiarity, as well as a questionnaire about face and voice identification in daily life. Results: Eleven subjects with developmental prosopagnosia scored within the normal range for voice discrimination and voice recognition. One was impaired on discrimination and borderline for recognition, with equivalent scores for face and voice recognition, despite being unaware of voice processing problems. Conclusions: Most subjects with developmental prosopagnosia are not impaired in short-term voice familiarity, providing evidence that developmental prosopagnosia is usually a modality-specific disorder of face recognition. However, there may be heterogeneity, with a minority having additional voice processing deficits. Objective tests of voice recognition should be integrated into the diagnostic evaluation of this disorder to distinguish it from a multi-modal person recognition syndrome.
Affiliation(s)
- Ran R Liu: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada
- Sherryse L Corrow: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada
- Raika Pancaroglu: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada
- Brad Duchaine: Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Jason J S Barton: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada
21. Riedel P, Ragert P, Schelinski S, Kiebel SJ, von Kriegstein K. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition. Cortex 2015; 68:86-99. [DOI: 10.1016/j.cortex.2014.11.016]
22. Bülthoff I, Newell FN. Distinctive voices enhance the visual recognition of unfamiliar faces. Cognition 2015; 137:9-21. [PMID: 25584464] [DOI: 10.1016/j.cognition.2014.12.006]
Abstract
Several studies have provided evidence in favour of a norm-based representation of faces in memory. However, such models have hitherto failed to take account of how other person-relevant information affects face recognition performance. Here we investigated whether distinctive or typical auditory stimuli affect the subsequent recognition of previously unfamiliar faces and whether the type of auditory stimulus matters. In this study participants learned to associate either unfamiliar distinctive and typical voices or unfamiliar distinctive and typical sounds to unfamiliar faces. The results indicated that recognition performance was better for faces previously paired with distinctive than with typical voices, but we failed to find any benefit on face recognition when the faces were previously associated with distinctive sounds. These findings possibly point to an expertise effect, as faces are usually associated with voices. More importantly, it suggests that the memory for visual faces can be modified by the perceptual quality of related vocal information and more specifically that facial distinctiveness can be of a multi-sensory nature. These results have important implications for our understanding of the structure of memory for person identification.
Affiliation(s)
- I Bülthoff: Max Planck Institute for Biological Cybernetics, Spemannstr. 38, D-72076 Tübingen, Germany
- F N Newell: School of Psychology and Institute of Neuroscience, Lloyd Building, Trinity College Dublin, Dublin 2, Ireland
23. Visual abilities are important for auditory-only speech recognition: Evidence from autism spectrum disorder. Neuropsychologia 2014; 65:1-11. [DOI: 10.1016/j.neuropsychologia.2014.09.031]
24. Person recognition and the brain: Merging evidence from patients and healthy individuals. Neurosci Biobehav Rev 2014; 47:717-34. [DOI: 10.1016/j.neubiorev.2014.10.022]
25. Liu RR, Pancaroglu R, Hills CS, Duchaine B, Barton JJS. Voice Recognition in Face-Blind Patients. Cereb Cortex 2014; 26:1473-1487. [PMID: 25349193] [DOI: 10.1093/cercor/bhu240]
Abstract
Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had a combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such described case. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia.
Affiliation(s)
- Ran R Liu: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Raika Pancaroglu: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Charlotte S Hills: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Brad Duchaine: Department of Psychology, Dartmouth University, Hanover, NH, USA
- Jason J S Barton: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada; Neuro-ophthalmology Section K, VGH Eye Care Centre, Vancouver, BC, Canada V5Z 3N9
26. Blank H, Kiebel SJ, von Kriegstein K. How the human brain exchanges information across sensory modalities to recognize other people. Hum Brain Mapp 2014; 36:324-39. [PMID: 25220190] [DOI: 10.1002/hbm.22631]
Abstract
Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face- and voice-sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of face identity or physical facial properties in these areas. To address this question, we used functional magnetic resonance imaging in humans and a voice-face priming design. In this design, familiar voices were followed by morphed faces that matched or mismatched with respect to identity or physical properties. The results showed that responses in face-sensitive regions were modulated when face identity or physical properties did not match to the preceding voice. The strength of this mismatch signal depended on the level of certainty the participant had about the voice identity. This suggests that both identity and physical property information was provided by the voice to face areas. The activity and connectivity profiles differed between face-sensitive areas: (i) the occipital face area seemed to receive information about both physical properties and identity, (ii) the fusiform face area seemed to receive identity, and (iii) the anterior temporal lobe seemed to receive predominantly identity information from the voice. We interpret these results within a predictive coding scheme in which both identity and physical property information is used across sensory modalities to recognize individuals.
Affiliation(s)
- Helen Blank: Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany; MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, United Kingdom
27. Collins JA, Olson IR. Beyond the FFA: The role of the ventral anterior temporal lobes in face processing. Neuropsychologia 2014; 61:65-79. [PMID: 24937188] [DOI: 10.1016/j.neuropsychologia.2014.06.005]
Abstract
Extensive research has supported the existence of a specialized face-processing network that is distinct from the visual processing areas used for general object recognition. The majority of this work has been aimed at characterizing the response properties of the fusiform face area (FFA) and the occipital face area (OFA), which together are thought to constitute the core network of brain areas responsible for facial identification. Although accruing evidence has shown that face-selective patches in the ventral anterior temporal lobes (vATLs) are interconnected with the FFA and OFA, and that they play a role in facial identification, the relative contribution of these brain areas to the core face-processing network has remained unarticulated. Here we review recent research critically implicating the vATLs in face perception and memory. We propose that current models of face processing should be revised such that the ventral anterior temporal lobes serve a centralized role in the visual face-processing network. We speculate that a hierarchically organized system of face processing areas extends bilaterally from the inferior occipital gyri to the vATLs, with facial representations becoming increasingly complex and abstracted from low-level perceptual features as they move forward along this network. The anterior temporal face areas may serve as the apex of this hierarchy, instantiating the final stages of face recognition. We further argue that the anterior temporal face areas are ideally suited to serve as an interface between face perception and face memory, linking perceptual representations of individual identity with person-specific semantic knowledge.
Affiliation(s)
- Jessica A Collins: Department of Psychology, Temple University, 1701 North 13th Street, Philadelphia, PA 19122, USA
- Ingrid R Olson: Department of Psychology, Temple University, 1701 North 13th Street, Philadelphia, PA 19122, USA
28. Schall S, von Kriegstein K. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception. PLoS One 2014; 9:e86325. [PMID: 24466026] [PMCID: PMC3900530] [DOI: 10.1371/journal.pone.0086325]
Abstract
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
Affiliation(s)
- Sonja Schall: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Humboldt University of Berlin, Berlin, Germany
29. The face-sensitive N170 component in developmental prosopagnosia. Neuropsychologia 2012; 50:3588-99. [DOI: 10.1016/j.neuropsychologia.2012.10.017]
30. Stevenage SV, Hale S, Morgan Y, Neil GJ. Recognition by association: Within- and cross-modality associative priming with faces and voices. Br J Psychol 2012; 105:1-16. [DOI: 10.1111/bjop.12011]
Affiliation(s)
- Sarah Hale: School of Psychology, University of Southampton, Hampshire, UK
- Yasmin Morgan: School of Psychology, University of Southampton, Hampshire, UK
- Greg J. Neil: School of Psychology, University of Southampton, Hampshire, UK
31. The superiority in voice processing of the blind arises from neural plasticity at sensory processing stages. Neuropsychologia 2012; 50:2056-67. [DOI: 10.1016/j.neuropsychologia.2012.05.006]
32. Hailstone JC, Ridgway GR, Bartlett JW, Goll JC, Buckley AH, Crutch SJ, Warren JD. Voice processing in dementia: a neuropsychological and neuroanatomical analysis. Brain 2011; 134:2535-47. [PMID: 21908871] [PMCID: PMC3170540] [DOI: 10.1093/brain/awr205]
Abstract
Voice processing in neurodegenerative disease is poorly understood. Here we undertook a systematic investigation of voice processing in a cohort of patients with clinical diagnoses representing two canonical dementia syndromes: temporal variant frontotemporal lobar degeneration (n = 14) and Alzheimer’s disease (n = 22). Patient performance was compared with a healthy matched control group (n = 35). All subjects had a comprehensive neuropsychological assessment including measures of voice perception (vocal size, gender, speaker discrimination) and voice recognition (familiarity, identification, naming and cross-modal matching) and equivalent measures of face and name processing. Neuroanatomical associations of voice processing performance were assessed using voxel-based morphometry. Both disease groups showed deficits on all aspects of voice recognition and impairment was more severe in the temporal variant frontotemporal lobar degeneration group than the Alzheimer’s disease group. Face and name recognition were also impaired in both disease groups and name recognition was significantly more impaired than other modalities in the temporal variant frontotemporal lobar degeneration group. The Alzheimer’s disease group showed additional deficits of vocal gender perception and voice discrimination. The neuroanatomical analysis across both disease groups revealed common grey matter associations of familiarity, identification and cross-modal recognition in all modalities in the right temporal pole and anterior fusiform gyrus; while in the Alzheimer’s disease group, voice discrimination was associated with grey matter in the right inferior parietal lobe. The findings suggest that impairments of voice recognition are significant in both these canonical dementia syndromes but particularly severe in temporal variant frontotemporal lobar degeneration, whereas impairments of voice perception may show relative specificity for Alzheimer’s disease. The right anterior temporal lobe is likely to have a critical role in the recognition of voices and other modalities of person knowledge.
Affiliation(s)
- Julia C Hailstone: Dementia Research Centre, Institute of Neurology, University College London, Queen Square, London WC1N 3BG, UK
33.
Abstract
Currently, there are two opposing models for how voice and face information is integrated in the human brain to recognize person identity. The conventional model assumes that voice and face information is only combined at a supramodal stage (Bruce and Young, 1986; Burton et al., 1990; Ellis et al., 1997). An alternative model posits that areas encoding voice and face information also interact directly and that this direct interaction is behaviorally relevant for optimizing person recognition (von Kriegstein et al., 2005; von Kriegstein and Giraud, 2006). To disambiguate between the two different models, we tested for evidence of direct structural connections between voice- and face-processing cortical areas by combining functional and diffusion magnetic resonance imaging. We localized, at the individual subject level, three voice-sensitive areas in anterior, middle, and posterior superior temporal sulcus (STS) and face-sensitive areas in the fusiform gyrus [fusiform face area (FFA)]. Using probabilistic tractography, we show evidence that the FFA is structurally connected with voice-sensitive areas in STS. In particular, our results suggest that the FFA is more strongly connected to middle and anterior than to posterior areas of the voice-sensitive STS. This specific structural connectivity pattern indicates that direct links between face- and voice-recognition areas could be used to optimize human person recognition.
34. von Kriegstein K. A Multisensory Perspective on Human Auditory Communication. Front Neurosci 2011. [DOI: 10.1201/b11092-43]
35. Kayser C, Petkov C, Remedios R, Logothetis N. Multisensory Influences on Auditory Processing. Front Neurosci 2011. [DOI: 10.1201/9781439812174-9]
36. Kayser C, Petkov C, Remedios R, Logothetis N. Multisensory Influences on Auditory Processing. Front Neurosci 2011. [DOI: 10.1201/b11092-9]
37.
38. Person identification through faces and voices: An ERP study. Brain Res 2011; 1407:13-26. [DOI: 10.1016/j.brainres.2011.03.029]
39. Furl N, Garrido L, Dolan RJ, Driver J, Duchaine B. Fusiform gyrus face selectivity relates to individual differences in facial recognition ability. J Cogn Neurosci 2011; 23:1723-40. [PMID: 20617881] [PMCID: PMC3322334] [DOI: 10.1162/jocn.2010.21545]
Abstract
Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a "core" visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined "fusiform face areas" (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.
|
40
|
|
41
|
|
42
|
O’Mahony C, Newell FN. Integration of faces and voices, but not faces and names, in person recognition. Br J Psychol 2011; 103:73-82. [DOI: 10.1111/j.2044-8295.2011.02044.x] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
43
|
Latinus M, Crabbe F, Belin P. Learning-Induced Changes in the Cerebral Processing of Voice Identity. Cereb Cortex 2011; 21:2820-8. [DOI: 10.1093/cercor/bhr077] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
44
|
Stollhoff R, Jost J, Elze T, Kennerknecht I. Deficits in long-term recognition memory reveal dissociated subtypes in congenital prosopagnosia. PLoS One 2011; 6:e15702. [PMID: 21283572 PMCID: PMC3026793 DOI: 10.1371/journal.pone.0015702] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2010] [Accepted: 11/22/2010] [Indexed: 11/29/2022] Open
Abstract
The study investigates long-term recognition memory in congenital prosopagnosia (CP), a lifelong impairment in face identification that is present from birth. Previous investigations of processing deficits in CP have mostly relied on short-term recognition tests to estimate the scope and severity of individual deficits. We firstly report on a controlled test of long-term (one year) recognition memory for faces and objects conducted with a large group of participants with CP. Long-term recognition memory is significantly impaired in eight CP participants (CPs). In all but one case, this deficit was selective to faces and didn't extend to intra-class recognition of object stimuli. In a test of famous face recognition, long-term recognition deficits were less pronounced, even after accounting for differences in media consumption between controls and CPs. Secondly, we combined test results on long-term and short-term recognition of faces and objects, and found a large heterogeneity in severity and scope of individual deficits. Analysis of the observed heterogeneity revealed a dissociation of CP into subtypes with a homogeneous phenotypical profile. Thirdly, we found that among CPs self-assessment of real-life difficulties, based on a standardized questionnaire, and experimentally assessed face recognition deficits are strongly correlated. Our results demonstrate that controlled tests of long-term recognition memory are needed to fully assess face recognition deficits in CP. Based on controlled and comprehensive experimental testing, CP can be dissociated into subtypes with a homogeneous phenotypical profile. The CP subtypes identified align with those found in prosopagnosia caused by cortical lesions; they can be interpreted with respect to a hierarchical neural system for face perception.
Affiliation(s)
- Rainer Stollhoff
- Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany.
|
45
|
Hoover AEN, Démonet JF, Steeves JKE. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia. Neuropsychologia 2010; 48:3725-32. [PMID: 20850465 DOI: 10.1016/j.neuropsychologia.2010.09.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Revised: 09/09/2010] [Accepted: 09/09/2010] [Indexed: 11/24/2022]
Abstract
Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia, it is advantageous to develop a superior use of voices for person identity recognition in everyday life.
|
46
|
Hailstone JC, Crutch SJ, Vestergaard MD, Patterson RD, Warren JD. Progressive associative phonagnosia: a neuropsychological analysis. Neuropsychologia 2009; 48:1104-14. [PMID: 20006628 PMCID: PMC2833414 DOI: 10.1016/j.neuropsychologia.2009.12.011] [Citation(s) in RCA: 70] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2009] [Revised: 11/23/2009] [Accepted: 12/07/2009] [Indexed: 11/01/2022]
Abstract
There are few detailed studies of impaired voice recognition, or phonagnosia. Here we describe two patients with progressive phonagnosia in the context of frontotemporal lobar degeneration. Patient QR presented with behavioural decline and increasing difficulty recognising familiar voices, while patient KL presented with progressive prosopagnosia. In a series of neuropsychological experiments we assessed the ability of QR and KL to recognise and judge the familiarity of voices, faces and proper names, to recognise vocal emotions, to perceive and discriminate voices, and to recognise environmental sounds and musical instruments. The patients were assessed in relation to a group of healthy age-matched control subjects. QR exhibited severe impairments of voice identification and familiarity judgments with relatively preserved recognition of difficulty-matched faces and environmental sounds; recognition of musical instruments was impaired, though better than recognition of voices. In contrast, patient KL exhibited severe impairments of both voice and face recognition, with relatively preserved recognition of musical instruments and environmental sounds. Both patients demonstrated preserved ability to analyse perceptual properties of voices and to recognise vocal emotions. The voice processing deficit in both patients could be characterised as associative phonagnosia: in the case of QR, this was relatively selective for voices, while in the case of KL, there was evidence for a multimodal impairment of person knowledge. The findings have implications for current cognitive models of voice recognition.
Affiliation(s)
- Julia C Hailstone
- Dementia Research Centre, Institute of Neurology, University College London, Queen Square, London WC1N 3BG, United Kingdom
|
47
|
Abstract
The perception of a face allows us to recognize the person, infer his or her emotional state, better understand what the person is saying, and derive general information, such as age and gender. This unique visual stimulus has generated a wealth of research, and subsequently theoretical and methodological debate. This special issue brings together 16 original papers that show the extraordinary diversity and fruitfulness of the approaches now being pursued. They are aimed at understanding different aspects of face perception in populations ranging from healthy children to adults with brain lesions and with techniques covering the entire spectrum from paper-and-pencil tests to functional brain imaging. Together, these contributions provide an insightful overview of the current state of research on face perception and exemplify the questions that dominate the field. To one such question, whether 'face perception' is a special issue in the broad field of the cognitive neurosciences, the answer is clearly yes!
Affiliation(s)
- Andrew W Young
- Department of Psychology and York Neuroimaging Centre, University of York, York, UK
|
48
|
Hocking J, Price CJ. The influence of colour and sound on neuronal activation during visual object naming. Brain Res 2008; 1241:92-102. [PMID: 18789907 PMCID: PMC2693529 DOI: 10.1016/j.brainres.2008.08.037] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2008] [Revised: 08/04/2008] [Accepted: 08/10/2008] [Indexed: 11/18/2022]
Abstract
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Affiliation(s)
- Julia Hocking
- Centre for Magnetic Resonance, The University of Queensland, Brisbane, Australia.
|
49
|
Semantics and the multisensory brain: How meaning modulates processes of audio-visual integration. Brain Res 2008; 1242:136-50. [DOI: 10.1016/j.brainres.2008.03.071] [Citation(s) in RCA: 148] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2008] [Revised: 03/11/2008] [Accepted: 03/12/2008] [Indexed: 11/24/2022]
|
50
|
Arnott SR, Cant JS, Dutton GN, Goodale MA. Crinkling and crumpling: an auditory fMRI study of material properties. Neuroimage 2008; 43:368-78. [PMID: 18718543 DOI: 10.1016/j.neuroimage.2008.07.033] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2007] [Revised: 05/25/2008] [Accepted: 07/12/2008] [Indexed: 10/21/2022] Open
Abstract
Knowledge of an object's material composition (i.e., what it is made of) alters how we interact with that object. Seeing the bright glint or hearing the metallic crinkle of a foil plate for example, confers information about that object before we have even touched it. Recent research indicates that the medial aspect of the ventral visual pathway is sensitive to the surface properties of objects. In the present functional magnetic resonance imaging (fMRI) study, we investigated whether the ventral pathway is also sensitive to material properties derived from sound alone. Relative to scrambled material sounds and non-verbal human vocalizations, audio recordings of materials being manipulated (i.e., crumpled) in someone's hands elicited greater BOLD activity in the right parahippocampal cortex of neurologically intact listeners, as well as a cortically blind participant. Additional left inferior parietal lobe activity was also observed in the neurologically intact group. Taken together, these results support a ventro-medial pathway that is specialized for processing the material properties of objects, and suggest that there are sub-regions within this pathway that subserve the processing of acoustically-derived information about material composition.
Affiliation(s)
- Stephen R Arnott
- CIHR Group for Action and Perception, Department of Psychology, University of Western Ontario, London, Ontario, Canada.
| | | | | | | |
|