1. Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. PMID: 37545304; PMCID: PMC10404931; DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell, E. McKenna, M. A. Seveso, I. Devine, F. Alahmad, R. J. Hirst, and A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
2. Zäske R, Kaufmann JM, Schweinberger SR. Neural Correlates of Voice Learning with Distinctive and Non-Distinctive Faces. Brain Sci 2023; 13:637. PMID: 37190602; PMCID: PMC10136676; DOI: 10.3390/brainsci13040637.
Abstract
Recognizing people from their voices may be facilitated by a voice's distinctiveness, in a manner similar to that which has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces. We repeated twelve unfamiliar voices uttering short sentences, together with either distinctive or non-distinctive faces (depicted before and during voice presentation) in six learning-test cycles. During learning, distinctive faces increased early visually-evoked (N170, P200, N250) potentials relative to non-distinctive faces, and face distinctiveness modulated voice-elicited slow EEG activity at the occipito-temporal and fronto-central electrodes. At the test, unimodally-presented voices previously learned with distinctive faces were classified more quickly than were voices learned with non-distinctive faces, and also more quickly than novel voices. Moreover, voices previously learned with faces elicited an N250-like component that was similar in topography to that typically observed for facial stimuli. The preliminary source localization of this voice-induced N250 was compatible with a source in the fusiform gyrus. Taken together, our findings provide support for a theory of early interaction between voice and face processing areas during both learning and voice recognition.
Affiliation(s)
- Romi Zäske
- Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystraße 3, 07743 Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University of Jena, Leutragraben 1, 07743 Jena, Germany
- Jürgen M. Kaufmann
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Stefan R. Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University of Jena, Leutragraben 1, 07743 Jena, Germany
3. Karlsson T, Schaefer H, Barton JJS, Corrow SL. Effects of Voice and Biographic Data on Face Encoding. Brain Sci 2023; 13:148. PMID: 36672128; PMCID: PMC9857090; DOI: 10.3390/brainsci13010148.
Abstract
There are various perceptual and informational cues for recognizing people. How these interact in the recognition process is of interest. Our goal was to determine if the encoding of faces was enhanced by the concurrent presence of a voice, biographic data, or both. Using a between-subject design, four groups of 10 subjects learned the identities of 24 faces seen in video-clips. Half of the faces were seen only with their names, while the other half had additional information. For the first group this was the person's voice, for the second, it was biographic data, and for the third, both voice and biographic data. In a fourth control group, the additional information was the voice of a generic narrator relating non-biographic information. In the retrieval phase, subjects performed a familiarity task and then a face-to-name identification task with dynamic faces alone. Our results consistently showed no benefit to face encoding with additional information, for either the familiarity or identification task. Tests for equivalency indicated that facilitative effects of a voice or biographic data on face encoding were not likely to exceed 3% in accuracy. We conclude that face encoding is minimally influenced by cross-modal information from voices or biographic data.
Affiliation(s)
- Thilda Karlsson
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Faculty of Medicine, Linköping University, 582 25 Linköping, Sweden
- Heidi Schaefer
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Jason J. S. Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Correspondence: Tel.: +604-875-4339; Fax: +604-875-4302
- Sherryse L. Corrow
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Department of Psychology, Bethel University, St. Paul, MN 55112, USA
4. Fransson S, Corrow S, Yeung S, Schaefer H, Barton JJS. Effects of Faces and Voices on the Encoding of Biographic Information. Brain Sci 2022; 12:1716. PMID: 36552175; PMCID: PMC9775626; DOI: 10.3390/brainsci12121716.
Abstract
There are multiple forms of knowledge about people. Whether diverse person-related data interact is of interest regarding the more general issue of integration of multi-source information about the world. Our goal was to examine whether perception of a person's face or voice enhanced the encoding of their biographic data. We performed three experiments. In the first experiment, subjects learned the biographic data of a character with or without a video clip of their face. In the second experiment, they learned the character's data with an audio clip of either a generic narrator's voice or the character's voice relating the same biographic information. In the third experiment, an audiovisual clip of both the face and voice of either a generic narrator or the character accompanied the learning of biographic data. After learning, a test phase presented biographic data alone, and subjects were tested first for familiarity and second for matching of biographic data to the name. The results showed equivalent learning of biographic data across all three experiments, and none showed evidence that a character's face or voice enhanced the learning of biographic information. We conclude that the simultaneous processing of perceptual representations of people may not modulate the encoding of biographic data.
Affiliation(s)
- Sarah Fransson
- Faculty of Medicine, Linköping University, 581 83 Linköping, Sweden
- Sherryse Corrow
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Department of Psychology, Bethel University, St. Paul, MN 55112, USA
- Shanna Yeung
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Heidi Schaefer
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Jason J. S. Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Correspondence: Tel.: +1-604-875-4339; Fax: +1-604-875-4302
5. Maguinness C, von Kriegstein K. Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level. Hum Brain Mapp 2021; 42:3963-3982. PMID: 34043249; PMCID: PMC8288083; DOI: 10.1002/hbm.25532.
Abstract
Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so‐called ‘face‐benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face‐benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face‐sensitive regions while participants recognised the identity of auditory‐only speakers (previously learned by face) in high (SNR −4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face‐benefit in both noise levels, for most participants (16 of 21). In high‐noise, the recognition of face‐learned speakers engaged the right posterior superior temporal sulcus motion‐sensitive face area (pSTS‐mFA), a region implicated in the processing of dynamic facial cues. The face‐benefit in high‐noise also correlated positively with increased functional connectivity between this region and voice‐sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face‐benefit. In low‐noise, the face‐benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS‐mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice‐identity recognition in auditory‐only listening conditions.
Affiliation(s)
- Corrina Maguinness
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
6. Simhi N, Yovel G. Dissociating gait from static appearance: A virtual reality study of the role of dynamic identity signatures in person recognition. Cognition 2020; 205:104445. PMID: 32920344; DOI: 10.1016/j.cognition.2020.104445.
Abstract
Studies on person recognition have primarily examined recognition of static faces, presented on a computer screen at a close distance. Nevertheless, in naturalistic situations we typically see the whole dynamic person, often approaching from a distance. In such cases, facial information may be less clear, and the motion pattern of an individual, their dynamic identity signature (DIS), may be used for person recognition. Studies that examined the role of motion in person recognition presented videos of people in motion. However, such stimuli do not allow for the dissociation of gait from face and body form, as different identities differ both in their gait and static appearance. To examine the contribution of gait to person recognition, independently from static appearance, we used a virtual environment and presented, across participants, the same face and body form with different gaits. The virtual environment also enabled us to assess the distance at which a person is recognized as a continuous variable. Using this setting, we assessed the accuracy and distance at which identities are recognized based on their gait, as a function of gait distinctiveness. We find that the accuracy and distance at which people were recognized increased with gait distinctiveness. Importantly, these effects were found when recognizing identities in motion but not from static displays, indicating that DIS, rather than attention, enabled more accurate person recognition. Overall, these findings highlight that gait contributes to person recognition beyond the face and body and stress an important role for gait in real-life person recognition.
Affiliation(s)
- Noa Simhi
- The School of Psychological Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel
- Galit Yovel
- The School of Psychological Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel; The Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv 69978, Israel
7. Face-voice space: Integrating visual and auditory cues in judgments of person distinctiveness. Atten Percept Psychophys 2020; 82:3710-3727. PMID: 32696231; DOI: 10.3758/s13414-020-02084-1.
Abstract
Faces and voices each convey multiple cues enabling us to tell people apart. Research on face and voice distinctiveness commonly utilizes multidimensional space to represent these complex, perceptual abilities. We extend this framework to examine how a combined face-voice space would relate to its constituent face and voice spaces. Participants rated videos of speakers for their dissimilarity in face only, voice only, and face-voice together conditions. Multidimensional scaling (MDS) and regression analyses showed that whereas face-voice space more closely resembled face space, indicating visual dominance, face-voice distinctiveness was best characterized by a multiplicative integration of face-only and voice-only distinctiveness, indicating that auditory and visual cues are used interactively in person-distinctiveness judgments. Further, the multiplicative integration could not be explained by the small correlation found between face-only and voice-only distinctiveness. As an exploratory analysis, we next identified auditory and visual features that correlated with the dimensions in the MDS solutions. Features pertaining to facial width, lip movement, spectral centroid, fundamental frequency, and loudness variation were identified as important features in face-voice space. We discuss the implications of our findings in terms of person perception, recognition, and face-voice matching abilities.
8. Stevenage SV, Neil GJ, Parsons B, Humphreys A. A sound effect: Exploration of the distinctiveness advantage in voice recognition. Appl Cogn Psychol 2018; 32:526-536. PMID: 30333682; PMCID: PMC6175009; DOI: 10.1002/acp.3424.
Abstract
Two experiments are presented, which explore the presence of a distinctiveness advantage when recognising unfamiliar voices. In Experiment 1, distinctive voices were recognised significantly better, and with greater confidence, in a sequential same/different matching task compared with typical voices. These effects were replicated and extended in Experiment 2, as distinctive voices were recognised better even under challenging listening conditions imposed by nonsense sentences and temporal reversal. Taken together, the results aligned well with similar results when processing faces, and provided a useful point of comparison between voice and face processing.
Affiliation(s)
- Greg J. Neil
- Southampton Solent University, School of Sport, Health and Social Sciences, Southampton, UK
- Beth Parsons
- University of Winchester, Department of Psychology, Winchester, UK
- Abi Humphreys
- University of Southampton, Department of Psychology, Southampton, UK
9. Maguinness C, Roswandowitz C, von Kriegstein K. Understanding the mechanisms of familiar voice-identity recognition in the human brain. Neuropsychologia 2018; 116:179-193. DOI: 10.1016/j.neuropsychologia.2018.03.039.
10. Stevenage SV. Drawing a distinction between familiar and unfamiliar voice processing: A review of neuropsychological, clinical and empirical findings. Neuropsychologia 2017; 116:162-178. PMID: 28694095; DOI: 10.1016/j.neuropsychologia.2017.07.005.
Abstract
Thirty years on from their initial observation that familiar voice recognition is not the same as unfamiliar voice discrimination (van Lancker and Kreiman, 1987), the current paper reviews available evidence in support of a distinction between familiar and unfamiliar voice processing. Here, an extensive review of the literature is provided, drawing on evidence from four domains of interest: the neuropsychological study of healthy individuals, neuropsychological investigation of brain-damaged individuals, the exploration of voice recognition deficits in less commonly studied clinical conditions, and finally empirical data from healthy individuals. All evidence is assessed in terms of its contribution to the question of interest - is familiar voice processing distinct from unfamiliar voice processing. In this regard, the evidence provides compelling support for van Lancker and Kreiman's early observation. Two considerations result: First, the limits of research based on one or other type of voice stimulus are more clearly appreciated. Second, given the demonstration of a distinction between unfamiliar and familiar voice processing, a new wave of research is encouraged which examines the transition involved as a voice is learned.
Affiliation(s)
- Sarah V Stevenage
- Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ, UK
11.
Abstract
Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.
Affiliation(s)
- Merle T. Fairhurst
- Centre for the Study of the Senses, School of Advanced Study, University of London, London, United Kingdom
- Munich Centre for Neuroscience, Ludwig Maximilian University, Munich, Germany
- Minnie Scott
- Tate Learning, Tate Britain, London, United Kingdom
- Ophelia Deroy
- Centre for the Study of the Senses, School of Advanced Study, University of London, London, United Kingdom
- Munich Centre for Neuroscience, Ludwig Maximilian University, Munich, Germany
12. Maguinness C, von Kriegstein K. Cross-modal processing of voices and faces in developmental prosopagnosia and developmental phonagnosia. Visual Cognition 2017. DOI: 10.1080/13506285.2017.1313347.
Affiliation(s)
- Corrina Maguinness
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Psychology, Humboldt University of Berlin, Berlin, Germany
13. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies. Behav Res Methods 2017; 50:684-702. PMID: 28432568; DOI: 10.3758/s13428-017-0894-6.
Abstract
Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.
14. Bülthoff I, Newell FN. Crossmodal priming of unfamiliar faces supports early interactions between voices and faces in person perception. Visual Cognition 2017. DOI: 10.1080/13506285.2017.1290729.
Affiliation(s)
- Fiona N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
15. Butcher N, Lander K. Exploring the motion advantage: evaluating the contribution of familiarity and differences in facial motion. Q J Exp Psychol (Hove) 2016; 70:919-929. PMID: 26822035; DOI: 10.1080/17470218.2016.1138974.
Abstract
Seeing a face move can improve familiar face recognition, face matching, and learning. More specifically, familiarity with a face may facilitate the learning of an individual's "dynamic facial signature". In the outlined research we examine the relationship between participant ratings of familiarity, the distinctiveness of motion, the amount of facial motion, and the recognition of familiar moving faces (Experiment 1) as well as the magnitude of the motion advantage (Experiment 2). Significant positive correlations were found between all factors. Findings suggest that faces rated as moving a lot and in a distinctive manner benefited the most from being seen in motion. Additionally findings indicate that facial motion information becomes a more important cue to recognition the more familiar a face is, suggesting that "dynamic facial signatures" continue to be learnt over time and integrated within the face representation. Results are discussed in relation to theoretical explanations of the moving face advantage.
Affiliation(s)
- Natalie Butcher
- Social Futures Institute, Teesside University, Middlesbrough, UK
- Karen Lander
- School of Psychological Sciences, University of Manchester, Manchester, UK
16. Perrodin C, Kayser C, Abel TJ, Logothetis NK, Petkov CI. Who is That? Brain Networks and Mechanisms for Identifying Individuals. Trends Cogn Sci 2015; 19:783-796. PMID: 26454482; PMCID: PMC4673906; DOI: 10.1016/j.tics.2015.09.002.
Abstract
Social animals can identify conspecifics by many forms of sensory input. However, whether the neuronal computations that support this ability to identify individuals rely on modality-independent convergence or involve ongoing synergistic interactions along the multiple sensory streams remains controversial. Direct neuronal measurements at relevant brain sites could address such questions, but this requires better bridging the work in humans and animal models. Here, we overview recent studies in nonhuman primates on voice and face identity-sensitive pathways and evaluate the correspondences to relevant findings in humans. This synthesis provides insights into converging sensory streams in the primate anterior temporal lobe (ATL) for identity processing. Furthermore, we advance a model and suggest how alternative neuronal mechanisms could be tested.
Affiliation(s)
- Catherine Perrodin
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany; Institute of Behavioural Neuroscience, University College London, London, WC1H 0AP, UK
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, UK
- Taylor J Abel
- Department of Neurosurgery, University of Iowa, Iowa City, IA 52242, USA
- Nikos K Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany; Division of Imaging Science and Biomedical Engineering, University of Manchester, Manchester, M13 9PT, UK
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, NE2 4HH, UK
17. Liu RR, Corrow SL, Pancaroglu R, Duchaine B, Barton JJS. The processing of voice identity in developmental prosopagnosia. Cortex 2015; 71:390-397. PMID: 26321070; DOI: 10.1016/j.cortex.2015.07.030.
Abstract
BACKGROUND: Developmental prosopagnosia is a disorder of face recognition that is believed to reflect impairments of visual mechanisms. However, voice recognition has rarely been evaluated in developmental prosopagnosia to clarify if it is modality-specific or part of a multi-modal person recognition syndrome.
OBJECTIVE: Our goal was to examine whether voice discrimination and/or recognition are impaired in subjects with developmental prosopagnosia.
DESIGN/METHODS: 73 healthy controls and 12 subjects with developmental prosopagnosia performed a match-to-sample test of voice discrimination and a test of short-term voice familiarity, as well as a questionnaire about face and voice identification in daily life.
RESULTS: Eleven subjects with developmental prosopagnosia scored within the normal range for voice discrimination and voice recognition. One was impaired on discrimination and borderline for recognition, with equivalent scores for face and voice recognition, despite being unaware of voice processing problems.
CONCLUSIONS: Most subjects with developmental prosopagnosia are not impaired in short-term voice familiarity, providing evidence that developmental prosopagnosia is usually a modality-specific disorder of face recognition. However, there may be heterogeneity, with a minority having additional voice processing deficits. Objective tests of voice recognition should be integrated into the diagnostic evaluation of this disorder to distinguish it from a multi-modal person recognition syndrome.
Affiliation(s)
- Ran R Liu
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada
- Sherryse L Corrow
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada
- Raika Pancaroglu
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada
- Brad Duchaine
- Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Jason J S Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Eye Care Centre, Vancouver, BC, Canada