1. Gao D, Liang X, Ting Q, Nichols ES, Bai Z, Xu C, Cai M, Liu L. A meta-analysis of letter-sound integration: Assimilation and accommodation in the superior temporal gyrus. Hum Brain Mapp 2024;45:e26713. PMID: 39447213; PMCID: PMC11501095; DOI: 10.1002/hbm.26713.
Abstract
Letter-sound integration is a relatively recent cultural invention, yet the ability is readily acquired even though dedicated neural machinery has not had time to evolve. Leading theories of how the brain accommodates literacy acquisition include the neural recycling hypothesis and the assimilation-accommodation hypothesis. The neural recycling hypothesis proposes that a new cultural skill develops by "invading" preexisting neural structures that support a similar cognitive function, while the assimilation-accommodation hypothesis holds that a new cognitive skill relies on direct invocation of preexisting systems (assimilation) and recruits additional brain areas as task demands require (accommodation). Both theories agree that letter-sound integration may be achieved by reusing preexisting, functionally similar neural substrates, but they differ on how this occurs. We examined the evidence for each hypothesis by systematically comparing letter-sound integration with two preexisting, functionally similar audiovisual (AV) processes, namely object-sound and speech-sound integration, in an activation likelihood estimation (ALE) meta-analysis. All three types of AV integration recruited the left posterior superior temporal gyrus (STG); speech-sound integration additionally activated the bilateral middle STG, and letter-sound integration directly invoked the AV areas involved in speech-sound integration. These findings suggest that letter-sound integration may reuse the STG regions serving speech-sound and object-sound integration through an assimilation-accommodation mechanism.
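Both this entry and entry 3 rest on activation likelihood estimation (ALE). The following is a minimal sketch of the core ALE computation only; the grid, coordinates, and kernel width are invented for illustration, and production analyses use dedicated tools (e.g., GingerALE, NiMARE) with sample-size-dependent kernels and permutation-based significance testing.

```python
import numpy as np

def toy_ale(experiments, grid_shape=(40, 48, 40), sigma_vox=2.0):
    """Toy activation likelihood estimation (ALE) on a voxel grid.

    experiments: list of (N_i x 3) arrays of peak coordinates in voxel
    units. Each focus is blurred with an isotropic 3D Gaussian (real ALE
    derives the kernel width from each study's sample size); a modeled
    activation (MA) map per experiment takes the voxel-wise maximum over
    its foci, and the ALE map is the probabilistic union of the MA maps.
    """
    xx, yy, zz = np.indices(grid_shape)
    not_active = np.ones(grid_shape)  # running product of (1 - MA_i)
    for foci in experiments:
        ma = np.zeros(grid_shape)
        for x, y, z in np.asarray(foci, dtype=float):
            d2 = (xx - x) ** 2 + (yy - y) ** 2 + (zz - z) ** 2
            ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma_vox ** 2)))
        not_active *= 1.0 - ma
    return 1.0 - not_active  # high where experiments report nearby foci

# Illustrative call: two toy experiments with overlapping peaks
ale = toy_ale([np.array([[12, 30, 22]]),
               np.array([[13, 31, 22], [30, 10, 8]])])
```

The voxel-wise maximum within an experiment prevents a single study with many nearby foci from dominating, while the union across experiments rewards convergence between studies.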
Affiliation(s)
- Danqi Gao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xitong Liang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qi Ting
- Department of Brain Cognition and Intelligent Medicine, Beijing University of Posts and Telecommunications, Beijing, China
- Zilin Bai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chaoying Xu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Mingnan Cai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Li Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
2. Giurgola S, Lo Gerfo E, Farnè A, Roy AC, Bolognini N. Multisensory integration and motor resonance in the primary motor cortex. Cortex 2024;179:235-246. PMID: 39213776; DOI: 10.1016/j.cortex.2024.07.015.
Abstract
Humans are endowed with a motor system that resonates to speech sounds, but whether concurrent visual information from lip movements can improve speech perception at a motor level through multisensory integration mechanisms remains unknown. The aim of this study was therefore to explore behavioral and neurophysiological correlates of multisensory influences on motor resonance in speech perception. Motor-evoked potentials (MEPs), elicited by single-pulse transcranial magnetic stimulation (TMS) applied over the left lip muscle (orbicularis oris) representation in the primary motor cortex, were recorded in healthy participants during the presentation of syllables in unimodal (visual or auditory) or multisensory (audio-visual) congruent or incongruent conditions. At the behavioral level, subjects identified syllables better in the congruent audio-visual condition than in the unimodal conditions, showing a multisensory enhancement effect. Accordingly, at the neurophysiological level, MEP amplitudes were larger in the congruent audio-visual condition than in the unimodal ones. Incongruent audio-visual syllables resulting in illusory percepts did not increase corticospinal excitability, which was in fact comparable to that induced by veridical perception of the same syllable. In conclusion, seeing and hearing congruent bilabial syllables increases the excitability of the lip representation in the primary motor cortex, documenting that multisensory integration can facilitate speech processing by influencing motor resonance. These findings highlight the modulatory role of multisensory processing, showing that it can boost speech perception and that multisensory interactions occur not only within higher-order regions but also within primary motor areas, as indexed by changes in corticospinal excitability.
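The multisensory enhancement reported here is often quantified in the literature with an enhancement index that compares the multisensory response against the strongest unimodal one. The sketch below illustrates that generic index, not the authors' own analysis; all values are made up.

```python
import numpy as np

def multisensory_enhancement_index(av, a, v):
    """Multisensory enhancement index (MEI): percent gain of the
    audiovisual response over the best unimodal response, computed
    per subject. Positive values indicate enhancement. Inputs are
    per-subject mean responses (e.g., MEP amplitudes in mV).
    """
    av, a, v = map(np.asarray, (av, a, v))
    best_unimodal = np.maximum(a, v)
    return 100.0 * (av - best_unimodal) / best_unimodal

# Illustrative values only: AV > max(A, V) yields a positive MEI
print(multisensory_enhancement_index([1.2, 0.9], [1.0, 0.8], [0.7, 0.6]))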
Affiliation(s)
- Serena Giurgola
- Department of Psychology & NeuroMI - Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- Alessandro Farnè
- Impact Team of the Lyon Neuroscience Research Centre, INSERM U1028 CNRS UMR5292, University Claude Bernard Lyon 1, Lyon, France
- Alice C Roy
- Laboratoire Dynamique du Langage, Centre National de la Recherche Scientifique, UMR 5596, CNRS Université de Lyon 2, Lyon, France
- Nadia Bolognini
- Department of Psychology & NeuroMI - Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy; IRCCS Istituto Auxologico Italiano, Laboratory of Neuropsychology, Milan, Italy
3. Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023;34:223-245. PMID: 36084305; DOI: 10.1515/revneuro-2022-0065.
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. We therefore conducted an activation likelihood estimation (ALE) meta-analysis across multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions form a general multisensory integration network with distinct functional roles: the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. By including multiple sensory modalities, our meta-analysis thus provides evidence for a common brain network whose regions support different functional roles in multisensory integration.
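Meta-analytic connectivity modelling (MACM), used in this study, restricts the corpus to experiments whose reported foci fall inside a seed region and then recomputes ALE over that subset. The sketch below shows only the filtering step, with invented names and a simple spherical seed (real pipelines typically use anatomical masks); a standard ALE over the retained experiments, as sketched under entry 1, then yields the seed's co-activation map.

```python
import numpy as np

def experiments_coactivating_with_seed(experiments, seed_center, radius_mm=10.0):
    """First step of meta-analytic connectivity modelling (MACM):
    keep only experiments reporting at least one activation focus
    inside a spherical seed region.

    experiments: list of (N_i x 3) arrays of focus coordinates in mm.
    seed_center: (x, y, z) coordinate of the seed, in the same space.
    """
    seed = np.asarray(seed_center, dtype=float)
    kept = []
    for foci in experiments:
        dist = np.linalg.norm(np.asarray(foci, dtype=float) - seed, axis=1)
        if np.any(dist <= radius_mm):
            kept.append(foci)
    return kept
```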
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
4. Zhao T, Hu A, Su R, Lyu C, Wang L, Yan N. Phonetic versus spatial processes during motor-oriented imitations of visuo-labial and visuo-lingual speech: A functional near-infrared spectroscopy study. Eur J Neurosci 2021;55:154-174. PMID: 34854143; DOI: 10.1111/ejn.15550.
Abstract
While a large body of research has examined how visual speech facilitates auditory speech recognition, few studies have investigated the processing of visual speech gestures in motor-oriented tasks that focus on the spatial and motor features of articulator actions rather than the phonetic features of auditory and visual speech. The current study examined the engagement of spatial and phonetic processing of visual speech in a motor-oriented speech imitation task. Functional near-infrared spectroscopy (fNIRS) was used to measure haemodynamic activity related to spatial processing and audiovisual integration in the superior parietal lobe (SPL) and the posterior superior/middle temporal gyrus (pSTG/pMTG), respectively. In addition, visuo-labial and visuo-lingual speech were compared to examine the influence of visual familiarity and audiovisual association on the processes in question. fNIRS revealed significant activations in the SPL but no supra-additive audiovisual activations in the pSTG/pMTG, suggesting that the processing of audiovisual speech stimuli was primarily focused on spatial processes related to action comprehension and preparation, whereas phonetic processes related to audiovisual integration were minimal. Comparisons between visuo-labial and visuo-lingual speech imitations revealed no significant difference in the activation of the SPL or the pSTG/pMTG, suggesting that a higher degree of visual familiarity and audiovisual association did not significantly influence how visuo-labial speech was processed relative to visuo-lingual speech. The current study offered insight into visual-speech processing under a motor-oriented task objective and provided further evidence that multimodal speech integration is modulated by voluntary selective attention and task objective.
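The supra-additivity criterion referenced here, a multisensory response exceeding the sum of the unimodal responses (AV > A + V), is a common test for multisensory integration. Below is a hypothetical per-channel version using per-subject GLM betas; it is a sketch of the criterion under those assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import stats

def supra_additivity_test(beta_av, beta_a, beta_v):
    """Test the supra-additive criterion AV > A + V for one channel or
    region across subjects. Inputs are per-subject response estimates
    (e.g., GLM betas from fNIRS or fMRI). Returns a paired one-sided
    t-test of the AV response against the summed unimodal responses.
    """
    beta_av, beta_a, beta_v = map(np.asarray, (beta_av, beta_a, beta_v))
    return stats.ttest_rel(beta_av, beta_a + beta_v, alternative='greater')

# Illustrative values only: AV betas vs. summed A and V betas
print(supra_additivity_test([0.9, 1.1, 1.0], [0.3, 0.4, 0.35], [0.2, 0.3, 0.25]))
```

In practice the test is run per channel or region with correction for multiple comparisons; a null result, as in the pSTG/pMTG here, means the AV response did not exceed the additive prediction.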
Affiliation(s)
- Tinghao Zhao
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Anming Hu
- Department of Rehabilitation Medicine, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Rongfeng Su
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chengchen Lyu
- Institute of Software, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
- Lan Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Nan Yan
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
5. Treille A, Vilain C, Schwartz JL, Hueber T, Sato M. Electrophysiological evidence for audio-visuo-lingual speech integration. Neuropsychologia 2018;109:126-133. DOI: 10.1016/j.neuropsychologia.2017.12.024.