1. Endo N, Vilain C, Nakazawa K, Ito T. Somatosensory influence on auditory cortical response of self-generated sound. Neuropsychologia 2025; 211:109103. [PMID: 40021117; DOI: 10.1016/j.neuropsychologia.2025.109103]
Abstract
Motor execution that results in the generation of sounds attenuates the cortical response to these self-generated sounds. This attenuation has been explained as a result of motor-relevant processing. The current study shows that corresponding somatosensory inputs can also change the auditory processing of a self-generated sound. We recorded auditory event-related potentials (ERPs) in response to self-generated sounds and assessed how the amount of auditory attenuation changed according to the somatosensory inputs. The sound stimuli were generated by a finger movement that pressed on a virtual object produced by a haptic robotic device. Somatosensory inputs were modulated by changing the stiffness of this virtual object (low and high) in an unpredictable manner. For comparison purposes, we carried out the same test with a computer keyboard, which is conventionally used to induce auditory attenuation of self-generated sound. While N1 and P2 attenuations were clearly induced in the control condition with the keyboard, as has been observed in previous studies, with the robotic device the amplitude of N1 varied according to the stiffness of the virtual object: N1 amplitude in the low-stiffness condition was similar to that obtained with the keyboard, but N1 amplitude in the high-stiffness condition was not. In addition, P2 attenuation did not differ between stiffness conditions. The waveforms of auditory ERPs after 200 ms also differed according to the stiffness conditions. The estimated source of N1 attenuation was located in the right parietal area. These results suggest that somatosensory inputs during movement can modify the auditory processing of self-generated sound. The auditory processing of self-generated sound may reflect self-referenced processing, such as an embodied process or an action-perception mechanism.
Affiliation(s)
- Nozomi Endo
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
- Coriandre Vilain
- Univ. Grenoble Alpes, CNRS, Grenoble-INP, GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, Saint Martin d'Heres, CEDEX, France
- Kimitaka Nakazawa
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble-INP, GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, Saint Martin d'Heres, CEDEX, France.
2. De Ridder D, Adhia D, Vanneste S. The brain's duck test in phantom percepts: Multisensory congruence in neuropathic pain and tinnitus. Brain Res 2024; 1844:149137. [PMID: 39103069; DOI: 10.1016/j.brainres.2024.149137]
Abstract
Chronic neuropathic pain and chronic tinnitus have been likened to phantom percepts, in which complete or partial sensory deafferentation results in a filling-in of the missing information derived from memory. One hundred fifty participants (50 with tinnitus, 50 with chronic pain, and 50 healthy controls) underwent resting-state EEG. Source-localized current density was recorded from all the sensory cortices (olfactory, gustatory, somatosensory, auditory, vestibular, visual) as well as the parahippocampal area. Functional connectivity, by means of lagged phase synchronization, was also computed between these regions of interest. Pain and tinnitus were associated with gamma-band activity, reflecting prediction errors, in all sensory cortices except the olfactory and gustatory cortices. Functional connectivity analyses identified theta-frequency connectivity between each of the sensory cortices (except the chemical senses) and the parahippocampus, but not between the individual sensory cortices. When one sensory domain is deprived, the other senses may provide the parahippocampal 'contextual' area with the most likely sound or somatosensory sensation to fill in the gap, applying an abductive 'duck test' approach, i.e., one based on stored multisensory congruence. This novel concept paves the way for novel treatments for pain and tinnitus, using multisensory (i.e., visual, vestibular, somatosensory, auditory) modulation with or without associated parahippocampal targeting.
Affiliation(s)
- Dirk De Ridder
- Unit of Neurosurgery, Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand
- Divya Adhia
- Unit of Neurosurgery, Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand
- Sven Vanneste
- School of Psychology, Trinity College Dublin, Dublin, Ireland; Global Brain Health Institute & Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland. https://www.lab-clint.org
3. Ashokumar M, Schwartz JL, Ito T. Changes in Speech Production Following Perceptual Training With Orofacial Somatosensory Inputs. J Speech Lang Hear Res 2024; 67:3962-3973. [PMID: 38497731; DOI: 10.1044/2023_jslhr-23-00249]
Abstract
PURPOSE Orofacial somatosensory inputs play an important role in speech motor control and speech learning. Since receiving specific auditory-somatosensory inputs during speech perceptual training alters speech perception, similar perceptual training could also alter speech production. We examined whether production performance was changed by perceptual training with orofacial somatosensory inputs. METHOD We focused on the French vowels /e/ and /ø/, which are contrasted in their articulation by horizontal gestures. Perceptual training consisted of a vowel identification task contrasting /e/ and /ø/. Along with the training, for the first group of participants, somatosensory stimulation was applied as facial skin stretch in the backward direction. We recorded the target vowels uttered by the participants before and after the perceptual training and compared their F1, F2, and F3 formants. We also tested a control group with no somatosensory stimulation and another somatosensory group trained with a different vowel continuum (/e/-/i/). RESULTS Perceptual training with somatosensory stimulation induced changes in F2 and F3 in the produced vowel sounds. F2 decreased consistently in the two somatosensory groups. F3 increased following the /e/-/ø/ training and decreased following the /e/-/i/ training. The F2 change was significantly correlated with the perceptual shift between the first and second halves of the training phase in the somatosensory group with the /e/-/ø/ training, but not with the /e/-/i/ training. The control group displayed no effect on F2 and F3, and only a tendency toward an F1 increase. CONCLUSION The results suggest that somatosensory inputs associated with speech sound inputs can play a role in speech training and learning, in both production and perception.
Affiliation(s)
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, France
4. Trudeau-Fisette P, Vidou C, Ménard L. Speech sensorimotor relationships in francophone preschoolers and adults: Adaptation to real-time auditory feedback perturbations. PLoS One 2024; 19:e0306246. [PMID: 39172970; PMCID: PMC11341022; DOI: 10.1371/journal.pone.0306246]
Abstract
PURPOSE This study investigates the development of sensorimotor relationships by examining adaptation to real-time perturbations of auditory feedback. METHOD Acoustic signals were recorded while preschoolers and adult speakers of Canadian French produced several utterances of the front rounded vowel /ø/ for which F2 was gradually shifted up to a maximum of 40%. RESULTS The findings indicate that, although preschool-aged children produced responses to the perturbed feedback that were overall similar to those of adults, they displayed significantly more trial-to-trial variability than adults. Furthermore, whereas the magnitude of the adaptation in adults was positively correlated with the slope of the perceptual categorical function, the amount of adaptation in children was linked to the variability of their productions in the baseline condition. These patterns suggest that the immature motor control observed in children, which contributes to increased variability in their speech production, plays a role in shaping adaptive behavior, as it allows children to explore articulatory/acoustic spaces and learn sensorimotor relationships.
Affiliation(s)
- Paméla Trudeau-Fisette
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, Quebec, Canada
- Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Camille Vidou
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, Quebec, Canada
- Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Lucie Ménard
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, Quebec, Canada
- Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
5. Kent RD. The Feel of Speech: Multisystem and Polymodal Somatosensation in Speech Production. J Speech Lang Hear Res 2024; 67:1424-1460. [PMID: 38593006; DOI: 10.1044/2024_jslhr-23-00575]
Abstract
PURPOSE The oral structures such as the tongue and lips have remarkable somatosensory capacities, but understanding the roles of somatosensation in speech production requires a more comprehensive knowledge of somatosensation in the speech production system in its entirety, including the respiratory, laryngeal, and supralaryngeal subsystems. This review was conducted to summarize the system-wide somatosensory information available for speech production. METHOD The search was conducted with PubMed/Medline and Google Scholar for articles published until November 2023. Numerous search terms were used in conducting the review, which covered the topics of psychophysics, basic and clinical behavioral research, neuroanatomy, and neuroscience. RESULTS AND CONCLUSIONS The current understanding of speech somatosensation rests primarily on the two pillars of psychophysics and neuroscience. The confluence of polymodal afferent streams supports the development, maintenance, and refinement of speech production. Receptors are both canonical and noncanonical, with the latter occurring especially in the muscles innervated by the facial nerve. Somatosensory representation in the cortex is disproportionately large and provides for sensory interactions. Speech somatosensory function is robust over the lifespan, with possible declines in advanced aging. The understanding of somatosensation in speech disorders is largely disconnected from research and theory on speech production. A speech somatoscape is proposed as the generalized, system-wide sensation of speech production, with implications for speech development, speech motor control, and speech disorders.
6. Saito H, Tiede M, Whalen DH, Ménard L. The effect of native language and bilingualism on multimodal perception in speech: A study of audio-aerotactile integration. J Acoust Soc Am 2024; 155:2209-2220. [PMID: 38526052; PMCID: PMC10965246; DOI: 10.1121/10.0025381]
Abstract
Previous studies of speech perception revealed that tactile sensation can be integrated into the perception of stop consonants. It remains uncertain whether such multisensory integration can be shaped by linguistic experience, such as the listener's native language(s). This study investigates audio-aerotactile integration in phoneme perception for English and French monolinguals as well as English-French bilingual listeners. Six-step voice onset time continua of alveolar (/da/-/ta/) and labial (/ba/-/pa/) stops constructed from both English and French endpoints were presented to listeners, who performed a forced-choice identification task. Air puffs were synchronized to syllable onset and randomly applied to the back of the hand. Results show that stimuli with an air puff elicited more "voiceless" responses for the /da/-/ta/ continuum from both English and French listeners. This suggests that audio-aerotactile integration can occur even though the French listeners did not have an aspiration/non-aspiration contrast in their native language. Furthermore, bilingual speakers showed larger air puff effects compared to monolinguals in both languages, perhaps due to bilinguals' heightened receptiveness to multimodal information in speech.
Affiliation(s)
- Haruka Saito
- Département de Linguistique, Université du Québec à Montréal, Montréal, Québec H2L2C5, Canada
- Mark Tiede
- Department of Psychiatry, Yale School of Medicine, New Haven, Connecticut 06520, USA
- D H Whalen
- The Graduate Center, City University of New York (CUNY), New York, New York 10016, USA
- Yale Child Study Center, New Haven, Connecticut 06520, USA
- Lucie Ménard
- Département de Linguistique, Université du Québec à Montréal, Montréal, Québec H2L2C5, Canada
7. Choi D, Yeung HH, Werker JF. Sensorimotor foundations of speech perception in infancy. Trends Cogn Sci 2023:S1364-6613(23)00124-9. [PMID: 37302917; DOI: 10.1016/j.tics.2023.05.007]
Abstract
The perceptual system for speech is highly organized from early infancy. This organization bootstraps young human learners' ability to acquire their native speech and language from speech input. Here, we review behavioral and neuroimaging evidence that perceptual systems beyond the auditory modality are also specialized for speech in infancy, and that motor and sensorimotor systems can influence speech perception even in infants too young to produce speech-like vocalizations. These investigations complement existing literature on infant vocal development and on the interplay between speech perception and production systems in adults. We conclude that a multimodal speech and language network is present before speech-like vocalizations emerge.
Affiliation(s)
- Dawoon Choi
- Department of Psychology, Yale University, New Haven, CT, USA.
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Janet F Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada.
8. Ménard L, Beaudry L, Perrier P. Effects of somatosensory perturbation on the perception of French /u/. JASA Express Lett 2023; 3:2887654. [PMID: 37125874; DOI: 10.1121/10.0017933]
Abstract
In a study of whether somatosensory feedback related to articulatory configuration is involved in speech perception, 30 French-speaking adults performed a speech discrimination task in which vowel pairs along the French /u/ (rounded vowel requiring a small lip area) to /œ/ (rounded vowel associated with larger lip area) continuum were used as stimuli. Listeners had to perform the test in two conditions: with a 2-cm-diameter lip-tube in place (mimicking /œ/) and without the lip-tube (neutral lip position). Results show that, in the lip-tube condition, listeners perceived more stimuli as /œ/, in line with the proposal that an auditory-somatosensory interaction exists.
Affiliation(s)
- Lucie Ménard
- Laboratoire de Phonétique, Université du Québec à Montréal, Center for Research on Brain, Language, and Music, CP. 8888, succ. Centre-Ville, Montreal, Québec H3C 3P8, Canada
- Lambert Beaudry
- Laboratoire de Phonétique, Université du Québec à Montréal, Center for Research on Brain, Language, and Music, CP. 8888, succ. Centre-Ville, Montreal, Québec H3C 3P8, Canada
- Pascal Perrier
- Université Grenoble Alpes, Centre National de la Recherche Scientifique (CNRS), Grenoble Institut National Polytechnique (INP), Institute of Engineering, and GIPSA-Lab, 38000 Grenoble, France
9. De Ridder D, Friston K, Sedley W, Vanneste S. A parahippocampal-sensory Bayesian vicious circle generates pain or tinnitus: a source-localized EEG study. Brain Commun 2023; 5:fcad132. [PMID: 37223127; PMCID: PMC10202557; DOI: 10.1093/braincomms/fcad132]
Abstract
Pain and tinnitus share common pathophysiological mechanisms, clinical features, and treatment approaches. A source-localized resting-state EEG study was conducted in 150 participants: 50 healthy controls, 50 pain patients, and 50 tinnitus patients. Resting-state activity as well as functional and effective connectivity was computed in source space. Pain and tinnitus were characterized by increased theta activity in the pregenual anterior cingulate cortex, extending to the lateral prefrontal cortex and medial anterior temporal lobe. Gamma-band activity was increased in both auditory and somatosensory cortex, irrespective of the pathology, and extended to the dorsal anterior cingulate cortex and parahippocampus. Functional and effective connectivity were largely similar in pain and tinnitus, except for a parahippocampal-sensory loop that distinguished pain from tinnitus. In tinnitus, the effective connectivity between parahippocampus and auditory cortex is bidirectional, whereas the effective connectivity between parahippocampus and somatosensory cortex is unidirectional. In pain, the parahippocampal-somatosensory connectivity is bidirectional, but the parahippocampal-auditory connectivity is unidirectional. These modality-specific loops exhibited theta-gamma nesting. Applying a Bayesian model of brain functioning, these findings suggest that the phenomenological difference between auditory and somatosensory phantom percepts results from a vicious circle of belief updating in the context of missing sensory information. This finding may further our understanding of multisensory integration and speaks to a universal treatment for pain and tinnitus: selectively disrupting parahippocampal-somatosensory and parahippocampal-auditory theta-gamma activity and connectivity.
Affiliation(s)
- Dirk De Ridder
- Unit of Neurosurgery, Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, Dunedin 9016, New Zealand
- Karl Friston
- Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3AR, UK
- William Sedley
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Sven Vanneste
- Lab for Clinical & Integrative Neuroscience, Global Brain Health Institute and Institute of Neuroscience, Trinity College Dublin, College Green 2, Dublin D02 PN40, Ireland
10. Franken MK, Liu BC, Ostry DJ. Towards a somatosensory theory of speech perception. J Neurophysiol 2022; 128:1683-1695. [PMID: 36416451; PMCID: PMC9762980; DOI: 10.1152/jn.00381.2022]
Abstract
Speech perception is known to be a multimodal process, relying not only on auditory input but also on the visual system and possibly on the motor system as well. To date there has been little work on the potential involvement of the somatosensory system in speech perception. In the present review, we identify the somatosensory system as another contributor to speech perception. First, we argue that evidence in favor of a motor contribution to speech perception can just as easily be interpreted as showing somatosensory involvement. Second, physiological and neuroanatomical evidence for auditory-somatosensory interactions across the auditory hierarchy indicates the availability of a neural infrastructure that supports somatosensory involvement in auditory processing in general. Third, there is accumulating evidence for somatosensory involvement in the context of speech specifically. In particular, tactile stimulation modifies speech perception, and speech auditory input elicits activity in somatosensory cortical areas. Moreover, speech sounds can be decoded from activity in somatosensory cortex; lesions to this region affect perception, and vowels can be identified based on somatic input alone. We suggest that the somatosensory involvement in speech perception derives from the somatosensory-auditory pairing that occurs during speech production and learning. By bringing together findings from a set of studies that have not been previously linked, the present article identifies the somatosensory system as a presently unrecognized contributor to speech perception.
Affiliation(s)
- David J Ostry
- McGill University, Montreal, Quebec, Canada
- Haskins Laboratories, New Haven, Connecticut
11. Ashokumar M, Guichet C, Schwartz JL, Ito T. Correlation between the effect of orofacial somatosensory inputs in speech perception and speech production performance. Audit Percept Cogn 2022; 6:97-107. [PMID: 37260602; PMCID: PMC10229140; DOI: 10.1080/25742442.2022.2134674]
Abstract
INTRODUCTION Orofacial somatosensory inputs modify the perception of speech sounds. Such auditory-somatosensory integration likely develops alongside speech production acquisition. We examined whether the somatosensory effect in speech perception varies depending on individual characteristics of speech production. METHODS The somatosensory effect in speech perception was assessed by changes in the category boundary between /e/ and /ø/ in a vowel identification test, resulting from somatosensory stimulation providing facial skin deformation in the rearward direction (corresponding to the articulatory movement for /e/) applied together with the auditory input. Speech production performance was quantified by the acoustic distances between the average first, second, and third formants of /e/ and /ø/ utterances recorded in a separate test. RESULTS The category boundary between /e/ and /ø/ was significantly shifted towards /ø/ due to the somatosensory stimulation, which is consistent with previous research. The amplitude of the category boundary shift was significantly correlated with the acoustic distance between the mean second (and, marginally, third) formants of /e/ and /ø/ productions, with no correlation with the first formant distance. DISCUSSION Greater acoustic distances can be related to larger contrasts between the articulatory targets of vowels in speech production. These results suggest that the somatosensory effect in speech perception can be linked to speech production performance.
Affiliation(s)
- Monica Ashokumar
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Clément Guichet
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Jean-Luc Schwartz
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, USA
12. Fritzsch B, Elliott KL, Yamoah EN. Neurosensory development of the four brainstem-projecting sensory systems and their integration in the telencephalon. Front Neural Circuits 2022; 16:913480. [PMID: 36213204; PMCID: PMC9539932; DOI: 10.3389/fncir.2022.913480]
Abstract
Somatosensory, taste, vestibular, and auditory information is first processed in the brainstem. From the brainstem, the respective information is relayed to specific regions within the cortex, where these inputs are further processed and integrated with other sensory systems to provide a comprehensive sensory experience. We describe the organization, genetics, and neuronal connections of four sensory systems: the trigeminal, taste, vestibular, and auditory systems. The development of trigeminal fibers is comparable to that of many sensory systems, for they project mostly contralaterally from the brainstem or spinal cord to the telencephalon. Taste bud information is primarily projected ipsilaterally through the thalamus to reach the insula. The vestibular fibers develop bilateral connections that eventually reach multiple areas of the cortex to provide a complex map. The auditory fibers project in a tonotopic contour to the auditory cortex. The spatial and tonotopic organization of trigeminal and auditory neuron projections are distinct from those of the taste and vestibular systems. The individual sensory projections within the cortex provide multi-sensory integration in the telencephalon that depends on context-dependent tertiary connections to integrate other cortical sensory systems across the four modalities.
Affiliation(s)
- Bernd Fritzsch
- Department of Biology, The University of Iowa, Iowa City, IA, United States
- Department of Otolaryngology, The University of Iowa, Iowa City, IA, United States
- Karen L. Elliott
- Department of Biology, The University of Iowa, Iowa City, IA, United States
- Ebenezer N. Yamoah
- Department of Physiology and Cell Biology, School of Medicine, University of Nevada, Reno, Reno, NV, United States
13. Ito T, Ogane R. Repetitive Exposure to Orofacial Somatosensory Inputs in Speech Perceptual Training Modulates Vowel Categorization in Speech Perception. Front Psychol 2022; 13:839087. [PMID: 35558689; PMCID: PMC9088678; DOI: 10.3389/fpsyg.2022.839087]
Abstract
Orofacial somatosensory inputs may play a role in the link between speech perception and production. Given that speech motor learning, which involves paired auditory and somatosensory inputs, results in changes to speech perceptual representations, somatosensory inputs may also be involved in learning or adaptive processes of speech perception. Here we show that repetitive pairing of somatosensory inputs and sounds, such as occurs during speech production and motor learning, can also induce a change in speech perception. We examined whether the category boundary between /ε/ and /a/ was changed as a result of perceptual training with orofacial somatosensory inputs. The experiment consisted of three phases: Baseline, Training, and Aftereffect. In all phases, a vowel identification test was used to identify the perceptual boundary between /ε/ and /a/. In the Baseline and Aftereffect phases, an adaptive method based on the maximum-likelihood procedure was applied to detect the category boundary using a small number of trials. In the Training phase, we used the method of constant stimuli in order to expose participants to stimulus variants that covered the range between /ε/ and /a/ evenly. In this phase, to mimic the sensory input that accompanies speech production and learning, somatosensory stimulation was applied in the upward direction in an experimental group whenever the stimulus sound was presented. A control group followed the same training procedure in the absence of somatosensory stimulation. When we compared category boundaries prior to and following paired auditory-somatosensory training, the boundary for participants in the experimental group reliably changed in the direction of /ε/, indicating that the participants perceived /a/ more often than /ε/ as a consequence of training. In contrast, the control group did not show any change. Although only a limited number of participants were retested, the perceptual shift was reduced and almost eliminated one week later. Our data suggest that repetitive exposure to somatosensory inputs, in a task that simulates the sensory pairing which occurs during speech production, changes the perceptual system, and they support the idea that somatosensory inputs play a role in speech perceptual adaptation, probably contributing to the formation of sound representations for speech perception.
Affiliation(s)
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, CT, United States
- Rintaro Ogane
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, CT, United States
14. Trudeau-Fisette P, Arnaud L, Ménard L. Visual Influence on Auditory Perception of Vowels by French-Speaking Children and Adults. Front Psychol 2022; 13:740271. [PMID: 35282186; PMCID: PMC8913716; DOI: 10.3389/fpsyg.2022.740271]
Abstract
Audiovisual interaction in speech perception is well documented in adults. Despite the large body of evidence suggesting that children are also sensitive to visual input, very few empirical studies have been conducted. To further investigate whether visual inputs influence the auditory perception of phonemes in preschoolers in the same way as in adults, we conducted an audiovisual identification test. The auditory stimuli (an /e/-/ø/ continuum) were presented either in an auditory-only condition or simultaneously with a visual presentation of the articulation of the vowel /e/ or /ø/. The results suggest that, although all participants experienced visual influence on auditory perception, substantial individual differences exist in the 5- to 6-year-old group. While additional work is required to confirm this hypothesis, we suggest that the auditory and visual systems are still developing at that age and that multisensory phonological categorization of the rounding contrast took place only in children whose sensory systems and sensorimotor representations were mature.
Affiliation(s)
- Paméla Trudeau-Fisette
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada
- Laureline Arnaud
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada
- Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
- Lucie Ménard
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada
15. Ward R, Hennessey N, Barty E, Elliott C, Valentine J, Cantle Moore R. Clinical utilisation of the Infant Monitor of vocal Production (IMP) for early identification of communication impairment in young infants at-risk of cerebral palsy: a prospective cohort study. Dev Neurorehabil 2022; 25:101-114. [PMID: 34241555; DOI: 10.1080/17518423.2021.1942280]
Abstract
AIM To report prospective longitudinal data on the early vocalizations of infants identified as at risk of cerebral palsy (CP), for early identification of communication impairment. METHOD This case-control longitudinal prospective cohort study reports on the assessment of 36 infants, 18 identified as at risk of CP at the time of enrolment and 18 typically developing (TD) children, at three time points: 6 months, 9 months, and 12 months of age. Data were obtained through criterion- and norm-referenced assessments of vocalization behaviors. RESULTS Early vocal behaviors of infants identified as at risk of CP did not differ from those of their age-matched peers at 6 months of age; however, significant group differences emerged at 9 and 12 months, when pre-canonical and canonical babble typically emerge. Generalized linear mixed models analysis showed that the rate of development of early language ability and more complex speech-related vocal behaviors was slower for infants at risk of CP than for TD infants, with over 75% of infants with CP showing below-normal vocal production and impaired language by 12 months of age. INTERPRETATION Our data suggest that characteristics of infant vocalizations associated with pre-canonical and canonical babbling provide a strong evidence base for predicting communication outcomes in infants at risk of CP.
Affiliation(s)
- R Ward
- Kids Rehab, Perth Children's Hospital, Perth, Australia; School of Allied Health, Curtin University, Perth, Australia; Institute of Health Research, University of Notre Dame Australia, Fremantle, Australia
- N Hennessey
- School of Allied Health, Curtin University, Perth, Australia
- E Barty
- Kids Rehab, Perth Children's Hospital, Perth, Australia
- C Elliott
- Kids Rehab, Perth Children's Hospital, Perth, Australia; School of Allied Health, Curtin University, Perth, Australia; Telethon Kids Institute, Perth, Australia
- J Valentine
- Kids Rehab, Perth Children's Hospital, Perth, Australia
- R Cantle Moore
- NextSense Institute/Macquarie University, Sydney, New South Wales, Australia
16. Endo N, Ito T, Watanabe K, Nakazawa K. Enhancement of loudness discrimination acuity for self-generated sound is independent of musical experience. PLoS One 2021; 16:e0260859. [PMID: 34874970; PMCID: PMC8651135; DOI: 10.1371/journal.pone.0260859]
Abstract
Musicians tend to have better auditory and motor performance than non-musicians because of their extensive musical experience. In a previous study, we established that loudness discrimination acuity is enhanced when sound is produced by a precise force generation task. In this study, we compared the enhancement effect between experienced pianists and non-musicians. Without the force generation task, loudness discrimination acuity was better in pianists than in non-musicians. However, the force generation task enhanced loudness discrimination acuity similarly in both pianists and non-musicians. The reaction time was also reduced with the force generation task, but only in the non-musician group. The results suggest that the enhancement of loudness discrimination acuity with the precise force generation task is independent of musical experience and is, therefore, a fundamental function of auditory-motor interaction.
Affiliation(s)
- Nozomi Endo
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Takayuki Ito
- CNRS, Grenoble INP, GIPSA-Lab, Univ. Grenoble Alpes, Grenoble, France
- Haskins Laboratories, New Haven, Connecticut, United States of America
- Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Faculty of Arts, Design and Architecture, University of New South Wales, Sydney, Australia
- Kimitaka Nakazawa
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
17. Ito T, Ohashi H, Gracco VL. Somatosensory contribution to audio-visual speech processing. Cortex 2021; 143:195-204. [PMID: 34450567; DOI: 10.1016/j.cortex.2021.07.013]
Abstract
Recent studies have demonstrated that the auditory speech perception of a listener can be modulated by somatosensory input applied to the facial skin, suggesting that perception is an embodied process. However, speech perception is a multisensory process involving both the auditory and visual modalities. It is unknown whether and to what extent somatosensory stimulation of the facial skin modulates audio-visual speech perception. If speech perception is an embodied process, then somatosensory stimulation applied to the perceiver should influence audio-visual speech processing. Using the McGurk effect (the perceptual illusion that occurs when a sound is paired with the visual representation of a different sound, resulting in the perception of a third sound), we tested this prediction with a simple behavioral paradigm and at the neural level using event-related potentials (ERPs) and their cortical sources. We recorded ERPs from 64 scalp sites in response to congruent and incongruent audio-visual speech randomly presented with and without somatosensory stimulation associated with facial skin deformation. Subjects judged whether the production was /ba/ or not under all stimulus conditions. Subjects identified the sound as /ba/ in the congruent audio-visual condition but not in the incongruent condition, consistent with the McGurk effect. Concurrent somatosensory stimulation improved participants' ability to correctly identify the production as /ba/ relative to the non-somatosensory condition in both congruent and incongruent conditions. ERPs in response to the somatosensory stimulation in the incongruent condition reliably diverged 220 ms after stimulation onset. Cortical sources were estimated around the left anterior temporal gyrus, the right middle temporal gyrus, the right posterior superior temporal lobe, and the right occipital region. The results demonstrate a clear multisensory convergence of somatosensory and audio-visual processing in both behavioral and neural measures, consistent with the perspective that speech perception is a self-referenced, sensorimotor process.
Affiliation(s)
- Takayuki Ito
- University Grenoble-Alpes, CNRS, Grenoble-INP, GIPSA-Lab, Saint Martin D'heres Cedex, France; Haskins Laboratories, New Haven, CT, USA.
- Vincent L Gracco
- Haskins Laboratories, New Haven, CT, USA; McGill University, Montréal, QC, Canada
18. Bradshaw AR, Lametti DR, McGettigan C. The Role of Sensory Feedback in Developmental Stuttering: A Review. Neurobiol Lang (Camb) 2021; 2:308-334. [PMID: 37216145; PMCID: PMC10158644; DOI: 10.1162/nol_a_00036]
Abstract
Developmental stuttering is a neurodevelopmental disorder that severely affects speech fluency. Multiple lines of evidence point to a role of sensory feedback in the disorder; this has led to a number of theories proposing different disruptions to the use of sensory feedback during speech motor control in people who stutter. The purpose of this review was to bring together evidence from studies using altered auditory feedback paradigms with people who stutter, in order to evaluate the predictions of these different theories. This review highlights converging evidence for particular patterns of differences in the responses of people who stutter to feedback perturbations. The implications for hypotheses on the nature of the disruption to sensorimotor control of speech in the disorder are discussed, with reference to neurocomputational models of speech control (predominantly, the DIVA model; Guenther et al., 2006; Tourville et al., 2008). While some consistent patterns are emerging from this evidence, it is clear that more work in this area is needed with developmental samples in particular, in order to tease apart differences related to symptom onset from those related to compensatory strategies that develop with experience of stuttering.
Affiliation(s)
- Abigail R. Bradshaw
- Department of Speech, Hearing & Phonetic Sciences, University College London, UK
- Carolyn McGettigan
- Department of Speech, Hearing & Phonetic Sciences, University College London, UK
19. Gritsyk O, Kabakoff H, Li JJ, Ayala S, Shiller DM, McAllister T. Toward an index of oral somatosensory acuity: Comparison of three measures in adults. Perspect ASHA Spec Interest Groups 2021; 6:500-512. [PMID: 34746411; DOI: 10.1044/2021_persp-20-00218]
Abstract
PURPOSE Somatosensory targets and feedback are instrumental in ensuring accurate speech production. Individuals differ in their ability to access and respond to somatosensory information, but there is no established standard for measuring somatosensory acuity. The primary objective of this study was to determine which of three measures of somatosensory acuity had the strongest association with change in production accuracy in a vowel learning task, while controlling for the better-studied covariate of auditory acuity. METHOD Three somatosensory tasks were administered to 20 female college students: an oral stereognosis task, a bite block task with auditory masking, and a novel phonetic awareness task. Individual scores from the tasks were compared to their performance on a speech learning task in which participants were trained to produce novel Mandarin vowels with visual biofeedback. RESULTS Of the three tasks, only bite block adaptation with auditory masking was significantly associated with performance in the speech learning task. Participants with weaker somatosensory acuity tended to demonstrate larger increases in production accuracy over the course of training. CONCLUSIONS The bite block adaptation task measures proprioceptive awareness rather than tactile acuity and assesses somatosensory knowledge implicitly, with limited metalinguistic demands. This small-scale study provides preliminary evidence that these characteristics may be desirable for the assessment of oral somatosensory acuity, at least in the context of vowel learning tasks. Well-normed somatosensory measures could be of clinical utility by informing diagnosis/prognosis and treatment planning.
Affiliation(s)
- Olesia Gritsyk
- Department of Communicative Sciences and Disorders, New York University, New York, NY
- Heather Kabakoff
- Department of Communicative Sciences and Disorders, New York University, New York, NY
- Joanne Jingwen Li
- Department of Communicative Sciences and Disorders, New York University, New York, NY
- Samantha Ayala
- Department of Communicative Sciences and Disorders, New York University, New York, NY
- Douglas M Shiller
- École d'orthophonie et d'audiologie, Université de Montréal, Montreal, QC, Canada
- Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, New York, NY