1
Hidalgo C, Mohamed I, Zielinski C, Schön D. The effect of speech degradation on the ability to track and predict turn structure in conversation. Cortex 2022; 151:105-115. DOI: 10.1016/j.cortex.2022.01.020
2
Fischer HG, Schmidtbauer C, Seiffart A, Bucher M, Plontke SK, Rahne T. Contribution of ambient noise and hyperbaric atmosphere to olfactory and gustatory function. PLoS One 2020; 15:e0240537. PMID: 33048988; PMCID: PMC7553350; DOI: 10.1371/journal.pone.0240537
Abstract
Introduction: Taste and smell are important for occupational performance and quality of life. Previous studies suggested that the function of these senses might be influenced by ambient pressure and noise. Such knowledge would be helpful for divers, submarine crews, and mine workers. The present study aimed to investigate the effects of noise and hyperbaric pressure on olfactory and gustatory function.
Methods: This prospective controlled study included 16 healthy male divers. Inside a hyperbaric chamber, participants performed olfactory and gustatory function tests at sea-level pressure and at 2 bar. Olfactory threshold, odorant discrimination, and odorant identification were measured with validated "Sniffin' Sticks". Taste identification and gustatory threshold scores were examined with validated filter-paper strips. Tests were performed under two conditions: noise reduction (silence) and white-noise stimulation presented at a 70 dB sound pressure level.
Results: Neither normobaric nor hyperbaric ambient pressure significantly affected olfactory or gustatory function, and noise had no relevant impact on taste or odor sensation. The odor identification score was not influenced by hyperbaric conditions, and the odor threshold score was not influenced by ambient noise under either barometric condition. The only taste modality affected by hyperbaric conditions was sensitivity to salty taste, and this effect was not statistically significant.
Conclusion: Hyperbaric and noisy environments had no influence on gustatory or olfactory function. From a practical point of view, the influence of pressure in moderately hyperbaric occupations should be negligible.
Affiliation(s)
- Hans-Georg Fischer
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Department of Otorhinolaryngology (ENT), Head and Neck Surgery, Military Hospital Hamburg, Hamburg, Germany
- Christopher Schmidtbauer
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Annett Seiffart
- University Clinic for Anesthesiology and Operative Intensive Care, University Hospital Halle (Saale), Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Michael Bucher
- University Clinic for Anesthesiology and Operative Intensive Care, University Hospital Halle (Saale), Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Stefan K. Plontke
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Torsten Rahne
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
3
Lewendon J, Mortimore L, Egan C. The Phonological Mapping (Mismatch) Negativity: History, Inconsistency, and Future Direction. Front Psychol 2020; 11:1967. PMID: 32982833; PMCID: PMC7477318; DOI: 10.3389/fpsyg.2020.01967
Affiliation(s)
- Jennifer Lewendon
- School of Languages, Literatures and Linguistics, Prifysgol Bangor University, Bangor, United Kingdom
- Laurie Mortimore
- School of Psychology, Prifysgol Bangor University, Bangor, United Kingdom
- Ciara Egan
- School of Psychology, Prifysgol Bangor University, Bangor, United Kingdom; Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
4
Thézé R, Gadiri MA, Albert L, Provost A, Giraud AL, Mégevand P. Animated virtual characters to explore audio-visual speech in controlled and naturalistic environments. Sci Rep 2020; 10:15540. PMID: 32968127; PMCID: PMC7511320; DOI: 10.1038/s41598-020-72375-y
Abstract
Natural speech is processed in the brain as a mixture of auditory and visual features. An example of the importance of visual speech is the McGurk effect and related perceptual illusions that result from mismatching auditory and visual syllables. Although the McGurk effect has widely been applied to the exploration of audio-visual speech processing, it relies on isolated syllables, which severely limits the conclusions that can be drawn from the paradigm. In addition, the extreme variability and uneven quality of the stimuli usually employed prevent comparability across studies. To overcome these limitations, we present an innovative methodology using 3D virtual characters with realistic lip movements synchronized with computer-synthesized speech. We used commercially accessible and affordable tools to facilitate reproducibility and comparability, and the set-up was validated on 24 participants performing a perception task. Within complete and meaningful French sentences, we paired a labiodental fricative viseme (i.e., /v/) with a bilabial occlusive phoneme (i.e., /b/). This audiovisual mismatch is known to induce the illusion of hearing /v/ in a proportion of trials. We tested the rate of the illusion while varying the magnitude of background noise and audiovisual lag. Overall, the effect was observed in 40% of trials. The proportion rose to about 50% with added background noise and up to 66% when controlling for phonetic features. Our results conclusively demonstrate that computer-generated speech stimuli are a judicious choice, and that they can supplement natural speech with higher control over stimulus timing and content.
Affiliation(s)
- Raphaël Thézé
- Department of Basic Neurosciences, University of Geneva, Campus Biotech, Chemin des Mines 9, 1202, Geneva, Switzerland
- Mehdi Ali Gadiri
- Department of Basic Neurosciences, University of Geneva, Campus Biotech, Chemin des Mines 9, 1202 Geneva, Switzerland
- Louis Albert
- Human Neuroscience Platform, Fondation Campus Biotech Geneva, Geneva, Switzerland
- Antoine Provost
- Human Neuroscience Platform, Fondation Campus Biotech Geneva, Geneva, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, University of Geneva, Campus Biotech, Chemin des Mines 9, 1202 Geneva, Switzerland
- Pierre Mégevand
- Department of Basic Neurosciences, University of Geneva, Campus Biotech, Chemin des Mines 9, 1202 Geneva, Switzerland; Division of Neurology, Geneva University Hospitals, Geneva, Switzerland
5
Wang F, Karipidis II, Pleisch G, Fraga-González G, Brem S. Development of Print-Speech Integration in the Brain of Beginning Readers With Varying Reading Skills. Front Hum Neurosci 2020; 14:289. PMID: 32922271; PMCID: PMC7457077; DOI: 10.3389/fnhum.2020.00289
Abstract
Learning print-speech sound correspondences is a crucial step at the beginning of reading acquisition, and it is often impaired in children with developmental dyslexia. Despite increasing insight into audiovisual language processing, it remains largely unclear how integration of print and speech develops at the neural level during initial learning in the first years of schooling. To investigate this development, 32 healthy, German-speaking children at varying risk for developmental dyslexia (17 typical readers and 15 poor readers) participated in a longitudinal study including behavioral and fMRI measurements in first (T1) and second (T2) grade. We used an implicit audiovisual (AV) non-word target detection task aimed at characterizing differential activation to congruent (AVc) and incongruent (AVi) audiovisual non-word pairs. While children's brain activation did not differ between AVc and AVi pairs in first grade, an incongruency effect (AVi > AVc) emerged in bilateral inferior temporal and superior frontal gyri in second grade. Of note, improvements in pseudoword reading performance over time were associated with the development of the congruency effect (AVc > AVi) in the left posterior superior temporal gyrus (STG) from first to second grade. Finally, functional connectivity analyses indicated divergent development and reading-expertise-dependent coupling from the left occipito-temporal and superior temporal cortex to regions of the default mode (precuneus) and fronto-temporal language networks. Our results suggest that audiovisual integration areas, as well as their functional coupling to other language areas and to areas of the default mode network, develop differently in poor vs. typical readers at varying familial risk for dyslexia.
Affiliation(s)
- Fang Wang
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland; Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong
- Iliana I Karipidis
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland; Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, School of Medicine, Stanford University, Stanford, CA, United States
- Georgette Pleisch
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland
- Gorka Fraga-González
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland
- Silvia Brem
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zürich, Zurich, Switzerland
6
Plumridge JMA, Barham MP, Foley DL, Ware AT, Clark GM, Albein-Urios N, Hayden MJ, Lum JAG. The Effect of Visual Articulatory Information on the Neural Correlates of Non-native Speech Sound Discrimination. Front Hum Neurosci 2020; 14:25. PMID: 32116609; PMCID: PMC7019039; DOI: 10.3389/fnhum.2020.00025
Abstract
Behavioral studies have shown that the ability to discriminate between non-native speech sounds improves after seeing how the sounds are articulated. This study examined the influence of visual articulatory information on the neural correlates of non-native speech sound discrimination. English speakers’ discrimination of the Hindi dental and retroflex sounds was measured using the mismatch negativity (MMN) event-related potential, before and after they completed one of three 8-min training conditions. In an audio-visual speech training condition (n = 14), each sound was presented with its corresponding visual articulation. In one control condition (n = 14), both sounds were presented with the same visual articulation, resulting in one congruent and one incongruent audio-visual pairing. In another control condition (n = 14), both sounds were presented with the same image of a still face. The control conditions aimed to rule out the possibility that the MMN is influenced by non-specific audio-visual pairings, or by general exposure to the dental and retroflex sounds over the course of the study. The results showed that audio-visual speech training reduced the latency of the MMN but did not affect MMN amplitude. No change in MMN amplitude or latency was observed for the two control conditions. The pattern of results suggests that a relatively short audio-visual speech training session (i.e., 8 min) may increase the speed with which the brain processes non-native speech sound contrasts. The absence of a training effect on MMN amplitude suggests a single session of audio-visual speech training does not lead to the formation of more discrete memory traces for non-native speech sounds. Longer and/or multiple sessions might be needed to influence the MMN amplitude.
Affiliation(s)
- James M A Plumridge
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Michael P Barham
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Denise L Foley
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Anna T Ware
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Gillian M Clark
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Natalia Albein-Urios
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Melissa J Hayden
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Jarrad A G Lum
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
7
Proverbio AM, Camporeale E, Brusa A. Multimodal Recognition of Emotions in Music and Facial Expressions. Front Hum Neurosci 2020; 14:32. PMID: 32116613; PMCID: PMC7027335; DOI: 10.3389/fnhum.2020.00032
Abstract
The aim of the study was to investigate the neural processing of congruent vs. incongruent affective audiovisual information (facial expressions and music) by means of ERP (event-related potential) recordings. Stimuli were 200 infant faces displaying happiness, relaxation, sadness, or distress and 32 piano musical pieces conveying the same emotional states (as specifically assessed). Music and faces were presented simultaneously, paired so that in half of the cases they were emotionally congruent and in the other half incongruent. Twenty subjects were told to pay attention and respond to infrequent targets (adult neutral faces) while their EEG was recorded from 128 channels. The face-related N170 (160-180 ms) component was the earliest response affected by the emotional content of faces (particularly by distress), while visual P300 (250-450 ms) and auditory N400 (350-550 ms) responses were specifically modulated by the emotional content of both facial expressions and musical pieces. Face/music emotional incongruence elicited a wide N400 negativity, indicating the detection of a mismatch in the expressed emotion. A swLORETA inverse solution applied to the N400 (difference wave: incongruent minus congruent) showed the crucial role of the inferior and superior temporal gyri in the multimodal representation of emotional information extracted from faces and music. Furthermore, the prefrontal cortex (superior and medial, BA 10) was also strongly active, possibly supporting working memory. The data hint at a common system for representing emotional information derived from social cognition and music processing, including the uncus and cuneus.
8
Fjaeldstad AW, Nørgaard HJ, Fernandes HM. The Impact of Acoustic fMRI-Noise on Olfactory Sensitivity and Perception. Neuroscience 2019; 406:262-267. PMID: 30904663; DOI: 10.1016/j.neuroscience.2019.03.028
Abstract
Sensory perception is neither static nor simple. The senses influence each other during multisensory stimulation, and these interactions can be both suppressive and super-additive. Because most knowledge of human olfactory perception is derived from functional neuroimaging studies, in particular fMRI, olfactory perception has systematically been investigated in environments with concurrent loud sounds. To date, the confounding effects of acoustic fMRI-noise during scanning on olfactory perception have not been investigated. In this study, we investigate how the acoustic noise produced by the rapid switching of MR gradient coils affects olfactory perception. For this, 50 subjects were tested in both a silent setting and an fMRI-noise setting, in randomised order. We found that fMRI-related acoustic noise had a significant negative effect on the olfactory detection threshold score. No significant effects were identified on olfactory discrimination, identification, identification certainty, hedonic rating, or intensity rating.
Affiliation(s)
- Alexander Wieck Fjaeldstad
- Flavour Institute, Aarhus University, Noerrebrogade 44, 10G, 8000 Aarhus, Denmark; Flavour Clinic, ENT Department, Holstebro Regional Hospital, Laegaardsvej 12, 7500, Holstebro, Denmark; Department of Psychiatry, Warneford Hospital, University of Oxford, OX3 7JX Oxford, United Kingdom.
- Hans Jacob Nørgaard
- Flavour Institute, Aarhus University, Noerrebrogade 44, 10G, 8000 Aarhus, Denmark
- Henrique Miguel Fernandes
- Flavour Institute, Aarhus University, Noerrebrogade 44, 10G, 8000 Aarhus, Denmark; Department of Psychiatry, Warneford Hospital, University of Oxford, OX3 7JX Oxford, United Kingdom; Center for Music in the Brain (MIB), Aarhus University, Noerrebrogade 44, 10G, 8000 Aarhus, Denmark