1. Hoshino H, Shiga T, Mori Y, Nozaki M, Kanno K, Osakabe Y, Ochiai H, Wada T, Hikita M, Itagaki S, Miura I, Yabe H. Effect of the Temporal Window of Integration of Speech Sound on Mismatch Negativity. Clin EEG Neurosci 2023; 54:620-627. [PMID: 35410509] [DOI: 10.1177/15500594221093607]
Abstract
Speech-sound stimuli have a complex structure, and it is unclear how the brain processes them. An event-related potential (ERP) known as mismatch negativity (MMN) is elicited when the brain detects a rare sound. In this study, MMNs were measured in response to an omitted segment of a complex sound consisting of a Japanese vowel. The results indicated that, during left-ear stimulation, the latency from onset in the right hemisphere was significantly shorter than that at the frontal midline and in the left hemisphere. Additionally, the latency from omission was longer for stimuli omitted in the latter part of the temporal window of integration (TWI) than for stimuli omitted in the first part of the TWI. The mean peak amplitude was higher in the right hemisphere than at the frontal midline and in the left hemisphere in response to left-ear stimulation. In conclusion, these results suggest that it would be incorrect to assume that the stimuli have strictly the characteristics of speech sounds. However, the interaction effect on the latencies from omission was not significant, suggesting that the detection time for deviance may not be related to the stimulated ear. The effect of the type of deviant stimulus on latency was significant, because detection of the deviants was delayed when the deviation occurred in the latter part of the TWI, regardless of the stimulated ear.
Affiliation(s)
- Hiroshi Hoshino, Tetsuya Shiga, Yuhei Mori, Michinari Nozaki, Kazuko Kanno, Yusuke Osakabe, Haruka Ochiai, Tomohiro Wada, Masayuki Hikita, Shuntaro Itagaki, Itaru Miura, Hirooki Yabe
- Department of Neuropsychiatry, Fukushima Medical University, Hikarigaoka, Fukushima-city, Fukushima, 960-1295, Japan
2. Tsunada J, Cohen YE. Neural mechanisms of auditory categorization: from across brain areas to within local microcircuits. Front Neurosci 2014; 8:161. [PMID: 24987324] [PMCID: PMC4060728] [DOI: 10.3389/fnins.2014.00161]
Abstract
Categorization enables listeners to efficiently encode and respond to auditory stimuli. Behavioral evidence for auditory categorization has been well documented across a broad range of human and non-human animal species. Moreover, neural correlates of auditory categorization have been documented in a variety of different brain regions in the ventral auditory pathway, which is thought to underlie auditory-object processing and auditory perception. Here, we review and discuss how neural representations of auditory categories are transformed across different scales of neural organization in the ventral auditory pathway: from across different brain areas to within local microcircuits. We propose different neural transformations across different scales of neural organization in auditory categorization. Along the ascending auditory system in the ventral pathway, there is a progression in the encoding of categories from simple acoustic categories to categories for abstract information. On the other hand, in local microcircuits, different classes of neurons differentially compute categorical information.
Affiliation(s)
- Joji Tsunada
- Department of Otorhinolaryngology-Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Yale E. Cohen
- Department of Otorhinolaryngology-Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
3.
Abstract
Human vocalizations are sounds made exclusively by the human vocal tract. Among vocalizations such as laughs or screams, speech is the most important. Speech is the primary medium of the supremely human symbolic communication system called language. One of the functions of a voice, perhaps the main one, is to realize language by conveying some of the speaker's thoughts in linguistic form. Speech is language made audible. Moreover, when phoneticians compare and describe voices, they usually do so with respect to linguistic units, especially speech sounds such as vowels or consonants. It is therefore necessary to understand the structure and nature of speech sounds and how they are described. To understand and evaluate speech, it is important to have at least a basic understanding of the science of speech acoustics: how the acoustics of speech are produced, how they are described, and how differences, both between and within speakers, arise in the acoustic output. One of the aims of this article is to facilitate this understanding.
Affiliation(s)
- Manjul Tiwari
- Department of Oral Pathology and Microbiology, School of Dental Sciences, Sharda University, Greater Noida, Uttar Pradesh, India
4. Ahmed M, Mällo T, Leppänen PHT, Hämäläinen J, Ayräväinen L, Ruusuvirta T, Astikainen P. Mismatch brain response to speech sound changes in rats. Front Psychol 2011; 2:283. [PMID: 22059082] [PMCID: PMC3203552] [DOI: 10.3389/fpsyg.2011.00283]
Abstract
Understanding speech is based on neural representations of individual speech sounds. In humans, such representations are capable of supporting an automatic and memory-based mechanism for auditory change detection, as reflected by the mismatch negativity (MMN) of event-related potentials. There are also findings of neural representations of speech sounds in animals, but it is not known whether these representations can support a change detection mechanism analogous to that underlying the MMN in humans. To this end, we presented synthesized spoken syllables to urethane-anesthetized rats while local field potentials were epidurally recorded above their primary auditory cortex. In an oddball condition, a deviant stimulus /ga/ or /ba/ (probability 1:12 for each) was rarely and randomly interspersed among the frequently presented standard stimulus /da/ (probability 10:12). In an equiprobable condition, 12 syllables, including /da/, /ga/, and /ba/, were presented in a random order (probability 1:12 for each). We found evoked responses of higher amplitude to the deviant /ba/, albeit not to /ga/, relative to the standard /da/ in the oddball condition. Furthermore, the responses to /ba/ were higher in amplitude in the oddball condition than in the equiprobable condition. The findings suggest that the anesthetized rat brain can form representations of human speech sounds, and that these representations can support a memory-based change detection mechanism analogous to that underlying the MMN in humans. Our findings show a striking parallel in speech processing between humans and rodents and may thus pave the way for feasible animal models of memory-based change detection.
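The oddball design described in this abstract (each deviant at probability 1:12, the standard /da/ filling the remaining 10:12) can be sketched as a simple stimulus-sequence generator. This is an illustrative reconstruction only, not the authors' actual stimulation code, and the function name and parameters are hypothetical:

```python
import random

def oddball_sequence(n_trials, standard="da", deviants=("ga", "ba"),
                     deviant_prob=1 / 12, rng=None):
    """Generate an oddball stimulus sequence.

    Each deviant syllable occurs with probability `deviant_prob`
    (1/12 each here); the standard fills the rest (10/12).
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    seq = []
    for _ in range(n_trials):
        r = rng.random()
        if r < deviant_prob:
            seq.append(deviants[0])
        elif r < 2 * deviant_prob:
            seq.append(deviants[1])
        else:
            seq.append(standard)
    return seq

seq = oddball_sequence(1200)
```

A real paradigm would typically add constraints (e.g. no two consecutive deviants), which this minimal sketch omits.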
Affiliation(s)
- Mustak Ahmed
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
5. Tomblin JB, Peng SC, Spencer LJ, Lu N. Long-term trajectories of the development of speech sound production in pediatric cochlear implant recipients. J Speech Lang Hear Res 2008; 51:1353-68. [PMID: 18695018] [PMCID: PMC3209961] [DOI: 10.1044/1092-4388(2008/07-0083)]
Abstract
PURPOSE: This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience.
METHOD: Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models.
RESULTS: As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. Accuracy of speech sound production after 4 years of device experience reasonably predicts speech sound production after 5-10 years of device experience.
CONCLUSIONS: The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement; however, good predictions could be made after 4 years of device use.
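The piecewise regression described in this abstract — a slope that is positive up to a breakpoint around 6 years of device experience and flat afterwards — can be sketched as ordinary least squares on a hinge basis. The knot placement and data below are synthetic and purely illustrative, not the study's actual model or measurements:

```python
import numpy as np

def fit_piecewise(years, accuracy, knot=6.0):
    """Fit accuracy = b0 + b1*years + b2*max(years - knot, 0) by OLS.

    b1 is the slope before the knot; b1 + b2 is the slope after it.
    """
    years = np.asarray(years, dtype=float)
    hinge = np.maximum(years - knot, 0.0)  # hinge term activates past the knot
    X = np.column_stack([np.ones_like(years), years, hinge])
    coef, *_ = np.linalg.lstsq(X, np.asarray(accuracy, dtype=float), rcond=None)
    return coef  # (intercept, pre-knot slope, slope change at knot)

# Synthetic trajectory: steady improvement for 6 years, then a plateau.
years = np.arange(1, 11, dtype=float)
acc = np.where(years <= 6, 30 + 8 * years, 30 + 8 * 6)
b0, b1, b2 = fit_piecewise(years, acc)
```

On these synthetic data the fit recovers a positive pre-knot slope and a post-knot slope of zero (b1 + b2 ≈ 0), mirroring the plateau pattern the study reports.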