1. Haiduk F, Zatorre RJ, Benjamin L, Morillon B, Albouy P. Spectrotemporal cues and attention jointly modulate fMRI network topology for sentence and melody perception. Sci Rep 2024;14:5501. PMID: 38448636; PMCID: PMC10917817; DOI: 10.1038/s41598-024-56139-6.
Abstract
Speech and music are two fundamental modes of human communication. Lateralisation of key processes underlying their perception has been related both to the distinct sensitivity to low-level spectrotemporal acoustic features and to top-down attention. However, the interplay between bottom-up and top-down processes needs to be clarified. In the present study, we investigated the contribution of acoustics and attention to melodies or sentences to lateralisation in fMRI functional network topology. We used sung speech stimuli selectively filtered in temporal or spectral modulation domains with crossed and balanced verbal and melodic content. Perception of speech decreased with degradation of temporal information, whereas perception of melodies decreased with spectral degradation. Applying graph theoretical metrics on fMRI connectivity matrices, we found that local clustering, reflecting functional specialisation, linearly increased when spectral or temporal cues crucial for the task goal were incrementally degraded. These effects occurred in a bilateral fronto-temporo-parietal network for processing temporally degraded sentences and in right auditory regions for processing spectrally degraded melodies. In contrast, global topology remained stable across conditions. These findings suggest that lateralisation for speech and music partially depends on an interplay of acoustic cues and task goals under increased attentional demands.
Affiliation(s)
- Felix Haiduk
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria.
- Department of General Psychology, University of Padua, Padua, Italy.
- Robert J Zatorre
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - CRBLM, Montreal, QC, Canada
- Lucas Benjamin
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91191, Gif/Yvette, France
- Benjamin Morillon
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Philippe Albouy
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - CRBLM, Montreal, QC, Canada
- CERVO Brain Research Centre, School of Psychology, Laval University, Quebec, QC, Canada
2. Mullin HAC, Norkey EA, Kodwani A, Vitevitch MS, Castro N. Does age affect perception of the Speech-to-Song Illusion? PLoS One 2021;16:e0250042. PMID: 33872326; PMCID: PMC8055000; DOI: 10.1371/journal.pone.0250042.
Abstract
The Speech-to-Song Illusion is an auditory illusion that occurs when a spoken phrase is presented repeatedly. After several presentations, listeners report that the phrase seems to be sung rather than spoken. Previous work [1] indicates that the mechanisms (priming, activation, and satiation) found in the language processing model Node Structure Theory (NST) may account for the Speech-to-Song Illusion. NST also accounts for other language-related phenomena, including the increased incidence in older adults of the tip-of-the-tongue state (knowing a word but being unable to retrieve it). Based on the mechanism in NST used to account for this age-related increase, we predicted that older adults may be less likely to experience the Speech-to-Song Illusion than younger adults. Adults across a wide range of ages heard a stimulus known to evoke the Speech-to-Song Illusion. They were then asked to indicate whether they experienced the illusion (Study 1), to respond on a 5-point song-likeness rating scale (Study 2), or to indicate when the percept changed from speech to song (Study 3). The results of these studies suggest that adult listeners experience the illusion with similar frequency and strength, and after the same number of repetitions, regardless of age.
Affiliation(s)
- Evan A. Norkey
- University of Kansas, Lawrence, KS, United States of America
- Anisha Kodwani
- University of Kansas, Lawrence, KS, United States of America
- Nichol Castro
- University at Buffalo, Buffalo, NY, United States of America
3. Musical Mental Imagery as Suspected Migraine Aura in Patient without Psychiatric Disease. Can J Neurol Sci 2020;47:278-279. DOI: 10.1017/cjn.2020.12.
4. Tsai CG, Li CW. Is It Speech or Song? Effect of Melody Priming on Pitch Perception of Modified Mandarin Speech. Brain Sci 2019;9:286. PMID: 31652522; PMCID: PMC6826721; DOI: 10.3390/brainsci9100286.
Abstract
Tonal languages make use of pitch variation to distinguish lexical semantics, and their melodic richness seems comparable to that of music. The present study investigated a novel priming effect of melody on the pitch processing of Mandarin speech. When a spoken Mandarin utterance is preceded by a musical melody that mimics the melody of the utterance, the listener is likely to perceive this utterance as song. We used functional magnetic resonance imaging to examine the neural substrates of this speech-to-song transformation. Pitch contours of spoken utterances were modified so that these utterances could be perceived as either speech or song. When modified speech (target) was preceded by a musical melody (prime) that mimicked the speech melody, a task of judging the melodic similarity between the target and prime was associated with increased activity in the inferior frontal gyrus (IFG) and superior/middle temporal gyrus (STG/MTG) during target perception. We suggest that the pars triangularis of the right IFG may allocate attentional resources to the multi-modal processing of speech melody, and the STG/MTG may integrate the phonological and musical (melodic) information of this stimulus. These results are discussed in relation to subvocal rehearsal, a speech-to-song illusion, and song perception.
Affiliation(s)
- Chen-Gia Tsai
- Graduate Institute of Musicology, National Taiwan University, Taipei 106, Taiwan.
- Neurobiology and Cognitive Science Center, National Taiwan University, Taipei 106, Taiwan.
- Chia-Wei Li
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan.
5. Gennari SP, Millman RE, Hymers M, Mattys SL. Anterior paracingulate and cingulate cortex mediates the effects of cognitive load on speech sound discrimination. Neuroimage 2018;178:735-743. DOI: 10.1016/j.neuroimage.2018.06.035.
6. Castro N, Mendoza JM, Tampke EC, Vitevitch MS. An account of the Speech-to-Song Illusion using Node Structure Theory. PLoS One 2018;13:e0198656. PMID: 29883451; PMCID: PMC5993277; DOI: 10.1371/journal.pone.0198656.
Abstract
In the Speech-to-Song Illusion, repetition of a spoken phrase results in it being perceived as if it were sung. Although a number of previous studies have examined which characteristics of the stimulus produce the illusion, there has so far been no description of the cognitive mechanism that underlies it. We suggest that the processes found in Node Structure Theory, which are used to explain normal language processing as well as other auditory illusions, might also account for the Speech-to-Song Illusion. In six experiments we tested whether satiation of lexical nodes, combined with continued priming of syllable nodes, may lead to the Speech-to-Song Illusion. The results of these experiments provide evidence for the role of priming, activation, and satiation, as described in Node Structure Theory, in explaining the Speech-to-Song Illusion.
Affiliation(s)
- Nichol Castro
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
- Joshua M. Mendoza
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
- Elizabeth C. Tampke
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
- Michael S. Vitevitch
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
7. Hallam GP, Thompson HE, Hymers M, Millman RE, Rodd JM, Lambon Ralph MA, Smallwood J, Jefferies E. Task-based and resting-state fMRI reveal compensatory network changes following damage to left inferior frontal gyrus. Cortex 2018;99:150-165. DOI: 10.1016/j.cortex.2017.10.004.
8. Graber E, Simchy-Gross R, Margulis EH. Musical and linguistic listening modes in the speech-to-song illusion bias timing perception and absolute pitch memory. J Acoust Soc Am 2017;142:3593. PMID: 29289094; DOI: 10.1121/1.5016806.
Abstract
The speech-to-song (STS) illusion is a phenomenon in which some spoken utterances perceptually transform to song after repetition [Deutsch, Henthorn, and Lapidis (2011). J. Acoust. Soc. Am. 129, 2245-2252]. Tierney, Dick, Deutsch, and Sereno [(2013). Cereb. Cortex. 23, 249-254] developed a set of stimuli where half tend to transform to perceived song with repetition and half do not. Those that transform and those that do not can be understood to induce a musical or linguistic mode of listening, respectively. By comparing performance on perceptual tasks related to transforming and non-transforming utterances, the current study examines whether the musical mode of listening entails higher sensitivity to temporal regularity and better absolute pitch (AP) memory compared to the linguistic mode. In experiment 1, inter-stimulus intervals within STS trials were steady, slightly variable, or highly variable. Participants reported how temporally regular utterance entrances were. In experiment 2, participants performed an AP memory task after a blocked STS exposure phase. Utterances identically matching those used in the exposure phase were targets among transposed distractors in the test phase. Results indicate that listeners exhibit heightened awareness of temporal manipulations but reduced awareness of AP manipulations to transforming utterances. This methodology establishes a framework for implicitly differentiating musical from linguistic perception.
Affiliation(s)
- Emily Graber
- Center for Computer Research in Music and Acoustics, Stanford University, 660 Lomita Court, Stanford, California 94305, USA
- Rhimmon Simchy-Gross
- Department of Psychological Science, University of Arkansas, 216 Memorial Hall, Fayetteville, Arkansas 72701, USA