1. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024; 199:108905. PMID: 38740179. DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research has shown that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this also extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongations can roughly be understood as a musical analogue to linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how an adjective implies the presence of a noun in 'I like spicy … '). Source-reconstructed MEG data of sixty-five participants listening to the musical piece were then analysed. We used Bayesian mixed-effects models to predict the theta envelope in the brain, using the number of open prolongation and preparation dependencies as predictors whilst controlling for the audio envelope. We observed that prolongation and preparation both carry independent and distinguishable predictive value for theta-band fluctuation in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. The results show that the predictions of precisely formalised music-theoretical models are reflected in the brain activity of listeners, which furthers our understanding of the perception and cognition of musical structure.
2. Beyond the ears: A review exploring the interconnected brain behind the hierarchical memory of music. Psychon Bull Rev 2024; 31:507-530. PMID: 37723336. DOI: 10.3758/s13423-023-02376-1.
Abstract
Music is a ubiquitous element of daily life. Understanding how music memory is represented and expressed in the brain is key to understanding how music can influence everyday human cognition. The current music-memory literature is built on data from very heterogeneous tasks for measuring memory, and the neural correlates appear to differ depending on the form of memory function targeted. Such heterogeneity leaves many exceptions and conflicts in the data underexplained (e.g., hippocampal involvement in music memory is debated). This review provides an overview of existing neuroimaging results from music-memory studies and concludes that, although music is a special class of event in our lives, the memory systems behind it do in fact share neural mechanisms with memories from other modalities. We suggest that dividing music memory into different levels of a hierarchy (a structural level and a semantic level) helps to explain the overlap and divergence in the neural networks involved. This is grounded in the fact that memorizing a piece of music recruits brain clusters that separately support functions including, but not limited to, syntax storage and retrieval, temporal processing, prediction-versus-reality comparison, stimulus feature integration, personal memory associations, and emotion perception. The cross-talk between frontal-parietal music structural processing centers and the subcortical emotion and context encoding areas explains why music is not only so easily memorable but can also serve as strong contextual information for encoding and retrieving nonmusic information in our lives.
3
|
The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. [PMID: 37005063 PMCID: PMC10505454 DOI: 10.1093/cercor/bhad087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 01/02/2023] [Accepted: 01/03/2023] [Indexed: 04/04/2023] Open
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
4. The Association between Music and Language in Children: A State-of-the-Art Review. Children (Basel) 2023; 10:801. PMID: 37238349. DOI: 10.3390/children10050801.
Abstract
Music and language are two complex systems that specifically characterize the human communication toolkit. There has been a heated debate in the literature on whether music was an evolutionary precursor to language or a byproduct of cognitive faculties that developed to support language. The present review of the existing literature on the relationship between music and language highlights that music plays a critical role in language development in early life. Our findings revealed that musical properties, such as rhythm and melody, can affect language acquisition in semantic processing and grammar, including syntactic aspects and phonological awareness. Overall, the results of the current review shed further light on the complex mechanisms underlying the music-language link, highlighting that music plays a central role in language development from the earliest stages of life.
5. Mental representations of speech and musical pitch contours reveal a diversity of profiles in autism spectrum disorder. Autism 2022; 27:629-646. PMID: 35848413. PMCID: PMC10074762. DOI: 10.1177/13623613221111207.
Abstract
LAY ABSTRACT As a key auditory attribute of sounds, pitch is ubiquitous in our everyday listening experience involving language, music and environmental sounds. Given its critical role in auditory processing related to communication, numerous studies have investigated pitch processing in autism spectrum disorder. However, the findings have been mixed, reporting either enhanced, typical or impaired performance among autistic individuals. By investigating top-down comparisons of internal mental representations of pitch contours in speech and music, this study shows for the first time that, while autistic individuals exhibit diverse profiles of pitch processing compared to non-autistic individuals, their mental representations of pitch contours are typical across domains. These findings suggest that pitch-processing mechanisms are shared across domains in autism spectrum disorder and provide theoretical implications for using music to improve speech for those autistic individuals who have language problems.
6. Musicality in human vocal communication: an evolutionary perspective. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200391. PMID: 34775823. PMCID: PMC8591388. DOI: 10.1098/rstb.2020.0391.
Abstract
Studies show that specific vocal modulations, akin to those of infant-directed speech (IDS) and perhaps music, play a role in communicating intentions and mental states during human social interaction. Based on this, we propose a model for the evolution of musicality, the capacity to process musical information, in relation to human vocal communication. We suggest that a complex social environment, with strong social bonds, promoted the appearance of musicality-related abilities. These social bonds were not limited to those between offspring and mothers or other carers, although these may have been especially influential in view of the altriciality of human infants. The model can be further tested in other species by comparing levels of sociality and complexity of vocal communication. By integrating several theories, our model presents a radically different view of musicality: not one limited to specifically musical scenarios, but one in which this capacity originally evolved to aid parent-infant communication and bonding, and even today plays a role not only in music but also in IDS, as well as in some adult-directed speech contexts. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
7. Dissociation of Connectivity for Syntactic Irregularity and Perceptual Ambiguity in Musical Chord Stimuli. Front Neurosci 2021; 15:693629. PMID: 34526877. PMCID: PMC8435864. DOI: 10.3389/fnins.2021.693629.
Abstract
Musical syntax has been studied mainly in terms of “syntactic irregularity” in harmonic/melodic sequences. However, “perceptual ambiguity”, referring to uncertainty in the judgment or classification of presented stimuli, can additionally be involved; our musical stimuli probed this using three different chord sequences. The present study addresses how “syntactic irregularity” and “perceptual ambiguity” in musical syntax are dissociated, in terms of effective connectivity between the bilateral inferior frontal gyri (IFGs) and superior temporal gyri (STGs) estimated by linearized time-delayed mutual information (LTDMI). The three conditions were five-chord sequences ending in dominant to tonic, dominant to submediant, and dominant to supertonic. The dominant to supertonic is the most irregular, compared with the regular dominant to tonic. The dominant to submediant, the less irregular condition, is the most ambiguous. In the LTDMI results, connectivity from the right to the left IFG (IFG-LTDMI) was enhanced for the most irregular condition, whereas connectivity from the right to the left STG (STG-LTDMI) was enhanced for the most ambiguous condition (p = 0.024 for IFG-LTDMI, p < 0.001 for STG-LTDMI, false discovery rate (FDR) corrected). The correct rate was negatively correlated with STG-LTDMI, further reflecting perceptual ambiguity (p = 0.026). We found for the first time that syntactic irregularity and perceptual ambiguity coexist in chord stimuli testing musical syntax and that the two processes are dissociated in interhemispheric connectivity in the IFG and STG, respectively.
8. Perception and Production of Statement-Question Intonation in Autism Spectrum Disorder: A Developmental Investigation. J Autism Dev Disord 2021; 52:3456-3472. PMID: 34355295. PMCID: PMC9296411. DOI: 10.1007/s10803-021-05220-4.
Abstract
Prosody or “melody in speech” in autism spectrum disorder (ASD) is often perceived as atypical. This study examined perception and production of statements and questions in 84 children, adolescents and adults with and without ASD, as well as participants’ pitch direction discrimination thresholds. The results suggested that the abilities to discriminate (in both speech and music conditions), identify, and imitate statement-question intonation were intact in individuals with ASD across age cohorts. Sensitivity to pitch direction predicted performance on intonation processing in both groups, who also exhibited similar developmental changes. These findings provide evidence for shared mechanisms in pitch processing between speech and music, as well as associations between low- and high-level pitch processing and between perception and production of pitch.
9. Individuals with autism spectrum disorder are impaired in absolute but not relative pitch and duration matching in speech and song imitation. Autism Res 2021; 14:2355-2372. PMID: 34214243. DOI: 10.1002/aur.2569.
Abstract
Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation. However, few studies have identified clear quantitative characteristics of vocal imitation in ASD. This study investigated imitation of speech and song in English-speaking individuals with and without ASD and its modulation by age. Participants consisted of 25 autistic children and 19 autistic adults, who were compared to 25 children and 19 adults with typical development matched on age, gender, musical training, and cognitive abilities. The task required participants to imitate speech and song stimuli with varying pitch and duration patterns. Acoustic analyses of the imitation performance suggested that individuals with ASD were worse than controls on absolute pitch and duration matching for both speech and song imitation, although they performed as well as controls on relative pitch and duration matching. Furthermore, the two groups produced similar numbers of pitch contour, pitch interval, and time errors. Across both groups, sung pitch was imitated more accurately than spoken pitch, whereas spoken duration was imitated more accurately than sung duration. Children imitated spoken pitch more accurately than adults for speech stimuli, whereas age showed no significant relationship to song imitation. These results reveal a vocal imitation deficit across speech and music domains in ASD that is specific to absolute pitch and duration matching. This finding provides evidence for shared mechanisms between speech and song imitation, which involve independent implementation of relative versus absolute features. LAY SUMMARY: Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation of actions and gestures. Characteristics of vocal imitation in ASD remain unclear.
By comparing speech and song imitation, this study shows that individuals with ASD have a vocal imitative deficit that is specific to absolute pitch and duration matching, while performing as well as controls on relative pitch and duration matching, across speech and music domains.
10. Change in left inferior frontal connectivity with less unexpected harmonic cadence by musical expertise. PLoS One 2019; 14:e0223283. PMID: 31714920. PMCID: PMC6850538. DOI: 10.1371/journal.pone.0223283.
Abstract
In terms of harmonic expectancy, compared to an expected dominant-to-tonic and an unexpected dominant-to-supertonic, a dominant-to-submediant is a less unexpected cadence, the perception of which may depend on the subject’s musical expertise. The present study investigated how the aforementioned three cadences are processed in the networks of the bilateral inferior frontal gyri (IFGs) and superior temporal gyri (STGs) using magnetoencephalography. We compared the correct rate and brain connectivity in 9 music majors (mean age, 23.5 ± 3.4 years; musical training period, 18.7 ± 4.0 years) and 10 non-music majors (mean age, 25.2 ± 2.6 years; musical training period, 4.2 ± 1.5 years). For brain connectivity, we computed the summation of partial directed coherence (PDC) values for inflows/outflows to/from each area (sPDCi/sPDCo) in the bilateral IFGs and STGs. In the behavioral responses, music majors were better than non-music majors for all three cadences (p < 0.05). However, sPDCi/sPDCo was prominent only for the dominant-to-submediant in the left IFG. The sPDCi was more strongly enhanced in music majors than in non-music majors (p = 0.002, Bonferroni corrected), while the opposite held for the sPDCo (p = 0.005, Bonferroni corrected). Our data show that music majors, with higher musical expertise, are better at identifying a less unexpected cadence than non-music majors, with connectivity changes centered on the left IFG.
11. Born to Speak and Sing: Musical Predictors of Language Development in Pre-schoolers. Front Psychol 2019; 10:948. PMID: 31231260. PMCID: PMC6558368. DOI: 10.3389/fpsyg.2019.00948.
Abstract
The relationship between musical and linguistic skills has received particular attention in infants and school-aged children. However, very little is known about pre-schoolers, leaving a gap in our understanding of how these skills develop concurrently during this period. Moreover, attention has been focused on the effects of formal musical training, while neglecting the influence of informal musical activities at home. To address these gaps, in Study 1, 3- and 4-year-old children (n = 40) performed novel musical tasks (perception and production) adapted for young children, in order to examine the link between musical skills and the development of key language capacities, namely grammar and phonological awareness. In Study 2, we investigated the influence of informal musical experience at home on the musical and linguistic skills of young pre-schoolers, using the same evaluation tools. We found systematic associations between distinct musical and linguistic skills. Rhythm perception and production were the best predictors of phonological awareness, while melody perception was the best predictor of grammar acquisition, a novel association not previously observed in developmental research. These associations could not be explained by variability in general cognitive functioning, such as verbal memory and non-verbal abilities. Thus, selective music-related auditory and motor skills are likely to underpin different aspects of language development and can be dissociated in pre-schoolers. We also found that informal musical experience at home contributes to the development of grammar, and that the effect of musical skills on both phonological awareness and grammar is mediated by home musical experience. These findings pave the way for the development of dedicated musical activities for pre-schoolers to support specific areas of language development.
12.
Abstract
Age-related differences in episodic memory have been explained by a decrement in the implementation of strategic encoding. It has been shown in clinical populations that music can be used during the encoding stage as a mnemonic strategy for learning verbal information. The effectiveness of this strategy remains equivocal in older adults (OA). Furthermore, the impact of the emotional valence of the music used has never been investigated in this context. Thirty OA and 24 young adults (YA) learned texts that were either set to positively or negatively valenced music, or spoken only. Immediate and delayed recall were measured. Results showed that: (i) OA perform worse than YA in immediate and delayed recall; (ii) sung lyrics are better remembered than spoken ones in OA, but only when the associated music is positively valenced; (iii) this pattern is observed regardless of the retention delay. These findings support the benefit of musical encoding on verbal learning in healthy OA and are consistent with the positivity effect classically reported in normal aging. Beyond the potential applications in daily life, the results are discussed with respect to theoretical hypotheses about the mechanisms underlying the advantage of musical encoding.
13. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain. Cognition 2017; 161:31-45. PMID: 28103526. PMCID: PMC5348576. DOI: 10.1016/j.cognition.2017.01.001.
Abstract
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general ‘musical’ aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes.
14. Language and thought are not the same thing: evidence from neuroimaging and neurological patients. Ann N Y Acad Sci 2016; 1369:132-153. PMID: 27096882. PMCID: PMC4874898. DOI: 10.1111/nyas.13046.
Abstract
Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Surprisingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person's thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain's language areas when they understand a sentence, but not when they perform other nonlinguistic tasks such as arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Together, these two complementary lines of evidence provide a clear answer: many aspects of thought engage distinct brain regions from, and do not depend on, language.
15.
Abstract
The cortical correlates of speech and music perception are essentially overlapping, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e. violations of pitch) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. MEG data confirmed the existence of intertwined networks for the sung and spoken modalities in an early time window after word violation. In essence, for this early response, higher activity was measured after melodic/prosodic than after semantic violations in predominantly right temporal areas. For singers as well as for actors, modality-specific effects were evident as predominantly left-lateralized temporal activity after semantic expectancy violations in the spoken modality, and right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a special group-dependent audiation process, higher neuronal activity for singers appeared in a late time window in right temporal and left parietal areas, after both the recited and the sung sequences.
18.
Abstract
BACKGROUND: Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed.
METHOD: Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition.
RESULTS: Highly significant deficits were seen between patients and controls across auditory tasks (p < 0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition.
DISCUSSION: This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia.
19. Syntax in a pianist's hand: ERP signatures of "embodied" syntax processing in music. Cortex 2013; 49:1325-1339. DOI: 10.1016/j.cortex.2012.06.007.
20. The cortical representation of simple mathematical expressions. Neuroimage 2012; 61:1444-1460. DOI: 10.1016/j.neuroimage.2012.04.020.
21. Brains "in concert": Frontal oscillatory alpha rhythms and empathy in professional musicians. Neuroimage 2012; 60:105-116. DOI: 10.1016/j.neuroimage.2011.12.008.
22. Shadows of music-language interaction on low frequency brain oscillatory patterns. Brain Lang 2011; 119:50-57. PMID: 21683995. DOI: 10.1016/j.bandl.2011.05.009.
Abstract
Electrophysiological studies investigating similarities between music and language perception have relied exclusively on the signal averaging technique, which does not adequately represent oscillatory aspects of electrical brain activity that are relevant for higher cognition. The current study investigated the patterns of brain oscillations during simultaneous processing of music and language using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular or irregular chord functions were presented in sync with syntactically or semantically correct or incorrect words. Irregular chord functions (presented simultaneously with a syntactically correct word) produced an early (150-250 ms) spectral power decrease over anterior frontal regions in the theta band (5-7 Hz) and a late (350-700 ms) power increase in both the delta and the theta band (2-7 Hz) over parietal regions. Syntactically incorrect words (presented simultaneously with a regular chord) elicited a similar late power increase in the delta-theta band over parietal sites, but no early effect. Interestingly, the late effect was significantly diminished when the language-syntactic and music-syntactic irregularities occurred at the same time. Further, the presence of a semantic violation occurring simultaneously with regular chords produced a significant increase in later delta-theta power at posterior regions; this effect was marginally decreased when the identical semantic violation occurred simultaneously with a music-syntactic violation. Altogether, these results show that low-frequency oscillatory networks are activated during the syntactic processing of both music and language, and further, these networks may be shared.
|
23
|
|
24
|
sLORETA allows reliable distributed source reconstruction based on subdural strip and grid recordings. Hum Brain Mapp 2011; 33:1172-88. [PMID: 21618659 DOI: 10.1002/hbm.21276] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2009] [Revised: 12/20/2010] [Accepted: 01/04/2011] [Indexed: 11/09/2022] Open
Abstract
Source localization based on invasive recordings from subdural strip and grid electrodes is a topic of increasing interest. This simulation study addresses the question of which factors are relevant for reliable source reconstruction based on sLORETA. The MRI and electrode positions of a patient undergoing invasive presurgical epilepsy diagnostics formed the basis of the sLORETA simulations. A boundary element head model derived from the MRI was used for the simulation of electrical potentials and for source reconstruction. Focal dipolar sources distributed on a regular three-dimensional lattice and spatiotemporally distributed patches served as input for the simulation. In addition to the distance between the original and reconstructed source maxima, the activation volume of the reconstruction and the correlation of time courses between the original and reconstructed sources were investigated. The simulations were supplemented by localization of the patient's spike activity. For noise-free simulated data, sLORETA achieved zero localization error. With added noise, the percentage of reliable source localizations (localization error ≤15 mm) dropped to 67.8%. Only for source positions close to the electrode contacts did the activation volume correctly represent focal generators. Time courses of original and reconstructed sources were significantly correlated. The case-study results showed accurate localization. sLORETA is a distributed source model that can be applied for reliable grid- and strip-based source localization. For distant source positions, overestimation of the extent of the generator has to be taken into account. sLORETA-based source reconstruction has the potential to improve the localization of distributed generators in presurgical epilepsy diagnostics and cognitive neuroscience.
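A small sketch of the reliability criterion used in the simulation study above: a reconstruction counts as reliable when the Euclidean distance between the original and reconstructed source maxima is at most 15 mm. The coordinates below are invented for illustration.

```python
import numpy as np

def localization_error(p_true, p_est):
    """Euclidean distance (in mm) between two source positions."""
    return float(np.linalg.norm(np.asarray(p_true) - np.asarray(p_est)))

# Hypothetical original vs. reconstructed source maxima, in mm.
err = localization_error([30.0, -20.0, 45.0], [33.0, -16.0, 57.0])
reliable = err <= 15.0
print(err, reliable)  # prints 13.0 True
```

The offset vector here is (3, 4, 12) mm, so the error is sqrt(9 + 16 + 144) = 13 mm, within the study's ≤15 mm threshold.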
|
25
|
Auditory cortical volumes and musical ability in Williams syndrome. Neuropsychologia 2010; 48:2602-9. [PMID: 20457168 DOI: 10.1016/j.neuropsychologia.2010.05.007] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2009] [Revised: 04/26/2010] [Accepted: 05/01/2010] [Indexed: 11/19/2022]
Abstract
Individuals with Williams syndrome (WS) have been shown to have atypical morphology in the auditory cortex, an area associated with aspects of musicality. Some individuals with WS have demonstrated specific musical abilities, despite intellectual delays. Primary auditory cortex and planum temporale volumes were manually segmented in 25 individuals with WS and 25 control participants, and the participants also underwent testing of musical abilities. Left and right planum temporale volumes were significantly larger in the participants with WS than in controls, with no significant difference noted between groups in planum temporale asymmetry or primary auditory cortical volumes. Left planum temporale volume was significantly increased in a subgroup of the participants with WS who demonstrated specific musical strengths, as compared to the remaining WS participants, and was highly correlated with scores on a musical task. These findings suggest that differences in musical ability within WS may be in part associated with variability in the left auditory cortical region, providing further evidence of cognitive and neuroanatomical heterogeneity within this syndrome.
|