1. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024; 115:206-225. PMID: 37851369. DOI: 10.1111/bjop.12684.
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
2. Encoding of melody in the human auditory cortex. Sci Adv 2024; 10:eadk0010. PMID: 38363839. PMCID: PMC10871532. DOI: 10.1126/sciadv.adk0010.
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.
3. The transformative power of music: Insights into neuroplasticity, health, and disease. Brain Behav Immun Health 2024; 35:100716. PMID: 38178844. PMCID: PMC10765015. DOI: 10.1016/j.bbih.2023.100716.
Abstract
Music is a universal language that can elicit profound emotional and cognitive responses. In this literature review, we explore the intricate relationship between music and the brain, from how it is decoded by the nervous system to its therapeutic potential in various disorders. Music engages a diverse network of brain regions and circuits, including sensory-motor processing, cognitive, memory, and emotional components. Music-induced brain network oscillations occur in specific frequency bands, and listening to one's preferred music can grant easier access to these brain functions. Moreover, music training can bring about structural and functional changes in the brain, and studies have shown its positive effects on social bonding, cognitive abilities, and language processing. We also discuss how music therapy can be used to retrain impaired brain circuits in different disorders. Understanding how music affects the brain can open up new avenues for music-based interventions in healthcare, education, and wellbeing.
4. Subcortical responses to music and speech are alike while cortical responses diverge. Sci Rep 2024; 14:789. PMID: 38191488. PMCID: PMC10774448. DOI: 10.1038/s41598-023-50438-0.
Abstract
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher-level cognition. Studies of the cortex have revealed distinct brain responses to music and speech, but these differences may emerge in the cortex itself or may be inherited from differences in subcortical encoding. In the first part of this study, we derived the human auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and based on stimulus acoustics, yielded very different ABRs for the two sound classes. The second method, developed here and based on a physiological model of the auditory periphery, instead gave highly correlated responses to music and speech. We established the superiority of the second method through several metrics, suggesting that stimulus class (i.e., music vs. speech) has no appreciable impact on how stimulus acoustics are encoded subcortically. In the second part of the study, we considered the cortex. Our new analysis method made cortical responses to music and speech more similar, but differences remained. Taken together, the subcortical and cortical results suggest stimulus-class-dependent processing of music and speech at the cortical, but not the subcortical, level.
5. Music and verbal ability - a twin study of genetic and environmental associations. Psychol Aesthet Creat Arts 2023; 17:675-681. PMID: 38269365. PMCID: PMC10805386. DOI: 10.1037/aca0000401.
Abstract
Musical aptitude and music training are associated with language-related cognitive outcomes, even when controlling for general intelligence. However, genetic and environmental influences on these associations have not been studied, and it remains unclear whether music training can causally increase verbal ability. In a sample of 1,336 male twins, we tested the associations between verbal ability, measured at the time of conscription at age 18, and two music-related variables: overall musical aptitude and total amount of music training before the age of 18. Using classical twin modelling, we estimated the specific genetic and environmental influences on the association between verbal ability and musical aptitude, over and above the factors shared with general intelligence. Further, we tested whether music training could causally influence verbal ability using a co-twin control analysis. Musical aptitude and music training were significantly associated with verbal ability, and controlling for general intelligence only slightly attenuated the correlations. The partial association between musical aptitude and verbal ability, corrected for general intelligence, was mostly explained by shared genetic factors (50%) and non-shared environmental influences (35%). The co-twin control analysis gave no support for a causal effect of early music training on verbal ability at age 18. Overall, our findings in a sizeable population sample converge with known associations between the music and language domains, while the twin-modelling results suggest that this reflects a shared underlying aetiology rather than causal transfer.
6. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. PMID: 37783820. DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature on the neural and physiological mechanisms of song production and perception and show that it provides evidence for key differences between song and speech processing. We conclude by discussing how the notion that song processing is special might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
7. Brain-imaging evidence for compression of binary sound sequences in human memory. eLife 2023; 12:e84376. PMID: 37910588. PMCID: PMC10619979. DOI: 10.7554/elife.84376.
Abstract
According to the language-of-thought hypothesis, regular sequences are compressed in human memory using recursive loops akin to a mental program that predicts future items. We tested this theory by probing memory for 16-item sequences made of two sounds. We recorded brain activity with functional MRI and magnetoencephalography (MEG) while participants listened to a hierarchy of sequences of variable complexity, whose minimal description required transition probabilities, chunking, or nested structures. Occasional deviant sounds probed the participants' knowledge of the sequence. We predicted that task difficulty and brain activity would be proportional to the complexity derived from the minimal description length in our formal language. Furthermore, activity should increase with complexity for learned sequences, and decrease with complexity for deviants. These predictions were upheld in both fMRI and MEG, indicating that sequence predictions are highly dependent on sequence structure and become weaker and delayed as complexity increases. The proposed language recruited bilateral superior temporal, precentral, anterior intraparietal, and cerebellar cortices. These regions overlapped extensively with a localizer for mathematical calculation, and much less with spoken or written language processing. We propose that these areas collectively encode regular sequences as repetitions with variations and their recursive composition into nested structures.
8. Encoding of melody in the human auditory cortex. bioRxiv [Preprint] 2023:2023.10.17.562771. PMID: 37905047. PMCID: PMC10614915. DOI: 10.1101/2023.10.17.562771.
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex.

Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.
9. Differential effects of ageing on the neural processing of speech and singing production. Front Aging Neurosci 2023; 15:1236971. PMID: 37731954. PMCID: PMC10507273. DOI: 10.3389/fnagi.2023.1236971.
Abstract
Background: Understanding healthy brain ageing has become vital as populations are ageing rapidly and age-related brain diseases are becoming more common. In normal brain ageing, speech processing undergoes functional reorganisation involving reductions of hemispheric asymmetry and overactivation in the prefrontal regions. However, little is known about how these changes generalise to other vocal production, such as singing, and how they are affected by associated cognitive demands.
Methods: The present cross-sectional fMRI study systematically maps the neural correlates of vocal production across adulthood (N = 100, age 21-88 years) using a balanced 2x3 design in which tasks varied in modality (speech: proverbs / singing: song phrases) and cognitive demand (repetition / completion from memory / improvisation).
Results: In speech production, ageing was associated with decreased left pre- and postcentral activation across tasks, and with increased bilateral angular and right inferior temporal and fusiform activation in the improvisation task. In singing production, ageing was associated with increased activation in medial and bilateral prefrontal and parietal regions in the completion task, whereas the other tasks showed no ageing effects. Direct comparisons between the modalities showed larger age-related activation changes in speech than in singing across tasks, including a larger left-to-right shift in lateral prefrontal regions in the improvisation task.
Conclusion: These results suggest that the brain's singing network undergoes differential functional reorganisation in normal ageing compared with the speech network, particularly during a task with high executive demand. These findings are relevant for understanding the effects of ageing on vocal production, as well as how singing can support communication in healthy ageing and neurological rehabilitation.
10. Music can be reconstructed from human auditory cortex activity using nonlinear decoding models. PLoS Biol 2023; 21:e3002176. PMID: 37582062. PMCID: PMC10427021. DOI: 10.1371/journal.pbio.3002176.
Abstract
Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), identified a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling to short datasets acquired from single patients, paving the way for adding musical elements to brain-computer interface (BCI) applications.
11. Music as a window into real-world communication. Front Psychol 2023; 14:1012839. PMID: 37496799. PMCID: PMC10368476. DOI: 10.3389/fpsyg.2023.1012839.
Abstract
Communication has been studied extensively in the context of speech and language. While speech is tremendously effective at transferring ideas between people, music is another communicative mode that has a unique power to bring people together and transmit a rich tapestry of emotions, through joint music-making and listening in a variety of everyday contexts. Research has begun to examine the behavioral and neural correlates of the joint action required for successful musical interactions, but it has yet to fully account for the rich, dynamic, multimodal nature of musical communication. We review the current literature in this area and propose that naturalistic musical paradigms will open up new ways to study communication more broadly.
12. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. PMID: 37005063. PMCID: PMC10505454. DOI: 10.1093/cercor/bhad087.
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
13. Language experience predicts music processing in a half-million speakers of fifty-four languages. Curr Biol 2023; 33:1916-1925.e4. PMID: 37105166. PMCID: PMC10306420. DOI: 10.1016/j.cub.2023.03.067.
Abstract
Tonal languages differ from other languages in their use of pitch (tones) to distinguish words. Lifelong experience speaking and hearing tonal languages has been argued to shape auditory processing in ways that generalize beyond the perception of linguistic pitch to the perception of pitch in other domains like music. We conducted a meta-analysis of prior studies testing this idea, finding moderate evidence supporting it. But prior studies were limited by mostly small sample sizes representing a small number of languages and countries, making it challenging to disentangle the effects of linguistic experience from variability in music training, cultural differences, and other potential confounds. To address these issues, we used web-based citizen science to assess music perception skill on a global scale in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba). We compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian). Whether or not participants had taken music lessons, native speakers of all 19 tonal languages had an improved ability to discriminate musical melodies on average, relative to speakers of non-tonal languages. But this improvement came with a trade-off: tonal language speakers were also worse at processing the musical beat. The results, which held across native speakers of many diverse languages and were robust to geographic and demographic variation, demonstrate that linguistic experience shapes music perception, with implications for relations between music, language, and culture in the human mind.
14. Language and music: Singing voices and music talent. Curr Biol 2023; 33:R418-R420. PMID: 37220737. DOI: 10.1016/j.cub.2023.03.086.
Abstract
Native speakers of tonal languages show enhanced musical melody perception but diminished rhythm abilities. This effect has now been rigorously demonstrated in a new study that tested the musical IQ of half a million human participants across the globe.
15. Sounds Pleasantness Ratings in Autism: Interaction Between Social Information and Acoustical Noise Level. J Autism Dev Disord 2023. PMID: 37118645. DOI: 10.1007/s10803-023-05989-6.
Abstract
A lack of response to voices and a strong interest in music are among the behavioral expressions commonly (self-)reported in Autism Spectrum Disorder (ASD). These atypical interests in vocal and musical sounds could be attributable to different levels of acoustical noise, quantified by the harmonic-to-noise ratio (HNR). No previous study has investigated explicit auditory pleasantness in ASD by comparing vocal and non-vocal sounds in relation to acoustic noise level. The aim of this study was to evaluate auditory pleasantness objectively. Sixteen adults on the autism spectrum and 16 matched neurotypical (NT) adults rated the likeability of vocal and non-vocal sounds with varying HNR levels. A group-by-category interaction in pleasantness judgements revealed that participants on the autism spectrum judged vocal sounds as less pleasant than non-vocal sounds, an effect not found in NT participants. A category-by-HNR interaction revealed that participants in both groups rated non-vocal sounds with a high HNR as more pleasant. A significant group-by-HNR interaction revealed that, compared with NT participants, people on the autism spectrum tended to judge high-HNR sounds as less pleasant and low-HNR sounds as more pleasant. The acoustical noise level of sounds alone therefore does not appear to explain the atypical interest in voices and the greater interest in music in ASD.
16. Auditory Electrophysiological and Perceptual Measures in Student Musicians with High Sound Exposure. Diagnostics (Basel) 2023; 13:934. PMID: 36900080. PMCID: PMC10000734. DOI: 10.3390/diagnostics13050934.
Abstract
This study aimed to determine (a) the influence of noise exposure background (NEB) on peripheral and central auditory system functioning and (b) the influence of NEB on speech recognition in noise in student musicians. Twenty non-musician students with self-reported low NEB and 18 student musicians with self-reported high NEB completed a test battery consisting of physiological measures, including auditory brainstem responses (ABRs) at three stimulus rates (11.3 Hz, 51.3 Hz, and 81.3 Hz) and the P300, and behavioral measures, including conventional and extended high-frequency audiometry, the consonant-vowel nucleus-consonant (CNC) word test, and the AzBio sentence test for assessing speech perception in noise at -9, -6, -3, 0, and +3 dB signal-to-noise ratios (SNRs). NEB was negatively associated with performance on the CNC test at all five SNRs. A negative association was also found between NEB and performance on the AzBio test at 0 dB SNR. No effect of NEB was found on the amplitude or latency of the P300 or on the ABR wave I amplitude. Larger datasets spanning a wider range of NEB, together with longitudinal measurements, are needed to investigate the influence of NEB on word recognition in noise and to understand the specific cognitive processes contributing to that influence.
17. Development and validation of the first adaptive test of emotion perception in music. Cogn Emot 2023; 37:284-302. PMID: 36592153. DOI: 10.1080/02699931.2022.2162003.
Abstract
The Musical Emotion Discrimination Task (MEDT) is a short, non-adaptive test of the ability to discriminate emotions in music. Test-takers hear two performances of the same melody, both played by the same performer but each trying to communicate a different basic emotion, and are asked to determine which one is "happier", for example. The goal of the current study was to construct a new version of the MEDT using a larger set of shorter, more diverse music clips and an adaptive framework to expand the ability range for which the test can deliver measurements. The first study analysed responses from a large sample of participants (N = 624) to determine how musical features contributed to item difficulty, which resulted in a quantitative model of musical emotion discrimination ability rooted in Item Response Theory (IRT). This model informed the construction of the adaptive MEDT. A second study contributed preliminary evidence for the validity and reliability of the adaptive MEDT, and demonstrated that the new version of the test is suitable for a wider range of abilities. This paper therefore presents the first adaptive musical emotion discrimination test, a new resource for investigating emotion processing which is freely available for research use.
18. Auditory rhythm discrimination in adults who stutter: An fMRI study. Brain Lang 2023; 236:105219. PMID: 36577315. DOI: 10.1016/j.bandl.2022.105219.
Abstract
Rhythm perception deficits have been linked to neurodevelopmental disorders affecting speech and language. Children who stutter have shown poorer rhythm discrimination and attenuated functional connectivity in rhythm-related brain areas, which may negatively impact timing control required for speech. It is unclear whether adults who stutter (AWS), who are likely to have acquired compensatory adaptations in response to rhythm processing/timing deficits, are similarly affected. We compared rhythm discrimination in AWS and controls (total n = 36) during fMRI in two matched conditions: simple rhythms that consistently reinforced a periodic beat, and complex rhythms that did not (requiring greater reliance on internal timing). Consistent with an internal beat deficit hypothesis, behavioral results showed poorer complex rhythm discrimination for AWS than controls. In AWS, greater stuttering severity was associated with poorer rhythm discrimination. AWS showed increased activity within beat-based timing regions and increased functional connectivity between putamen and cerebellum (supporting interval-based timing) for simple rhythms.
19. Dissociating Language and Thought in Human Reasoning. Brain Sci 2022; 13:67. PMID: 36672048. PMCID: PMC9856203. DOI: 10.3390/brainsci13010067.
Abstract
What is the relationship between language and complex thought? In the context of deductive reasoning there are two main views. Under the first, which we label here the language-centric view, language is central to the syntax-like combinatorial operations of complex reasoning. Under the second, which we label here the language-independent view, these operations are dissociable from the mechanisms of natural language. We applied continuous theta burst stimulation (cTBS), a form of noninvasive neuromodulation, to healthy adult participants to transiently inhibit a subregion of Broca's area (left BA44) associated in prior work with parsing the syntactic relations of natural language. We similarly inhibited a subregion of dorsomedial frontal cortex (left medial BA8) which has been associated with core features of logical reasoning. There was a significant interaction between task and stimulation site. Post hoc tests revealed that performance on the linguistic reasoning task, but not the deductive reasoning task, was significantly impaired after inhibition of left BA44, whereas performance on the deductive reasoning task, but not the linguistic reasoning task, was reduced (though not significantly) after inhibition of left medial BA8. Subsequent linear contrasts supported this pattern. These novel results suggest that deductive reasoning may be dissociable from linguistic processes in the adult human brain, consistent with the language-independent view.
|
20
|
The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:615-664. [PMID: 36742012 PMCID: PMC9893227 DOI: 10.1162/nol_a_00079] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 08/08/2022] [Indexed: 04/18/2023]
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
|
21
|
Does music training enhance auditory and linguistic processing? A systematic review and meta-analysis of behavioral and brain evidence. Neurosci Biobehav Rev 2022; 140:104777. [PMID: 35843347 DOI: 10.1016/j.neubiorev.2022.104777] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 07/11/2022] [Accepted: 07/12/2022] [Indexed: 02/02/2023]
Abstract
It is often claimed that music training improves auditory and linguistic skills. Results of individual studies are mixed, however, and most evidence is correlational, precluding inferences of causation. Here, we evaluated data from 62 longitudinal studies that examined whether music training programs affect behavioral and brain measures of auditory and linguistic processing (N = 3928). For the behavioral data, a multivariate meta-analysis revealed a small positive effect of music training on both auditory and linguistic measures, regardless of the type of assignment (random vs. non-random), training (instrumental vs. non-instrumental), and control group (active vs. passive). The trim-and-fill method provided suggestive evidence of publication bias, but meta-regression methods (PET-PEESE) did not. For the brain data, a narrative synthesis also documented benefits of music training, namely for measures of auditory processing and for measures of speech and prosody processing. Thus, the available literature provides evidence that music training produces small neurobehavioral enhancements in auditory and linguistic processing, although future studies are needed to confirm that such enhancements are not due to publication bias.
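The PET-PEESE correction referenced above can be sketched in a few lines: PET fits a weighted regression of observed effect sizes on their standard errors, and the intercept estimates the effect a hypothetical study with zero standard error would show, i.e. adjusted for small-study bias. A minimal illustration on synthetic effect sizes (the data and the simple solver below are assumptions for demonstration, not the meta-analysis itself):

```python
import numpy as np

def pet_intercept(effects, ses):
    """PET: inverse-variance-weighted regression of effect size on
    standard error. The intercept approximates the effect expected
    for a study with zero standard error (bias-adjusted estimate)."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                              # inverse-variance weights
    X = np.column_stack([np.ones_like(ses), ses]) # [intercept, SE]
    WX = X * w[:, None]
    # Weighted least squares via the normal equations: (X'WX) b = X'W y
    beta = np.linalg.solve(X.T @ WX, WX.T @ effects)
    return beta[0]

# Synthetic example: true effect 0.2, sampling noise scaled to each SE
rng = np.random.default_rng(0)
ses = rng.uniform(0.05, 0.3, size=50)
effects = 0.2 + rng.normal(0, ses)
est = pet_intercept(effects, ses)
```

PEESE works the same way but regresses on the variance (`ses**2`) instead of the standard error; a positive slope in either model is the small-study asymmetry the trim-and-fill method also probes.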
|
22
|
Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. [PMID: 35569784 DOI: 10.1016/j.neuroimage.2022.119310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 04/26/2022] [Accepted: 05/11/2022] [Indexed: 11/30/2022] Open
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem-processing towards the left hemisphere and a bias of song-processing towards the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs.
These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
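The PSA measure used in this study can be approximated as the fraction of autocorrelation lags of a pitch contour that exceed a white-noise significance bound; a more repetitive, song-like contour yields a higher value. The lag range, significance bound, and synthetic contours below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def proportion_significant_autocorr(pitch, max_lag=40):
    """Proportion of autocorrelation lags whose magnitude exceeds the
    rough large-sample white-noise bound 1.96 / sqrt(N). Higher values
    indicate a more repetitive, melody-like pitch contour."""
    x = np.asarray(pitch, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    acf = np.array([np.dot(x[:n - k], x[k:]) / denom
                    for k in range(1, max_lag + 1)])
    bound = 1.96 / np.sqrt(n)
    return float(np.mean(np.abs(acf) > bound))

# Toy contours: a periodic "sung" F0 track vs. an aperiodic "spoken" one
rng = np.random.default_rng(1)
t = np.arange(400)
sung = 200 + 30 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, 400)
spoken = 200 + rng.normal(0, 30, 400)
psa_sung = proportion_significant_autocorr(sung)
psa_spoken = proportion_significant_autocorr(spoken)
```

On these toy signals the periodic contour produces a much higher PSA than the aperiodic one, matching the intuition that musical settings have more autocorrelated pitch than speech.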
|
23
|
Neural responses in human superior temporal cortex support coding of voice representations. PLoS Biol 2022; 20:e3001675. [PMID: 35900975 PMCID: PMC9333263 DOI: 10.1371/journal.pbio.3001675] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 05/13/2022] [Indexed: 11/19/2022] Open
Abstract
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction. Voice perception occurs via specialized networks in higher order auditory cortex, but how voice features are encoded remains a central unanswered question. Using human intracerebral recordings of auditory cortex, this study provides evidence for categorical encoding of voice.
|
24
|
Mental representations of speech and musical pitch contours reveal a diversity of profiles in autism spectrum disorder. AUTISM : THE INTERNATIONAL JOURNAL OF RESEARCH AND PRACTICE 2022; 27:629-646. [PMID: 35848413 PMCID: PMC10074762 DOI: 10.1177/13623613221111207] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
LAY ABSTRACT As a key auditory attribute of sounds, pitch is ubiquitous in our everyday listening experience involving language, music and environmental sounds. Given its critical role in auditory processing related to communication, numerous studies have investigated pitch processing in autism spectrum disorder. However, the findings have been mixed, reporting either enhanced, typical or impaired performance among autistic individuals. By investigating top-down comparisons of internal mental representations of pitch contours in speech and music, this study shows for the first time that, while autistic individuals exhibit diverse profiles of pitch processing compared to non-autistic individuals, their mental representations of pitch contours are typical across domains. These findings suggest that pitch-processing mechanisms are shared across domains in autism spectrum disorder and provide theoretical implications for using music to improve speech for those autistic individuals who have language problems.
|
25
|
Abstract
Through long-term training, music experts acquire complex and specialized sensorimotor skills, which are paralleled by continuous neuro-anatomical and -functional adaptations. The underlying neuroplasticity mechanisms have been extensively explored in decades of research in music, cognitive, and translational neuroscience. However, the absence of a comprehensive review and quantitative meta-analysis prevents the plethora of variegated findings from ultimately converging into a unified picture of the neuroanatomy of musical expertise. Here, we performed a comprehensive neuroimaging meta-analysis of publications investigating neuro-anatomical and -functional differences between musicians (M) and non-musicians (NM). Eighty-four studies were included in the qualitative synthesis. From these, 58 publications were included in coordinate-based meta-analyses using the anatomic/activation likelihood estimation (ALE) method. This comprehensive approach delivers a coherent cortico-subcortical network encompassing sensorimotor and limbic regions bilaterally. Particularly, M exhibited higher volume/activity in auditory, sensorimotor, interoceptive, and limbic brain areas and lower volume/activity in parietal areas as opposed to NM. Notably, we reveal topographical (dis-)similarities between the identified functional and anatomical networks and characterize their link to various cognitive functions by means of meta-analytic connectivity modelling. Overall, we effectively synthesized decades of research in the field and provide a consistent and controversy-free picture of the neuroanatomy of musical expertise.
|
26
|
A neural population selective for song in human auditory cortex. Curr Biol 2022; 32:1470-1484.e12. [PMID: 35196507 PMCID: PMC9092957 DOI: 10.1016/j.cub.2022.01.069] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 10/26/2021] [Accepted: 01/24/2022] [Indexed: 12/18/2022]
Abstract
How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
|
27
|
Pitch Discrimination Testing in Patients with a Voice Disorder. J Clin Med 2022; 11:jcm11030584. [PMID: 35160036 PMCID: PMC8836960 DOI: 10.3390/jcm11030584] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 01/17/2022] [Accepted: 01/18/2022] [Indexed: 02/01/2023] Open
Abstract
Auditory perception plays an important role in voice control. Pitch discrimination (PD) is a key index of auditory perception and is influenced by a variety of factors. Little is known about the potential effects of voice disorders on PD and whether PD testing can differentiate people with and without a voice disorder. We thus evaluated PD in a voice-disordered group (n = 71) and a non-voice-disordered control group (n = 80). The voice disorders included muscle tension dysphonia and neurological voice disorders, and all participants underwent PD testing as part of a comprehensive voice assessment. Percentage of accurate responses and PD threshold were compared across groups. The PD percentage accuracy was significantly lower in the voice-disordered group than the control group, irrespective of musical background. Participants with voice disorders also required a larger PD threshold to correctly discriminate pitch differences. The mean PD threshold significantly discriminated the voice-disordered group from the control group. These results have implications for voice control and the pathogenesis of voice disorders. They support the inclusion of PD testing during comprehensive voice assessment and throughout the treatment process for patients with voice disorders.
|
28
|
Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers. Front Neurosci 2022; 15:764342. [PMID: 35058741 PMCID: PMC8763673 DOI: 10.3389/fnins.2021.764342] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/22/2021] [Indexed: 12/05/2022] Open
Abstract
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, with the scores of LOWs significantly differing from the normal distribution, with scores in the ∼30th percentile. While HIGHs more often reported musical training compared to LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated subscores of the Gold-MSI were decorrelated, the subscales Musical Perception and Musical Training in particular allowed the speech-to-speech synchronization behavior to be inferred. Differential effects of musical perception and training were observed, with training predicting auditory-motor synchronization in both groups, but perception doing so only in the HIGHs.
Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from training and perceptual aspects of musical sophistication, suggesting shared mechanisms involved in speech and music perception.
|
29
|
High-Order Areas and Auditory Cortex Both Represent the High-Level Event Structure of Music. J Cogn Neurosci 2022; 34:699-714. [PMID: 35015874 DOI: 10.1162/jocn_a_01815] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
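The comparison between model-derived and human-annotated boundaries described above can be illustrated with a simple tolerance-window match; the 3-second window and the boundary times below are hypothetical, not the study's actual parameters:

```python
def boundary_match_rate(model_bounds, human_bounds, tolerance=3.0):
    """Fraction of human-annotated boundaries (in seconds) that fall
    within `tolerance` seconds of some model-derived boundary."""
    hits = sum(
        any(abs(h - m) <= tolerance for m in model_bounds)
        for h in human_bounds
    )
    return hits / len(human_bounds)

model = [12.0, 31.5, 58.2, 90.0]  # e.g. times of HMM state transitions
human = [11.0, 30.0, 60.0, 75.0]  # e.g. annotated "meaningful changes"
rate = boundary_match_rate(model, human)  # 3 of 4 boundaries match -> 0.75
```

In practice such a match rate is compared against a null distribution obtained by shuffling or randomly placing the model boundaries, which is what makes a regional match "significant."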
|
30
|
Engagement and Arousal effects in predicting the increase of cognitive functioning following a neuromodulation program. ACTA BIO-MEDICA : ATENEI PARMENSIS 2022; 93:e2022248. [PMID: 35775751 PMCID: PMC9335441 DOI: 10.23750/abm.v93i3.13145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 04/22/2022] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND AIM Research in the field of Brain-Computer Interfaces (BCIs) has increased exponentially over the past few years, demonstrating their effectiveness and application in several areas. The main purpose of the present paper was to explore the relevance of user engagement during interaction with a BCI prototype (Neuro-Upper, NU), which aimed at brainwave synchronization through audio-visual entrainment, in the improvement of cognitive performance. METHODS This paper presents findings on data collected from a sample of 18 subjects with clinical disorders who completed about 55 consecutive sessions of 30 min of audio-visual stimulation. The relationship between engagement and improvement of cognitive function (measured through the Intelligence Quotient - IQ) during NU neuromodulation was evaluated through the Index of Cognitive Engagement (ICE), measured by the Pope ratio [Beta / (Alpha + Theta)], and Arousal [(High Beta + Low Beta) / (High Alpha + Low Alpha)]. RESULTS As expected, a significant correlation emerged between engagement and IQ improvement, but no correlation between arousal and IQ improvement. CONCLUSIONS Future research aiming at clarifying the role of arousal in psychological disorders and related symptoms will be essential.
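The two EEG indices quoted above follow directly from their definitions; the band-power values in this sketch are hypothetical (arbitrary units):

```python
def pope_engagement(beta, alpha, theta):
    """Index of Cognitive Engagement (Pope ratio): Beta / (Alpha + Theta)."""
    return beta / (alpha + theta)

def arousal_index(high_beta, low_beta, high_alpha, low_alpha):
    """Arousal: (High Beta + Low Beta) / (High Alpha + Low Alpha)."""
    return (high_beta + low_beta) / (high_alpha + low_alpha)

# Hypothetical band powers, e.g. averaged over one 30-min session
ice = pope_engagement(beta=12.0, alpha=8.0, theta=4.0)  # -> 1.0
ar = arousal_index(high_beta=6.0, low_beta=6.0,
                   high_alpha=4.0, low_alpha=4.0)       # -> 1.5
```

Both are ratios of band powers, so they are unitless and can be tracked session by session regardless of amplifier scaling.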
|
31
|
The influence of memory on the speech-to-song illusion. Mem Cognit 2022; 50:1804-1815. [PMID: 35083717 PMCID: PMC9767999 DOI: 10.3758/s13421-021-01269-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/17/2021] [Indexed: 12/30/2022]
Abstract
In the speech-to-song illusion a spoken phrase is presented repeatedly and begins to sound as if it is being sung. Anecdotal reports suggest that subsequent presentations of a previously heard phrase enhance the illusion, even if several hours or days have elapsed between presentations. In Experiment 1, we examined in a controlled laboratory setting whether memory traces for a previously heard phrase would influence song-like ratings to a subsequent presentation of that phrase. The results showed that word lists that were played several times throughout the experimental session were rated as being more song-like at the end of the experiment than word lists that were played only once in the experimental session. In Experiment 2, we examined if the memory traces that influenced the speech-to-song illusion were abstract in nature or exemplar-based by playing some word lists several times during the experiment in the same voice and playing other word lists several times during the experiment but in different voices. The results showed that word lists played in the same voice were rated as more song-like at the end of the experiment than word lists played in different voices. Many previous studies have examined how various aspects of the stimulus itself influences the perception of the speech-to-song illusion. The results of the present experiments demonstrate that memory traces of the stimulus also influence the speech-to-song illusion.
|
32
|
Beyond the Language Module: Musicality as a Stepping Stone Towards Language Acquisition. EVOLUTIONARY PSYCHOLOGY 2022. [DOI: 10.1007/978-3-030-76000-7_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
|
33
|
Auditory-Motor Mapping Training Facilitates Speech and Word Learning in Tone Language-Speaking Children With Autism: An Early Efficacy Study. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4664-4681. [PMID: 34705567 DOI: 10.1044/2021_jslhr-21-00029] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PURPOSE It has been reported that tone language-speaking children with autism demonstrate speech-specific lexical tone processing difficulty, although they have intact or even better-than-normal processing of nonspeech/melodic pitch analogues. In this early efficacy study, we evaluated the therapeutic potential of Auditory-Motor Mapping Training (AMMT) in facilitating speech and word output for Mandarin-speaking nonverbal and low-verbal children with autism, in comparison with a matched non-AMMT-based control treatment. METHOD Fifteen Mandarin-speaking nonverbal and low-verbal children with autism spectrum disorder participated and completed all the AMMT-based treatment sessions by intoning (singing) and tapping the target words delivered via an app, whereas another 15 participants received control treatment. Generalized linear mixed-effects models were created to evaluate speech production accuracy and word production intelligibility across different groups and conditions. RESULTS Results showed that the AMMT-based treatment provided a more effective training approach in accelerating the rate of speech (especially lexical tone) and word learning in the trained items. More importantly, the enhanced training efficacy on lexical tone acquisition remained at 2 weeks after therapy and generalized to untrained tones that were not practiced. Furthermore, the low-verbal participants showed higher improvement compared to the nonverbal participants. CONCLUSIONS These data provide the first empirical evidence for adopting the AMMT-based training to facilitate speech and word learning in Mandarin-speaking nonverbal and low-verbal children with autism. This early efficacy study holds promise for improving lexical tone production in Mandarin-speaking children with autism but should be further replicated in larger scale randomized studies. Supplemental Material https://doi.org/10.23641/asha.16834627.
|
34
|
EEG Correlates of Middle Eastern Music Improvisations on the Ney Instrument. Front Psychol 2021; 12:701761. [PMID: 34671287 PMCID: PMC8520950 DOI: 10.3389/fpsyg.2021.701761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Accepted: 09/14/2021] [Indexed: 11/27/2022] Open
Abstract
The cognitive sciences have witnessed a growing interest in the cognitive and neural basis of human creativity. Music improvisations constitute an ideal paradigm to study creativity, but the underlying cognitive processes remain poorly understood. In addition, studies on music improvisations using scales other than the major and minor chords are scarce. Middle Eastern music is characterized by the additional use of microtones, resulting in a tonal–spatial system called Maqam. No EEG correlates have been proposed yet for the eight most commonly used maqams. The Ney, an end-blown flute that is popular and widely used in the Middle East, was used by a professional musician to perform 24 improvisations at low, medium, and high tempos. Using the EMOTIV EPOC+, a 14-channel wireless EEG headset, brainwaves were recorded and quantified before and during improvisations. Pairwise comparisons were calculated using IBM-SPSS and a principal component analysis was used to evaluate the variability between the maqams. Significant increases in low-frequency theta and alpha power were observed over left frontal and left temporal areas, as well as significant increases in higher-frequency beta-high and gamma power over right temporal and left parietal areas. This study reports the first EEG observations of the eight most commonly used maqams and proposes EEG signatures for various maqams.
|
35
|
The extended present: an informational context for perception. Acta Psychol (Amst) 2021; 220:103403. [PMID: 34454251 DOI: 10.1016/j.actpsy.2021.103403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 08/04/2021] [Accepted: 08/19/2021] [Indexed: 01/29/2023] Open
Abstract
Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that information in it is thematically connected, both internally and to current attended perceptual input, it is organised in a hierarchical structure, and all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries to the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed and a possible mechanism for representing ordinal and duration information on the time scale of the extended present is proposed. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.
|
36
|
The origins of music in (musi)language. Behav Brain Sci 2021; 44:e104. [PMID: 34590552 DOI: 10.1017/s0140525x20000813] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
The view of music as a byproduct of other cognitive functions has been deemed incomplete or incorrect. Revisiting the six lines of evidence that support this conclusion, it is argued that it remains unclear how the hypothesis that music has its origins in (musi)language can be discarded. Two additional promising research lines that could support or discard the byproduct hypothesis are presented.
|
37
|
Neural Tracking of Speech: Top-Down and Bottom-Up Influences in the Musician's Brain. J Neurosci 2021; 41:6579-6581. [PMID: 34348984 PMCID: PMC8336707 DOI: 10.1523/jneurosci.0756-21.2021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Revised: 05/16/2021] [Accepted: 05/24/2021] [Indexed: 11/21/2022] Open
|
38
|
Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. [PMID: 34311153 DOI: 10.1016/j.cognition.2021.104847] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 05/14/2021] [Accepted: 07/11/2021] [Indexed: 12/16/2022]
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferably engages the abstract rule-based control circuit, musical syntax rather employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
|
39
|
Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference? Cereb Cortex 2021; 32:63-75. [PMID: 34265850 PMCID: PMC8634570 DOI: 10.1093/cercor/bhab194] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Revised: 05/28/2021] [Accepted: 05/28/2021] [Indexed: 12/03/2022] Open
Abstract
In adults, music and speech share many neurocognitive functions, but how do they interact in a developing brain? We compared the effects of music and foreign language training on auditory neurocognition in Chinese children aged 8–11 years. We delivered group-based training programs in music and foreign language using a randomized controlled trial. A passive control group was also included. Before and after these year-long extracurricular programs, auditory event-related potentials were recorded (n = 123 and 85 before and after the program, respectively). Through these recordings, we probed early auditory predictive brain processes. To our surprise, the language program facilitated the children’s early auditory predictive brain processes significantly more than the music program did. This facilitation was most evident in pitch encoding when the experimental paradigm was musically relevant. When these processes were probed by a paradigm more focused on basic sound features, we found early predictive pitch encoding to be facilitated by music training. Thus, a foreign language program is able to foster auditory and music neurocognition, at least in tonal language speakers, in a manner comparable to that of a music program. Our results support the tight coupling of musical and linguistic brain functions in the developing brain as well.
|
40
|
Individuals with autism spectrum disorder are impaired in absolute but not relative pitch and duration matching in speech and song imitation. Autism Res 2021; 14:2355-2372. [PMID: 34214243 DOI: 10.1002/aur.2569] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 05/03/2021] [Accepted: 06/22/2021] [Indexed: 11/08/2022]
Abstract
Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation. However, few studies have identified clear quantitative characteristics of vocal imitation in ASD. This study investigated imitation of speech and song in English-speaking individuals with and without ASD and its modulation by age. Participants consisted of 25 autistic children and 19 autistic adults, who were compared to 25 children and 19 adults with typical development matched on age, gender, musical training, and cognitive abilities. The task required participants to imitate speech and song stimuli with varying pitch and duration patterns. Acoustic analyses of the imitation performance suggested that individuals with ASD were worse than controls at absolute pitch and duration matching for both speech and song imitation, although they performed as well as controls on relative pitch and duration matching. Furthermore, the two groups produced similar numbers of pitch contour, pitch interval, and time errors. Across both groups, sung pitch was imitated more accurately than spoken pitch, whereas spoken duration was imitated more accurately than sung duration. For speech stimuli, children imitated pitch more accurately than adults, whereas age showed no significant relationship to song imitation. These results reveal a vocal imitation deficit across speech and music domains in ASD that is specific to absolute pitch and duration matching. This finding provides evidence for shared mechanisms between speech and song imitation, which involve independent implementation of relative versus absolute features. LAY SUMMARY: Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation of actions and gestures. Characteristics of vocal imitation in ASD remain unclear.
By comparing speech and song imitation, this study shows that individuals with ASD have a vocal imitative deficit that is specific to absolute pitch and duration matching, while performing as well as controls on relative pitch and duration matching, across speech and music domains.
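The distinction between absolute and relative pitch matching can be made concrete with a small sketch (the helper names and this operationalization are illustrative, not the authors' analysis code): absolute matching compares each produced pitch directly to its target, while relative matching compares successive intervals, so an imitation transposed to a different key but melodically faithful fails the first measure yet passes the second.

```python
import math

def semitones(f_hz, ref_hz=440.0):
    """Convert a frequency in Hz to semitones relative to a reference pitch."""
    return 12 * math.log2(f_hz / ref_hz)

def absolute_pitch_error(target_hz, imitation_hz):
    """Mean absolute deviation between target and imitated pitches, in semitones."""
    return sum(abs(semitones(t) - semitones(p))
               for t, p in zip(target_hz, imitation_hz)) / len(target_hz)

def relative_pitch_error(target_hz, imitation_hz):
    """Mean absolute deviation between successive pitch intervals, in semitones.

    A perfectly transposed imitation scores zero here even though its
    absolute error is large, because the intervals are preserved.
    """
    t_ivl = [semitones(b) - semitones(a)
             for a, b in zip(target_hz, target_hz[1:])]
    p_ivl = [semitones(b) - semitones(a)
             for a, b in zip(imitation_hz, imitation_hz[1:])]
    return sum(abs(t - p) for t, p in zip(t_ivl, p_ivl)) / len(t_ivl)

# A three-note target and an imitation transposed up by exactly one semitone:
target = [220.0, 246.9, 261.6]
imitation = [f * 2 ** (1 / 12) for f in target]
# absolute error is 1.0 semitone, relative error is ~0
```

Under this sketch, the ASD group's profile corresponds to elevated `absolute_pitch_error` with control-level `relative_pitch_error`.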
|
41
|
Speech Perception under the Tent: A Domain-general Predictive Role for the Cerebellum. J Cogn Neurosci 2021; 33:1517-1534. [PMID: 34496370 DOI: 10.1162/jocn_a_01729] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The role of the cerebellum in speech perception remains a mystery. Given its uniform architecture, we tested the hypothesis that it implements a domain-general predictive mechanism whose role in speech is determined by connectivity. We collated all neuroimaging studies reporting cerebellar activity in the Neurosynth database (n = 8206). From this set, we found all studies involving passive speech and sound perception (n = 72; 64% speech, 12.5% sounds, 12.5% music, and 11% tones) and speech production and articulation (n = 175). Standard and coactivation neuroimaging meta-analyses were used to compare cerebellar and associated cortical activations between passive perception and production. We found distinct regions of perception- and production-related activity in the cerebellum and regions of perception-production overlap. Each of these regions had distinct patterns of cortico-cerebellar connectivity. To test for domain-generality versus specificity, we identified all psychological and task-related terms in the Neurosynth database that predicted activity in cerebellar regions associated with passive perception and production. Regions in the cerebellum activated by speech perception were associated with domain-general terms related to prediction. One hallmark of predictive processing is metabolic savings (i.e., decreases in neural activity when events are predicted). To test the hypothesis that the cerebellum plays a predictive role in speech perception, we compared cortical activation between studies reporting cerebellar activation during speech perception and those without it. When the cerebellum was active during speech perception, there was far less cortical activation than when it was inactive. The results suggest that the cerebellum implements a domain-general mechanism related to prediction during speech perception.
|
42
|
Abstract
The aim of this paper is to review recent hypotheses on the evolutionary origins of music in Homo sapiens, taking into account the most influential traditional hypotheses. To date, theories derived from evolution have focused primarily on the importance of music in solving specific adaptive problems. The three most influential theoretical concepts have described the evolution of human music in terms of 1) sexual selection, 2) the formation of social bonds, or 3) a byproduct of other adaptations. According to recent proposals, traditional hypotheses are flawed or insufficient for fully explaining the complexity of music in Homo sapiens. This paper critically discusses the three traditional hypotheses of music evolution (music as an effect of sexual selection, as a mechanism of social bonding, and as a byproduct), as well as two recent concepts of music evolution: music as a credible signal and the Music and Social Bonding (MSB) hypothesis.
|
43
|
Effects of intention in the imitation of sung and spoken pitch. PSYCHOLOGICAL RESEARCH 2021; 86:792-807. [PMID: 34014375 DOI: 10.1007/s00426-021-01527-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Accepted: 05/03/2021] [Indexed: 11/29/2022]
Abstract
Pitch content is an important component of song and speech. Previous studies have shown a pronounced advantage for imitation of sung pitch over spoken pitch. However, it is not clear to what extent matching of pitch in production depends on one's intention to imitate pitch. We measured the effects of intention to imitate on matching of produced pitch in both vocal domains. Participants imitated pitch content in speech and song stimuli intentionally ("imitate the pitch") and incidentally ("repeat the words"). Our results suggest that the song advantage exists independently of whether participants explicitly intend to imitate pitch. This result supports the notion that the song advantage reflects pitch salience in the stimulus. On the other hand, participants were more effective at suppressing the imitation of pitch for song than for speech. This second result suggests that it is easier to dissociate phonetic content from pitch in the context of song than in speech. Analyses of individual differences showed that intention to imitate pitch had larger effects for individuals who tended to match pitch overall in production, independent of intentions. Taken together, the results help to illuminate the psychological processes underlying intentional and automatic vocal imitation processes.
|
44
|
Deficits in musical rhythm perception in children with specific learning disabilities. NeuroRehabilitation 2021; 48:187-193. [PMID: 33664156 DOI: 10.3233/nre-208013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND A specific learning disability comes with a cluster of deficits in the neurocognitive domain. Phonological processing deficits have been at the core of different types of specific learning disabilities. In addition to difficulties in phonological processing and cognitive deficits, children with specific learning disability (SLD) are known to have deficits in more innate, non-language-based skills such as musical rhythm processing. OBJECTIVES This paper reviews studies in the area of musical rhythm perception in children with SLD. An attempt was made to shed light on the beneficial effects of music and rhythm-based intervention and their underlying mechanisms. METHODS A hypothesis-driven review of research in the domain of rhythm deficits and rhythm-based intervention in children with SLD was carried out. RESULTS A summary of the reviewed literature highlights that music and language processing have shared neural underpinnings. Children with SLD, in addition to difficulties in language processing and other neurocognitive deficits, are known to have deficits in music and rhythm perception. This is explained against the background of deficits in auditory skills, perceptuo-motor skills, and timing skills. Attempts have been made in the field to understand the effect of music training on children's auditory processing and language development. Music and rhythm-based intervention emerges as a powerful method for targeting language processing and other neurocognitive functions. The need for future studies in this direction is underscored. CONCLUSIONS Suggestions for future research on music-based interventions are discussed.
|
45
|
The Neurophysiological Processing of Music in Children: A Systematic Review With Narrative Synthesis and Considerations for Clinical Practice in Music Therapy. Front Psychol 2021; 12:615209. [PMID: 33935868 PMCID: PMC8081903 DOI: 10.3389/fpsyg.2021.615209] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 03/10/2021] [Indexed: 11/17/2022] Open
Abstract
Introduction: Evidence supporting the use of music interventions to maximize arousal and awareness in adults presenting with a disorder of consciousness continues to grow. However, the brain of a child is not simply a small adult brain, and therefore adult theories are not directly translatable to the pediatric population. The present study aims to synthesize brain imaging data about the neural processing of music in children aged 0-18 years, to form a theoretical basis for music interventions with children presenting with a disorder of consciousness following acquired brain injury. Methods: We conducted a systematic review with narrative synthesis utilizing an adaptation of the methodology developed by Popay and colleagues. Following the development of the narrative that answered the central question "what does brain imaging data reveal about the receptive processing of music in children?", discussion centered on the clinical implications of music therapy with children following acquired brain injury. Results: The narrative synthesis included 46 studies that utilized EEG, MEG, fMRI, and fNIRS scanning techniques in children aged 0-18 years. From birth, musical stimuli elicit distinct but immature electrical responses, with components of the auditory evoked response having longer latencies and more variable amplitudes compared to their adult counterparts. Hemodynamic responses are observed throughout cortical and subcortical structures; however, cortical immaturity impacts musical processing and the localization of function in infants and young children. The processing of complex musical stimuli continues to mature into late adolescence. Conclusion: While the ability to process fundamental musical elements is present from birth, infants and children process music more slowly and utilize different cortical areas compared to adults.
Brain injury in childhood occurs in a period of rapid development and the ability to process music following brain injury will likely depend on pre-morbid musical processing. Further, a significant brain injury may disrupt the developmental trajectory of complex music processing. However, complex music processing may emerge earlier than comparative language processing, and occur throughout a more global circuitry.
|
46
|
The evolution of hierarchical structure building capacity for language and music: a bottom-up perspective. Primates 2021; 63:417-428. [PMID: 33839984 PMCID: PMC9463250 DOI: 10.1007/s10329-021-00905-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2020] [Accepted: 03/26/2021] [Indexed: 12/27/2022]
Abstract
A central property of human language is its hierarchical structure. Humans can flexibly combine elements to build a hierarchical structure expressing rich semantics. A hierarchical structure is also considered as playing a key role in many other human cognitive domains. In music, auditory-motor events are combined into hierarchical pitch and/or rhythm structure expressing affect. How did such a hierarchical structure building capacity evolve? This paper investigates this question from a bottom-up perspective based on a set of action-related components as a shared basis underlying cognitive capacities of nonhuman primates and humans. Especially, I argue that the evolution of hierarchical structure building capacity for language and music is tractable for comparative evolutionary study once we focus on the gradual elaboration of shared brain architecture: the cortico-basal ganglia-thalamocortical circuits for hierarchical control of goal-directed action and the dorsal pathways for hierarchical internal models. I suggest that this gradual elaboration of the action-related brain architecture in the context of vocal control and tool-making went hand in hand with amplification of working memory, and made the brain ready for hierarchical structure building in language and music.
|
47
|
Changes in Spoken and Sung Productions Following Adaptation to Pitch-shifted Auditory Feedback. J Voice 2021; 37:466.e1-466.e15. [PMID: 33745802 DOI: 10.1016/j.jvoice.2021.02.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 02/09/2021] [Accepted: 02/11/2021] [Indexed: 10/21/2022]
Abstract
OBJECTIVE Using the voice to speak or to sing is made possible by remarkably complex sensorimotor processes. Like any other sensorimotor system, the speech motor controller guides its actions toward maximum performance at minimum cost, using available sources of information, among which auditory feedback plays a major role. Manipulation of this feedback forces the speech monitoring system to refine its expectations for further actions. The present study hypothesizes that the duration of this refinement, and the weight applied to different feedback loops, depend on the intended sounds to be produced, namely reading aloud versus singing. MATERIAL AND METHODS We asked participants to sing "Happy Birthday" and read a paragraph of Harry Potter before and after experiencing pitch-shifted feedback. A detailed fundamental frequency (F0) analysis was conducted for each note in the song and each segment in the paragraph (at the level of a sentence, a word, or a vowel) to determine whether some aspects of F0 production changed in response to the pitch perturbations experienced during the adaptation paradigm. RESULTS Our results showed that the change in the degree of F0 drift across the song or the paragraph was the metric most consistent with a carry-over effect of adaptation, and in this regard, reading new material was more influenced by recent remapping than singing. CONCLUSION The motor commands used by (normally hearing) speakers are malleable via altered-feedback paradigms, perhaps more so when reading aloud than when singing. These effects are revealed not through simple indicators such as an overall change in mean F0 or F0 range, but through subtler metrics, such as a drift of the voice pitch across the recordings.
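The "F0 drift" metric in the results can be illustrated with a minimal sketch, assuming drift is operationalized as the least-squares slope of F0 over time (the function name and this formulation are illustrative assumptions, not taken from the paper):

```python
def f0_drift(times_s, f0_hz):
    """Least-squares slope of F0 over time, i.e. pitch drift in Hz per second."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_f = sum(f0_hz) / n
    # Covariance of time and F0, divided by the variance of time.
    cov = sum((t - mean_t) * (f - mean_f) for t, f in zip(times_s, f0_hz))
    var = sum((t - mean_t) ** 2 for t in times_s)
    return cov / var

# A voice drifting steadily downward by 2 Hz per second:
drift = f0_drift([0.0, 1.0, 2.0, 3.0], [200.0, 198.0, 196.0, 194.0])
```

Comparing this slope for recordings made before and after the pitch-shift exposure would then quantify the carry-over effect of adaptation that the abstract describes.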
|
48
|
Comorbidity and cognitive overlap between developmental dyslexia and congenital amusia in children. Neuropsychologia 2021; 155:107811. [PMID: 33647287 DOI: 10.1016/j.neuropsychologia.2021.107811] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 01/19/2021] [Accepted: 02/21/2021] [Indexed: 11/24/2022]
Abstract
Developmental dyslexia and congenital amusia are two specific neurodevelopmental disorders that affect reading and music perception, respectively. Similarities at perceptual, cognitive, and anatomical levels raise the possibility that a common factor is at play in their emergence, albeit in different domains. However, little consideration has been given to what extent they can co-occur. A first adult study suggested a 30% amusia rate in dyslexia and a 25% dyslexia rate in amusia (Couvignou et al., Cognitive Neuropsychology 2019). We present newly acquired data from 38 dyslexic and 38 typically developing children. These were assessed with literacy and phonological tests, as well as with three musical tests: the Montreal Battery of Evaluation of Musical Abilities, a pitch and time change detection task, and a singing task. Overall, about 34% of the dyslexic children were musically impaired, a proportion that is significantly higher than both the estimated 1.5-4% prevalence of congenital amusia in the general population and the rate of 5% observed within the control group. They were mostly affected in the pitch dimension, both in terms of perception and production. Correlations and prediction links were found between pitch processing skills and language measures after partialing out confounding factors. These findings are discussed with regard to cognitive and neural explanatory hypotheses of a comorbidity between dyslexia and amusia.
|
49
|
Abstract
Song in oscine birds (as in human speech and song) relies upon the rare capacity of vocal learning. Transmission can be vertical, horizontal, or oblique. As a rule, memorization and production by a naïve bird are not simultaneous: the long-term storage of song phrases precedes their first vocal rehearsal by months. While a wealth of detail regarding songbird enculturation has been uncovered by focusing on the apprentice, whether observational learning can fully account for the ontogeny of birdsong, or whether there could also be an element of active teaching involved, has remained an open question. Given the paucity of knowledge on animal cultures, I argue for the utility of an inclusive definition of teaching that encourages data be collected across a wide range of taxa. Borrowing insights from musicology, I introduce the Australian pied butcherbird (Cracticus nigrogularis) into the debate surrounding mechanisms of cultural transmission. I probe the relevance and utility of mentalistic, culture-based, and functionalist approaches to teaching in this species. Sonographic analysis of birdsong recordings and observational data (including photographs) of pied butcherbird behavior at one field site provide evidence that I assess based on criteria laid down by Caro and Hauser, along with later refinements to their functionalist definition. The candidate case of teaching reviewed here adds to a limited but growing body of reports supporting the notion that teaching may be more widespread than is currently realized. Nonetheless, I describe the challenges of confirming that learning has occurred in songbird pupils, given the delay between vocal instruction and production, as well as the low status accorded to anecdote and other observational evidence commonly mustered in instances of purported teaching. As a corrective, I press for an emphasis on biodiversity that will guide the study of teaching beyond human accounts and intractable discipline-specific burdens of proof.
|
50
|
Musical experience may help the brain respond to second language reading. Neuropsychologia 2020; 148:107655. [DOI: 10.1016/j.neuropsychologia.2020.107655] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 10/02/2020] [Accepted: 10/12/2020] [Indexed: 02/05/2023]
|