1
Kachlicka M, Patel AD, Liu F, Tierney A. Weighting of cues to categorization of song versus speech in tone-language and non-tone-language speakers. Cognition 2024; 246:105757. PMID: 38442588. DOI: 10.1016/j.cognition.2024.105757.
Abstract
One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.
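The cue measures named in this abstract (small pitch intervals, flat within-syllable pitch contours, steady beats) can be illustrated computationally. The sketch below is not the authors' analysis code; the f0 track, frame rate, and syllable boundaries are assumed inputs from an external pitch tracker and aligner, and the reference pitch of 100 Hz is an arbitrary choice.

```python
import numpy as np

def speech_to_song_cues(f0_hz, frame_rate, syllables):
    """Illustrative cue measures for the speech-to-song illusion.

    f0_hz:      1-D array of fundamental-frequency estimates (Hz), one per
                frame, with unvoiced frames as np.nan (from a pitch tracker).
    frame_rate: frames per second of the f0 track.
    syllables:  list of (start_sec, end_sec) syllable boundaries.
    """
    semitones = 12 * np.log2(f0_hz / 100.0)   # log-frequency scale (re: 100 Hz)
    t = np.arange(len(f0_hz)) / frame_rate

    slopes, medians = [], []
    for start, end in syllables:
        idx = (t >= start) & (t < end) & ~np.isnan(semitones)
        if idx.sum() >= 2:
            # Within-syllable pitch slope: linear fit over the syllable
            # (semitones/sec); flatter contours are more song-like.
            slope, _ = np.polyfit(t[idx], semitones[idx], 1)
            slopes.append(abs(slope))
            medians.append(np.median(semitones[idx]))

    # Interval size: absolute pitch jump between successive syllables
    # (semitones); smaller intervals are more song-like.
    intervals = np.abs(np.diff(medians))

    # Beat steadiness: coefficient of variation of inter-syllable onset
    # intervals; lower values mean a steadier underlying beat.
    onsets = np.array([s for s, _ in syllables])
    iois = np.diff(onsets)
    cv_ioi = np.std(iois) / np.mean(iois)

    return {"mean_abs_slope_st_per_s": float(np.mean(slopes)),
            "mean_interval_st": float(np.mean(intervals)),
            "ioi_cv": float(cv_ioi)}
```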
Affiliation(s)
- Magdalena Kachlicka
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, United Kingdom
- Aniruddh D Patel
- Department of Psychology, Tufts University, 419 Boston Ave, Medford, USA; Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research, 661 University Avenue, Toronto, Canada
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights, Reading, United Kingdom
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, United Kingdom
2
Zhao C, Ong JH, Veic A, Patel AD, Jiang C, Fogel AR, Wang L, Hou Q, Das D, Crasto C, Chakrabarti B, Williams TI, Loutrari A, Liu F. Predictive processing of music and language in autism: Evidence from Mandarin and English speakers. Autism Res 2024. PMID: 38651566. DOI: 10.1002/aur.3133.
Abstract
Atypical predictive processing has been associated with autism across multiple domains, based mainly on artificial antecedents and consequents. As structured sequences where expectations derive from implicit learning of combinatorial principles, language and music provide naturalistic stimuli for investigating predictive processing. In this study, we matched melodic and sentence stimuli in cloze probabilities and examined musical and linguistic prediction in Mandarin-speaking (Experiment 1) and English-speaking (Experiment 2) autistic and non-autistic individuals using both production and perception tasks. In the production tasks, participants listened to unfinished melodies/sentences and then produced the final notes/words to complete these items. In the perception tasks, participants provided expectedness ratings of the completed melodies/sentences based on the most frequent notes/words in the norms. Experiment 1 showed intact musical prediction but atypical linguistic prediction in autism in a Mandarin-speaking sample in which the groups differed in musical training experience and receptive vocabulary skills; this group difference disappeared in the more closely matched sample of English speakers in Experiment 2. These findings suggest the importance of taking an individual-differences approach when investigating predictive processing in music and language in autism, as difficulty with prediction in autism may not reflect a generalized problem with predicting any type of complex sequence.
Affiliation(s)
- Chen Zhao
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Jia Hoong Ong
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Anamarija Veic
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, Massachusetts, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Allison R Fogel
- Department of Psychology, Tufts University, Medford, Massachusetts, USA
- Li Wang
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Qingqi Hou
- Department of Music and Dance, Nanjing Normal University of Special Education, Nanjing, China
- Dipsikha Das
- School of Psychology, Keele University, Staffordshire, UK
- Cara Crasto
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Bhismadev Chakrabarti
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Tim I Williams
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Ariadne Loutrari
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
3
Kathios N, Patel AD, Loui P. Musical anhedonia, timbre, and the rewards of music listening. Cognition 2024; 243:105672. PMID: 38086279. DOI: 10.1016/j.cognition.2023.105672.
Abstract
Pleasure in music has been linked to predictive coding of melodic and rhythmic patterns, subserved by connectivity between regions in the brain's auditory and reward networks. Specific musical anhedonics derive little pleasure from music and have altered auditory-reward connectivity, but no difficulties with music perception abilities and no generalized physical anhedonia. Recent research suggests that specific musical anhedonics experience pleasure in nonmusical sounds, suggesting that the implicated brain pathways may be specific to music reward. However, this work used sounds with clear real-world sources (e.g., babies laughing, crowds cheering), so positive hedonic responses could be based on the referents of these sounds rather than the sounds themselves. We presented specific musical anhedonics and matched controls with short, isolated, pleasing and displeasing synthesized sounds of varying timbres with no clear real-world referents. While the two groups found displeasing sounds equally displeasing, the musical anhedonics gave substantially lower pleasure ratings to the pleasing sounds, indicating that their sonic anhedonia is not limited to musical rhythms and melodies. Furthermore, across a large sample of participants, mean pleasure ratings for pleasing synthesized sounds predicted significant and similar variance in six dimensions of musical reward considered to be relatively independent, suggesting that pleasure in sonic timbres plays a role in eliciting reward-related responses to music. We replicate the earlier findings of preserved pleasure ratings for semantically referential sounds in musical anhedonics and find that pleasure ratings of semantic referents, when presented without sounds, correlated with ratings for the sounds themselves. This association was stronger in musical anhedonics than in controls, suggesting the use of semantic knowledge as a compensatory mechanism for affective sound processing. Our results indicate that specific musical anhedonia is not entirely specific to melodic and rhythmic processing, and suggest that timbre merits further research as a source of pleasure in music.
Affiliation(s)
- Nicholas Kathios
- Dept. of Psychology, Northeastern University, United States of America
- Aniruddh D Patel
- Dept. of Psychology, Tufts University, United States of America; Program in Brain Mind and Consciousness, Canadian Institute for Advanced Research, Canada
- Psyche Loui
- Dept. of Psychology, Northeastern University, United States of America; Dept. of Music, Northeastern University, United States of America
4
Rouse AA, Patel AD, Wainapel S, Kao MH. Sex differences in vocal learning ability in songbirds are linked with differences in flexible rhythm pattern perception. Anim Behav 2023; 203:193-206. PMID: 37842009. PMCID: PMC10569135. DOI: 10.1016/j.anbehav.2023.05.001.
Abstract
Humans readily recognize familiar rhythmic patterns, such as isochrony (equal timing between events) across a wide range of rates. This reflects a facility with perceiving the relative timing of events, not just absolute interval durations. Several lines of evidence suggest this ability is supported by precise temporal predictions arising from forebrain auditory-motor interactions. We have shown previously that male zebra finches, Taeniopygia guttata, which possess specialized auditory-motor networks and communicate with rhythmically patterned sequences, share our ability to flexibly recognize isochrony across rates. To test the hypothesis that flexible rhythm pattern perception is linked to vocal learning, we ask whether female zebra finches, which do not learn to sing, can also recognize global temporal patterns. We find that females can flexibly recognize isochrony across a wide range of rates but perform slightly worse than males on average. These findings are consistent with recent work showing that while females have reduced forebrain song regions, the overall network connectivity of vocal premotor regions is similar to males and may support predictions of upcoming events. Comparative studies of male and female songbirds thus offer an opportunity to study how individual differences in auditory-motor connectivity influence perception of relative timing, a hallmark of human music perception.
Affiliation(s)
- Andrew A. Rouse
- Department of Psychology, Tufts University, Medford, MA, U.S.A
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, U.S.A
- Program in Brain, Mind and Consciousness, Canadian Institute for Advanced Research, Toronto, ON, Canada
- Mimi H. Kao
- Department of Biology, Tufts University, Medford, MA, U.S.A
- Graduate Program in Neuroscience, Graduate School of Biomedical Sciences, Tufts University School of Medicine, Boston, MA, U.S.A
5
Abstract
Language and music rely on complex sequences organized according to syntactic principles that are implicitly understood by enculturated listeners. Across both domains, syntactic processing involves predicting and integrating incoming elements into higher-order structures. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003), musical and linguistic syntactic processing rely on shared resources for integrating incoming elements (e.g., chords, words) into unfolding sequences. One prediction of the SSIRH is that people with agrammatic aphasia (whose deficits are due to syntactic integration problems) should present with deficits in processing musical syntax. We report the first neural study to test this prediction: event-related potentials (ERPs) were measured in response to musical and linguistic syntactic violations in a group of people with agrammatic aphasia (n = 7) compared to a group of healthy controls (n = 14) using an acceptability judgement task. The groups were matched with respect to age, education, and extent of musical training. Violations were based on morpho-syntactic relations in sentences and harmonic relations in chord sequences. Both groups presented with a significant P600 response to syntactic violations across both domains. The aphasic participants presented with a reduced-amplitude posterior P600 compared to the healthy adults in response to linguistic, but not musical, violations. Participants with aphasia did, however, present with larger frontal positivities in response to violations in both domains. Intriguingly, extent of musical training was associated with larger posterior P600 responses to syntactic violations of language and music in both groups. Overall, these findings are not consistent with the predictions of the SSIRH, and instead suggest that linguistic, but not musical, syntactic processing may be selectively impaired in stroke-induced agrammatic aphasia. However, the findings also suggest a relationship between musical training and linguistic syntactic processing, which may have clinical implications for people with aphasia and motivates more research on the relationship between these two domains.
Affiliation(s)
- Brianne Chiappetta
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, Canada
- Cynthia K. Thompson
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Northwestern University, Chicago, IL, USA
- Department of Neurology, Northwestern University, Chicago, IL, USA
6
Tierney A, Patel AD, Jasmin K, Breen M. Individual differences in perception of the speech-to-song illusion are linked to musical aptitude but not musical training. J Exp Psychol Hum Percept Perform 2021; 47:1681-1697. PMID: 34881953. DOI: 10.1037/xhp0000968.
Abstract
In the speech-to-song illusion, certain spoken phrases are perceived as sung after repetition. One possible explanation for this increase in musicality is that, as phrases are repeated, lexical activation dies off, enabling listeners to focus on the melodic and rhythmic characteristics of stimuli and assess them for the presence of musical structure. Here we tested the idea that perception of the illusion requires implicit assessment of melodic and rhythmic structure by presenting individuals with phrases that tend to be perceived as song when repeated, as well as phrases that tend to continue to be perceived as speech when repeated, measuring the strength of the illusion as the rating difference between these two stimulus categories after repetition. Illusion strength varied widely and stably between listeners, with large individual differences and high split-half reliability, suggesting that not all listeners are equally able to detect musical structure in speech. Although variability in illusion strength was unrelated to degree of musical training, participants who perceived the illusion more strongly were proficient in several musical skills, including beat perception, tonality perception, and selective attention to pitch. These findings support models of the speech-to-song illusion in which experience of the illusion is based on detection of musical characteristics latent in spoken phrases.
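A hypothetical sketch of the two summary statistics this abstract relies on: per-listener illusion strength (rating difference between transforming and non-transforming stimuli after repetition) and split-half reliability with the Spearman-Brown correction. Array shapes and variable names are assumptions; equal stimulus counts per category are assumed for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def illusion_strength(ratings_song, ratings_speech):
    """Per-listener illusion strength: mean post-repetition rating for
    stimuli that tend to transform into song minus mean rating for stimuli
    that tend to stay speech-like. Rows = listeners, columns = stimuli."""
    return ratings_song.mean(axis=1) - ratings_speech.mean(axis=1)

def split_half_reliability(ratings_song, ratings_speech, n_splits=1000):
    """Randomly split stimuli into halves, compute illusion strength on
    each half, correlate the two across listeners, and apply the
    Spearman-Brown correction for the halved test length."""
    n_stim = ratings_song.shape[1]   # assumes equal counts per category
    rs = []
    for _ in range(n_splits):
        perm = rng.permutation(n_stim)
        a, b = perm[: n_stim // 2], perm[n_stim // 2 :]
        s_a = illusion_strength(ratings_song[:, a], ratings_speech[:, a])
        s_b = illusion_strength(ratings_song[:, b], ratings_speech[:, b])
        rs.append(np.corrcoef(s_a, s_b)[0, 1])
    r = float(np.mean(rs))
    return 2 * r / (1 + r)   # Spearman-Brown correction
```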
Affiliation(s)
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London
- Kyle Jasmin
- Department of Psychological Sciences, Birkbeck, University of London
- Mara Breen
- Department of Psychology, Mount Holyoke College
7
Abstract
The human capacity to synchronize movements to an auditory beat is central to musical behaviour and to debates over the evolution of human musicality. Have humans evolved any neural specializations for music processing, or does music rely entirely on brain circuits that evolved for other reasons? The vocal learning and rhythmic synchronization hypothesis proposes that our ability to move in time with an auditory beat in a precise, predictive and tempo-flexible manner originated in the neural circuitry for complex vocal learning. In the 15 years since the hypothesis was proposed, a variety of studies have supported it. However, one study has provided a significant challenge to the hypothesis. Furthermore, it is increasingly clear that vocal learning is not a binary trait animals have or lack, but varies more continuously across species. In the light of these developments and of recent progress in the neurobiology of beat processing and of vocal learning, the current paper revises the vocal learning hypothesis. It argues that an advanced form of vocal learning acts as a preadaptation for sporadic beat perception and synchronization (BPS), providing intrinsic rewards for predicting the temporal structure of complex acoustic sequences. It further proposes that in humans, mechanisms of gene-culture coevolution transformed this preadaptation into a genuine neural adaptation for sustained BPS. The larger significance of this proposal is that it outlines a hypothesis of cognitive gene-culture coevolution which makes testable predictions for neuroscience, cross-species studies and genetics. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research, Toronto, Canada
8
Sims BM, Patel AD, Garnica BG, Faraj MT, Tang A, Parsons T, Hoegler JJ, Day CS. Effect of elective surgery cancellations during the COVID-19 pandemic on patients' activity, anxiety and pain. Br J Surg 2021; 108:e392-e393. PMID: 34611698. PMCID: PMC8500095. DOI: 10.1093/bjs/znab318.
Affiliation(s)
- B M Sims
- Wayne State University School of Medicine, Detroit, Michigan, USA
- A D Patel
- Wayne State University School of Medicine, Detroit, Michigan, USA
- B G Garnica
- Wayne State University School of Medicine, Detroit, Michigan, USA
- M T Faraj
- Oakland University William Beaumont School of Medicine, Rochester, Michigan, USA
- A Tang
- Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, USA
- T Parsons
- Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, USA
- J J Hoegler
- Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, USA
- C S Day
- Wayne State University School of Medicine, Detroit, Michigan, USA; Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, USA
9
Regev M, Halpern AR, Owen AM, Patel AD, Zatorre RJ. Mapping Specific Mental Content during Musical Imagery. Cereb Cortex 2021; 31:3622-3640. PMID: 33749742. DOI: 10.1093/cercor/bhab036.
Abstract
Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are compared to those during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery as during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) the same task while tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor to sensory influences in auditory processing.
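Inter-subject correlation (ISC) compares each listener's regional time course with the average of the other listeners. A minimal leave-one-out version, assuming a subjects-by-timepoints array for one region and one melody (not the authors' exact pipeline):

```python
import numpy as np

def leave_one_out_isc(data):
    """data: array of shape (n_subjects, n_timepoints) holding one region's
    response time course for one melody. Returns one ISC value per subject:
    the Pearson correlation between that subject's time course and the
    mean time course of all remaining subjects."""
    n_subjects = data.shape[0]
    isc = np.empty(n_subjects)
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(data[s], others)[0, 1]
    return isc

# Melody-specific reinstatement can then be probed by checking that imagery
# time courses correlate more strongly with perception time courses of the
# same melody than with those of different melodies.
```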
Affiliation(s)
- Mor Regev
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada
- Andrea R Halpern
- Department of Psychology, Bucknell University, Lewisburg, PA 17837, USA
- Adrian M Owen
- Brain and Mind Institute, Department of Psychology and Department of Physiology and Pharmacology, Western University, London, ON N6A 5B7, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program
- Aniruddh D Patel
- Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program; Department of Psychology, Tufts University, Medford, MA 02155, USA
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program
10
Cannon JJ, Patel AD. How Beat Perception Co-opts Motor Neurophysiology. Trends Cogn Sci 2020; 25:137-150. PMID: 33353800. DOI: 10.1016/j.tics.2020.11.002.
Abstract
Beat perception offers cognitive scientists an exciting opportunity to explore how cognition and action are intertwined in the brain even in the absence of movement. Many believe the motor system predicts the timing of beats, yet current models of beat perception do not specify how this is neurally implemented. Drawing on recent insights into the neurocomputational properties of the motor system, we propose that beat anticipation relies on action-like processes consisting of precisely patterned neural time-keeping activity in the supplementary motor area (SMA), orchestrated and sequenced by activity in the dorsal striatum. In addition to synthesizing recent advances in cognitive science and motor neuroscience, our framework provides testable predictions to guide future work.
Affiliation(s)
- Jonathan J Cannon
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, MA, USA; Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
11
Hickey P, Barnett-Young A, Patel AD, Race E. Environmental rhythms orchestrate neural activity at multiple stages of processing during memory encoding: Evidence from event-related potentials. PLoS One 2020; 15:e0234668. PMID: 33206657. PMCID: PMC7673489. DOI: 10.1371/journal.pone.0234668.
Abstract
Accumulating evidence suggests that rhythmic temporal structures in the environment influence memory formation. For example, stimuli that appear in synchrony with the beat of background environmental rhythms are better remembered than stimuli that appear out-of-synchrony with the beat. This rhythmic modulation of memory has been linked to entrained neural oscillations which are proposed to act as a mechanism of selective attention that prioritize processing of events that coincide with the beat. However, it is currently unclear whether rhythm influences memory formation by influencing early (sensory) or late (post-perceptual) processing of stimuli. The current study used stimulus-locked event-related potentials (ERPs) to investigate the locus of stimulus processing at which rhythm temporal cues operate in the service of memory formation. Participants viewed a series of visual objects that either appeared in-synchrony or out-of-synchrony with the beat of background music and made a semantic classification (living/non-living) for each object. Participants' memory for the objects was then tested (in silence). The timing of stimulus presentation during encoding (in-synchrony or out-of-synchrony with the background beat) influenced later ERPs associated with post-perceptual selection and orienting attention in time rather than earlier ERPs associated with sensory processing. The magnitude of post-perceptual ERPs also differed according to whether or not participants demonstrated a mnemonic benefit for in-synchrony compared to out-of-synchrony stimuli, and was related to the magnitude of the rhythmic modulation of memory performance across participants. These results support two prominent theories in the field, the Dynamic Attending Theory and the Oscillation Selection Hypothesis, which propose that neural responses to rhythm act as a core mechanism of selective attention that optimize processing at specific moments in time. Furthermore, they reveal that in addition to acting as a mechanism of early attentional selection, rhythm influences later, post-perceptual cognitive processes as events are transformed into memory.
Affiliation(s)
- Paige Hickey
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Annie Barnett-Young
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario, Canada
- Elizabeth Race
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
12
Abstract
Spontaneous movement to music occurs in every human culture and is a foundation of dance [1]. This response to music is absent in most species (including monkeys), yet it occurs in parrots, perhaps because they (like humans, and unlike monkeys) are vocal learners whose brains contain strong auditory-motor connections, conferring sophisticated audiomotor processing abilities [2,3]. Previous research has shown that parrots can bob their heads or lift their feet in synchrony with a musical beat [2,3], but humans move to music using a wide variety of movements and body parts. Is this also true of parrots? If so, it would constrain theories of how movement to music is controlled by parrot brains. Specifically, as head bobbing is part of parrot courtship displays [4] and foot lifting is part of locomotion, these may be innate movements controlled by central pattern generators which become entrained by auditory rhythms, without the involvement of complex motor planning. This would be unlike humans, where movement to music engages cortical networks including frontal and parietal areas [5]. Rich diversity in parrot movement to music would suggest a strong contribution of forebrain regions to this behavior, perhaps including motor learning regions abutting the complex vocal-learning 'shell' regions that are unique to parrots among vocal learning birds [6]. Here we report that a sulphur-crested cockatoo (Cacatua galerita eleonora) responds to music with remarkably diverse spontaneous movements employing a variety of body parts, and suggest why parrots share this response with humans.
Affiliation(s)
- R Joanne Jao Keehn
- Brain Development Imaging Labs, Department of Psychology, San Diego State University, 6363 Alvarado Ct. #200, San Diego, CA 92120, USA
- John R Iversen
- University of California San Diego, Institute for Neural Computation, 9500 Gilman Dr. #0559, La Jolla, CA 92093, USA
- Irena Schulz
- Bird Lovers Only Rescue Service Inc., Duncan, SC 29334, USA
- Aniruddh D Patel
- Department of Psychology, Tufts University, 490 Boston Ave., Medford, MA 02155, USA; Azrieli Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), MaRS Centre, West Tower, 661 University Ave., Suite 505, Toronto, ON M5G 1M1, Canada; Radcliffe Institute for Advanced Study, Harvard University, 10 Garden St., Cambridge, MA 02138, USA
13
Hickey P, Merseal H, Patel AD, Race E. Memory in time: Neural tracking of low-frequency rhythm dynamically modulates memory formation. Neuroimage 2020; 213:116693. DOI: 10.1016/j.neuroimage.2020.116693.
14
Johndro H, Jacobs L, Patel AD, Race E. Temporal predictions provided by musical rhythm influence visual memory encoding. Acta Psychol (Amst) 2019; 200:102923. PMID: 31759191. DOI: 10.1016/j.actpsy.2019.102923.
Abstract
Selective attention plays a key role in determining what aspects of our environment are encoded into long-term memory. Auditory rhythms with a regular beat provide temporal expectations that entrain attention and facilitate perception of visual stimuli aligned with the beat. The current study investigated whether entrainment to background auditory rhythms also facilitates higher-level cognitive functions such as episodic memory. In a series of experiments, we manipulated temporal attention through the use of rhythmic, instrumental music. In Experiments 1A and 1B, we found that background musical rhythm influenced the encoding of visual targets into memory, evident in enhanced subsequent memory for targets that appeared in-synchrony compared to out-of-synchrony with the background beat. Response times at encoding did not differ for in-synchrony compared to out-of-synchrony stimuli, suggesting that the rhythmic modulation of memory does not simply reflect rhythmic effects on perception and action. Experiment 2 investigated whether rhythmic effects on response times emerge when task procedures more closely match prior studies that have demonstrated significant auditory entrainment effects. Responses were faster for in-synchrony compared to out-of-synchrony stimuli when participants performed a more perceptually-oriented task that did not contain intervening recognition memory tests, suggesting that rhythmic effects on perception and action depend on the nature of the task demands. Together, these results support the hypothesis that rhythmic temporal regularities provided by background music can entrain attention and influence the encoding of visual stimuli into memory.
Affiliation(s)
- Aniruddh D Patel
- Tufts University, United States of America; Azrieli Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Canada
15
Cheever T, Taylor A, Finkelstein R, Edwards E, Thomas L, Bradt J, Holochwost SJ, Johnson JK, Limb C, Patel AD, Tottenham N, Iyengar S, Rutter D, Fleming R, Collins FS. NIH/Kennedy Center Workshop on Music and the Brain: Finding Harmony. Neuron 2018; 97:1214-1218. PMID: 29566791. DOI: 10.1016/j.neuron.2018.02.004.
Abstract
The National Institutes of Health and John F. Kennedy Center for the Performing Arts convened a panel of experts to discuss the current state of research on music and the brain. The panel generated research recommendations to accelerate the study of music's effects on the brain and the implications for human health.
Affiliation(s)
- Anna Taylor
- National Institutes of Health, Bethesda, MD, USA
- Laura Thomas
- National Institutes of Health, Bethesda, MD, USA
- Joke Bradt
- Department of Creative Arts Therapies, Drexel University, Philadelphia, PA, USA
- Julene K Johnson
- Institute for Health & Aging, University of California, San Francisco, CA, USA
- Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, CA, USA
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, MA, USA; Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
- Nim Tottenham
- Department of Psychology, Columbia University, NY, USA
- Sunil Iyengar
- National Endowment for the Arts, Washington, DC, USA
- Deborah Rutter
- John F. Kennedy Center for the Performing Arts, Washington, DC, USA
- Renée Fleming
- John F. Kennedy Center for the Performing Arts, Washington, DC, USA
16
Abstract
In the "speech-to-song illusion," certain spoken phrases are heard as highly song-like when isolated from context and repeated. This phenomenon occurs to a greater degree for some stimuli than for others, suggesting that particular cues prompt listeners to perceive a spoken phrase as song. Here we investigated the nature of these cues across four experiments. In Experiment 1, participants were asked to rate how song-like spoken phrases were after each of eight repetitions. Initial ratings were correlated with the consistency of an underlying beat and within-syllable pitch slope, while rating change was linked to beat consistency, within-syllable pitch slope, and melodic structure. In Experiment 2, the within-syllable pitch slope of the stimuli was manipulated, and this manipulation changed the extent to which participants heard certain stimuli as more musical than others. In Experiment 3, the extent to which the pitch sequences of a phrase fit a computational model of melodic structure was altered, but this manipulation did not have a significant effect on musicality ratings. In Experiment 4, the consistency of intersyllable timing was manipulated, but this manipulation did not have an effect on the change in perceived musicality after repetition. Our methods provide a new way of studying the causal role of specific acoustic features in the speech-to-song illusion via subtle acoustic manipulations of speech, and show that listeners can rapidly (and implicitly) assess the degree to which nonmusical stimuli contain musical structure. (PsycINFO Database Record
Affiliation(s)
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London
- Mara Breen
- Department of Psychology, Mount Holyoke College
17
Morgan E, Fogel A, Nair A, Patel AD. Statistical learning and Gestalt-like principles predict melodic expectations. Cognition 2019; 189:23-34. PMID: 30913527. DOI: 10.1016/j.cognition.2018.12.015.
Abstract
Expectation, or prediction, has become a major theme in cognitive science. Music offers a powerful system for studying how expectations are formed and deployed in the processing of richly structured sequences that unfold rapidly in time. We ask to what extent expectations about an upcoming note in a melody are driven by two distinct factors: Gestalt-like principles grounded in the auditory system (e.g., a preference for subsequent notes to move in small intervals), and statistical learning of melodic structure. We use multinomial regression modeling to evaluate the predictions of computationally implemented models of melodic expectation against behavioral data from a musical cloze task, in which participants hear a novel melodic opening and are asked to sing the note they expect to come next. We demonstrate that both Gestalt-like principles and statistical learning contribute to listeners' online expectations. In conjunction with results in the domain of language, our results point to a larger-than-previously-assumed role for statistical learning in predictive processing across cognitive domains, even in cases that seem potentially governed by a smaller set of theoretically motivated rules. However, we also find that both of the models tested here leave much variance in the human data unexplained, pointing to a need for models of melodic expectation that incorporate underlying hierarchical and/or harmonic structure. We propose that our combined behavioral (melodic cloze) and modeling (multinomial regression) approach provides a powerful method for further testing and development of models of melodic expectation.
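The multinomial-regression idea can be sketched as a softmax over candidate continuations, each scored by a weighted sum of predictors (for instance, a Gestalt-style pitch-proximity score and a statistical-learning probability). The data format and function names below are placeholders, not the published models:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(w, trials):
    """trials: list of (features, chosen) pairs, where features is an
    (n_candidates, n_predictors) array of model scores for each candidate
    next note, and chosen indexes the note the participant actually sang.
    The weights w mix the predictors inside a softmax."""
    nll = 0.0
    for features, chosen in trials:
        logits = features @ w
        logits -= logits.max()                       # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum())
        nll -= log_probs[chosen]
    return nll

def fit_weights(trials, n_predictors):
    """Maximum-likelihood estimate of the predictor weights; the relative
    magnitudes indicate how strongly each factor (Gestalt-like vs.
    statistical) drives listeners' sung continuations."""
    res = minimize(neg_log_likelihood, x0=np.zeros(n_predictors),
                   args=(trials,), method="BFGS")
    return res.x
```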
Affiliation(s)
- Emily Morgan
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States; Department of Linguistics, University of California, Davis, United States.
- Allison Fogel
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States
- Anjali Nair
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States
- Aniruddh D Patel
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States; Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Canada; Radcliffe Institute for Advanced Studies, Harvard University, United States
18
Ozernov-Palchik O, Wolf M, Patel AD. Relationships between early literacy and nonlinguistic rhythmic processes in kindergarteners. J Exp Child Psychol 2018; 167:354-368. PMID: 29227852. DOI: 10.1016/j.jecp.2017.11.009.
Abstract
A growing number of studies report links between nonlinguistic rhythmic abilities and certain linguistic abilities, particularly phonological skills. The current study investigated the relationship between nonlinguistic rhythmic processing, phonological abilities, and early literacy abilities in kindergarteners. A distinctive aspect of the current work was the exploration of whether processing of different types of rhythmic patterns is differentially related to kindergarteners' phonological and reading-related abilities. Specifically, we examined the processing of metrical versus nonmetrical rhythmic patterns, that is, patterns capable of being subdivided into equal temporal intervals or not (Povel & Essens, 1985). This is an important comparison because most music involves metrical sequences, in which rhythm often has an underlying temporal grid of isochronous units. In contrast, nonmetrical sequences are arguably more typical of speech rhythm, which is temporally structured but does not involve an underlying grid of equal temporal units. A rhythm discrimination app with metrical and nonmetrical patterns was administered to 74 kindergarteners in conjunction with cognitive and preliteracy measures. Findings support a relationship among rhythm perception, phonological awareness, and letter-sound knowledge (an essential precursor of reading). A mediation analysis revealed that the association between rhythm perception and letter-sound knowledge is mediated through phonological awareness. Furthermore, metrical perception accounted for unique variance in letter-sound knowledge above all other language and cognitive measures. These results point to a unique role for temporal regularity processing in the association between musical rhythm and literacy in young children.
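A generic sketch of the mediation logic reported here (rhythm perception -> phonological awareness -> letter-sound knowledge), using the classic product-of-paths estimate with a percentile bootstrap confidence interval. Variable names are placeholders and this is not the authors' exact statistical model:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-paths mediation estimate: a = effect of X on M;
    b = effect of M on Y controlling for X; indirect effect = a * b."""
    a = np.polyfit(x, m, 1)[0]                    # X -> M path
    X = np.column_stack([x, m, np.ones_like(x)])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # M -> Y path, controlling X
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect; the mediation is
    deemed significant if the interval excludes zero."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample cases with replacement
        boots.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```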
Affiliation(s)
- Ola Ozernov-Palchik
- Eliot Pearson Department of Child Study and Human Development, Tufts University, Medford, MA 02155, USA.
- Maryanne Wolf
- Eliot Pearson Department of Child Study and Human Development, Tufts University, Medford, MA 02155, USA
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, MA 02155, USA; Azrieli Program in Brain, Mind & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
19
Takeya R, Patel AD, Tanaka M. Temporal Generalization of Synchronized Saccades Beyond the Trained Range in Monkeys. Front Psychol 2018; 9:2172. PMID: 30459693. PMCID: PMC6232453. DOI: 10.3389/fpsyg.2018.02172.
Abstract
Synchronized movements with external periodic rhythms, such as dancing to a beat, are commonly observed in daily life. Although it has been well established that some vocal learning species (including parrots and humans) spontaneously develop this ability, it has only recently been shown that monkeys are also capable of predictive and tempo-flexible synchronization to periodic stimuli. In our previous study, monkeys were trained to make predictive saccades for alternately presented visual stimuli at fixed stimulus onset asynchronies (SOAs) to obtain a liquid reward. The monkeys generalized predictive synchronization to novel SOAs in the middle of the trained range, suggesting a capacity for tempo-flexible synchronization. However, it is possible that when encountering a novel tempo, the monkeys might sample learned saccade sequences from those for the short and long SOAs so that the mean saccade interval matched the untrained SOA. To eliminate this possibility, in the current study we tested monkeys on novel SOAs outside the trained range. Animals were trained to generate synchronized eye movements for 600 and 900-ms SOAs for a few weeks, and then tested on longer SOAs. The accuracy and precision of predictive saccades for one untrained SOA (1200 ms) were comparable to those for the trained conditions. On the other hand, the variance of predictive saccade latency and the proportion of reactive saccades increased significantly in the longer SOA conditions (1800 and 2400 ms), indicating that temporal prediction of periodic stimuli was difficult in this range, similar to previous results on synchronized tapping in humans. Our results suggest that monkeys might share similar synchronization mechanisms with humans, which can be subject to physiological examination in future studies.
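A common convention in this literature classifies a saccade as predictive when its latency relative to stimulus onset is shorter than a visual reaction-time cutoff, since such a movement must have been planned before the stimulus appeared. A sketch under that assumption, with the cutoff value as a placeholder parameter rather than the study's criterion:

```python
import numpy as np

def classify_saccades(latencies_ms, reactive_cutoff_ms=100.0):
    """Split saccade latencies (relative to stimulus onset; negative values
    are anticipatory) into predictive vs reactive, and summarize accuracy
    (mean predictive latency) and precision (its variability)."""
    latencies_ms = np.asarray(latencies_ms, dtype=float)
    predictive = latencies_ms < reactive_cutoff_ms
    return {
        # Higher values in long-SOA conditions would mirror the reported
        # increase in reactive saccades at 1800 and 2400 ms.
        "prop_reactive": float(1.0 - predictive.mean()),
        "mean_predictive_ms": float(latencies_ms[predictive].mean()),
        "sd_predictive_ms": float(latencies_ms[predictive].std()),
    }
```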
Affiliation(s)
- Ryuji Takeya
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, Japan
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, MA, United States; Azrieli Program in Brain, Mind & Consciousness, Canadian Institute for Advanced Research, Toronto, ON, Canada
- Masaki Tanaka
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, Japan
20
Ozernov-Palchik O, Patel AD. Musical rhythm and reading development: does beat processing matter? Ann N Y Acad Sci 2018; 1423:166-175. PMID: 29781084. DOI: 10.1111/nyas.13853.
Abstract
There is mounting evidence for links between musical rhythm processing and reading-related cognitive skills, such as phonological awareness. This may be because music and speech are rhythmic: both involve processing complex sound sequences with systematic patterns of timing, accent, and grouping. Yet, there is a salient difference between musical and speech rhythm: musical rhythm is often beat-based (based on an underlying grid of equal time intervals), while speech rhythm is not. Thus, the role of beat-based processing in the reading-rhythm relationship is not clear. Is there a distinct relation between beat-based processing mechanisms and reading-related language skills, or is the rhythm-reading link entirely due to shared mechanisms for processing non-beat-based aspects of temporal structure? We discuss recent evidence for a distinct link between beat-based processing and early reading abilities in young children, and suggest experimental designs that would allow one to further methodically investigate this relationship. We propose that beat-based processing taps into a listener's ability to use rich contextual regularities to form predictions, a skill important for reading development.
Affiliation(s)
- Ola Ozernov-Palchik
- Eliot Pearson Department of Child Study and Human Development, Tufts University, Medford, Massachusetts
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, Massachusetts
- Azrieli Program in Brain, Mind and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario, Canada
21
Patel AD, Sharma M, Ramasubramanian N, Ganesh R, Chattopadhyay PK. A new multi-line cusp magnetic field plasma device (MPD) with variable magnetic field. Rev Sci Instrum 2018; 89:043510. PMID: 29716311. DOI: 10.1063/1.5007142.
Abstract
A new multi-line cusp magnetic field plasma device, consisting of electromagnets with core material, has been constructed with the capability to experimentally control the relative volume fractions of magnetized and unmagnetized plasma, as well as accurate control over the gradient length scales of the mean density and temperature profiles. Argon plasma has been produced using a hot tungsten cathode over a wide range of pressures (5 × 10⁻⁵ to 1 × 10⁻³ mbar), achieving plasma densities ranging from 10⁹ to 10¹¹ cm⁻³ and electron temperatures in the range of 1-8 eV. The radial profiles of plasma parameters measured along the non-cusp region (between two consecutive magnets) show a finite region with uniform and quiescent plasma, where the magnetic field is so low that the ions are unmagnetized. Beyond that region, both plasma species are magnetized and the profiles show gradients in both temperature and density. The electrostatic fluctuations measured with a Langmuir probe radially along the non-cusp region remain below 1% (δI_isat/I_isat < 1%). The plasma thus produced will be used to study new and hitherto unexplored physics parameter spaces relevant to both laboratory multi-scale plasmas and astrophysical plasmas.
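The quoted quiescence figure is a normalized fluctuation level of the ion saturation current. A sketch of how such a number might be computed from a digitized Langmuir-probe trace (the detrending choice and any channel details are assumptions, not the paper's procedure):

```python
import numpy as np

def relative_fluctuation(i_sat, detrend_window=None):
    """Normalized RMS fluctuation of ion saturation current:
    delta_I / I = std(I - trend) / mean(I). Values below 0.01 (1%)
    indicate a quiescent plasma region."""
    i_sat = np.asarray(i_sat, dtype=float)
    if detrend_window:
        # Optional moving-average detrend so slow drifts in the mean
        # profile do not inflate the fluctuation estimate.
        kernel = np.ones(detrend_window) / detrend_window
        trend = np.convolve(i_sat, kernel, mode="same")
        fluct = i_sat - trend
    else:
        fluct = i_sat - i_sat.mean()
    return fluct.std() / i_sat.mean()
```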
Affiliation(s)
- A D Patel
- Institute for Plasma Research, HBNI, Bhat, Gandhinagar, Gujarat 382428, India
- M Sharma
- Institute for Plasma Research, HBNI, Bhat, Gandhinagar, Gujarat 382428, India
- N Ramasubramanian
- Institute for Plasma Research, HBNI, Bhat, Gandhinagar, Gujarat 382428, India
- R Ganesh
- Institute for Plasma Research, HBNI, Bhat, Gandhinagar, Gujarat 382428, India
- P K Chattopadhyay
- Institute for Plasma Research, HBNI, Bhat, Gandhinagar, Gujarat 382428, India
22
Anderson DE, Patel AD. Infants born preterm, stress, and neurodevelopment in the neonatal intensive care unit: might music have an impact? Dev Med Child Neurol 2018; 60:256-266. PMID: 29363098. DOI: 10.1111/dmcn.13663.
Abstract
AIM: The neonatal intensive care unit (NICU) provides life-saving medical care for an increasing number of newborn infants each year. NICU care, while lifesaving, does have attendant consequences, which can include repeated activation of the stress response and reduced maternal interaction, with possible negative long-term impacts on brain development. Here we present a neuroscientific framework for considering the impact of music on neurodevelopment in the NICU of infants born preterm, and evaluate current literature on the use of music with this population to determine what is most reliably known of the physiological effects of music interventions.
METHOD: Using online academic databases we collected relevant, experimental studies aimed at determining effects of music listening in infants in the NICU. These articles were evaluated for methodological rigor, ranking the 10 most experimentally stringent as a representative sample.
RESULTS: The selected literature seems to indicate that effects are present on the cardio-pulmonary system and behavior of neonates, although the relative effect size remains unclear.
INTERPRETATION: These findings indicate a need for more standardized longitudinal studies aimed at determining whether NICU music exposure has beneficial effects not only on the cardio-pulmonary system, but also on the hypothalamic-pituitary-adrenal axis, brain structures, and the cognitive and behavioral status of these children.
WHAT THIS PAPER ADDS: Provides a neuroscience framework for considering how music might attenuate stress in neonatal intensive care unit (NICU) infants. Considers how repeated stress may cause negative neurodevelopmental impacts in infants born preterm. Posits epigenetics as a mechanistic pathway by which music may moderate the stress response.
Affiliation(s)
- Dane E Anderson
- SDSU Brain Development Imaging Laboratory, San Diego, CA, USA
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, MA, USA; Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, Canada
23
Ding N, Patel AD, Chen L, Butler H, Luo C, Poeppel D. Temporal modulations in speech and music. Neurosci Biobehav Rev 2017; 81:181-187. PMID: 28212857. DOI: 10.1016/j.neubiorev.2017.02.011.
Abstract
Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 Hz and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and their neural processing.
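One common way to estimate a temporal modulation spectrum, in the spirit of this analysis: extract the sound's intensity envelope, then take the spectrum of the envelope in the 0.25-32 Hz range. The paper's actual procedure (narrowband filterbank, averaging across many recordings) is more involved; this sketch assumes a mono waveform and uses a simple broadband Hilbert envelope. In practice the envelope would usually be downsampled before spectral estimation.

```python
import numpy as np
from scipy.signal import hilbert, welch

def modulation_spectrum(waveform, fs, fmin=0.25, fmax=32.0):
    """Estimate the slow temporal modulation spectrum of a sound.

    waveform: mono audio samples; fs: sampling rate in Hz.
    Returns modulation frequencies (Hz) and power within [fmin, fmax].
    Speech is expected to peak near ~5 Hz and Western music near ~2 Hz."""
    envelope = np.abs(hilbert(waveform))      # broadband intensity envelope
    # Welch PSD of the envelope; 8-second segments give frequency
    # resolution fine enough to reach 0.25 Hz.
    nperseg = min(len(envelope), int(fs * 8))
    freqs, power = welch(envelope, fs=fs, nperseg=nperseg)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band], power[band]
```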
Affiliation(s)
- Nai Ding
- College of Biomedical Engineering and Instrument Sciences, Zhejiang University, China; Department of Psychology, New York University, New York, NY, United States; Interdisciplinary Center for Social Sciences, Zhejiang University, China; Neuro and Behavior EconLab, Zhejiang University of Finance and Economics, China.
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, MA, United States; Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
- Lin Chen
- Department of Psychology, New York University, New York, NY, United States; College of Biomedical Engineering and Instrument Sciences, Zhejiang University, China
- Henry Butler
- Department of Psychology, Tufts University, Medford, MA, United States
- Cheng Luo
- College of Biomedical Engineering and Instrument Sciences, Zhejiang University, China
- David Poeppel
- Department of Psychology, New York University, New York, NY, United States; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
24
Patel AD. Why Doesn't a Songbird (the European Starling) Use Pitch to Recognize Tone Sequences? The Informational Independence Hypothesis. CCBR 2017. DOI: 10.3819/ccbr.2017.120003.
|
25
|
Patel AD, Morgan E. Exploring Cognitive Relations Between Prediction in Language and Music. Cogn Sci 2016; 41 Suppl 2:303-320. [DOI: 10.1111/cogs.12411] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2015] [Revised: 06/05/2016] [Accepted: 06/14/2016] [Indexed: 02/04/2023]
Affiliation(s)
- Aniruddh D. Patel
- Department of Psychology; Tufts University
- Azrieli Program in Brain, Mind, & Consciousness; Canadian Institute for Advanced Research (CIFAR); Toronto
| | | |
Collapse
|
26
|
Clayton KK, Swaminathan J, Yazdanbakhsh A, Zuk J, Patel AD, Kidd G. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians. PLoS One 2016; 11:e0157638. [PMID: 27384330 PMCID: PMC4934907 DOI: 10.1371/journal.pone.0157638] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2015] [Accepted: 06/02/2016] [Indexed: 11/24/2022] Open
Abstract
The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking (MOT) task, in which observers were required to track target dots (n = 1-5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".
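For readers who want the shape of the final analysis step, here is a minimal sketch with invented data; it uses an ordinary multiple regression rather than the authors' stepwise procedure, and none of the numbers come from the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
musician = rng.integers(0, 2, n)        # 0 = non-musician, 1 = musician (hypothetical)
mot = rng.normal(0, 1, n)               # multiple object tracking score, z-scored
# Simulated spatial-hearing score with both predictors contributing:
spatial = 0.8 * musician + 0.5 * mot + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([musician, mot]))
fit = sm.OLS(spatial, X).fit()
print(fit.params)                       # intercept, musicianship and MOT coefficients
print(fit.pvalues)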
Collapse
Affiliation(s)
- Kameron K. Clayton
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States of America
| | - Jayaganesh Swaminathan
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States of America
| | - Arash Yazdanbakhsh
- Department for Psychological and Brain Sciences, Boston University, Boston, MA, United States of America
- Center for Computational Neuroscience and Neural Technology (CompNet), Boston University, Boston, MA, United States of America
| | - Jennifer Zuk
- Harvard Medical School, Harvard University, Boston, MA, United States of America
| | - Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, United States of America
| | - Gerald Kidd
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States of America
| |
Collapse
|
27
|
Abstract
BACKGROUND Radiotherapy as an adjuvant to mastectomy is integral to the treatment of breast cancer, but can result in skin ulceration. Skin ulceration following radiotherapy is traditionally managed by removing the implant and allowing the skin to heal by secondary intention. CASE REPORT A 42-year-old woman underwent radiotherapy following a breast reconstruction. She developed a 2 × 3 cm radiation ulcer. The ulcer was managed by removing the implant and performing a capsulectomy. A Becker 50 expander was placed and reinforced with acellular dermal matrix inferolaterally. At follow-up the patient had a good cosmetic outcome. CONCLUSION Post-radiation skin ulcers present a treatment challenge, with no current standardised management. The use of acellular dermal matrix may offer a new technique to promote healing in these testing cases.
Collapse
|
28
|
Fogel AR, Rosenberg JC, Lehman FM, Kuperberg GR, Patel AD. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method. Front Psychol 2015; 6:1718. [PMID: 26617548 PMCID: PMC4641899 DOI: 10.3389/fpsyg.2015.01718] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2015] [Accepted: 10/26/2015] [Indexed: 11/13/2022] Open
Abstract
Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each AC melody was paired with a 'non-cadential' (NC) melody matched in length, rhythm, and melodic contour but differing in implied harmonic structure. On average, participants showed much greater consistency in the notes sung following AC than NC melodies. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation and designing experiments that compare the cognitive mechanisms of prediction in music and language.
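By analogy with the linguistic measure, the cloze probability of a melodic continuation is simply the proportion of participants who produce it. A minimal sketch, with hypothetical sung responses rather than data from the study:

from collections import Counter

def cloze_probability(responses):
    # responses: produced continuations, e.g., sung pitches coded as MIDI note numbers.
    # Returns the modal continuation and the proportion of participants producing it.
    counts = Counter(responses)
    modal, k = counts.most_common(1)[0]
    return modal, k / len(responses)

# Hypothetical data for one melody: most singers continue with MIDI note 60.
print(cloze_probability([60, 60, 60, 62, 60, 64, 60, 60]))   # -> (60, 0.75)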
Collapse
Affiliation(s)
| | - Jason C Rosenberg
- Department of Arts and Humanities, Yale-NUS College Singapore, Singapore
| | | | - Gina R Kuperberg
- Department of Psychology, Tufts University, Medford, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA; Department of Psychiatry, Massachusetts General Hospital, Charlestown, MA, USA
| | | |
Collapse
|
29
|
Kunert R, Willems RM, Casasanto D, Patel AD, Hagoort P. Music and Language Syntax Interact in Broca's Area: An fMRI Study. PLoS One 2015; 10:e0141069. [PMID: 26536026 PMCID: PMC4633113 DOI: 10.1371/journal.pone.0141069] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2014] [Accepted: 09/17/2015] [Indexed: 12/31/2022] Open
Abstract
Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
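The paper's logic rests on a statistical interaction: the effect of language-syntax difficulty on a shared resource should depend on concurrent harmonic complexity. A toy sketch of such an interaction test, with invented region-of-interest numbers that stand in for the study's actual fMRI analysis:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 30
rows = []
for lang in (0, 1):          # 0 = easy, 1 = hard sentence structure (hypothetical coding)
    for harm in (0, 1):      # 0 = simple, 1 = complex harmony (hypothetical coding)
        bold = 0.2 * lang + 0.1 * harm + 0.3 * lang * harm + rng.normal(0, 0.5, n)
        rows += [(lang, harm, b) for b in bold]
df = pd.DataFrame(rows, columns=["lang", "harm", "bold"])

# The key test is the interaction term, mirroring the paper's logic:
fit = smf.ols("bold ~ lang * harm", data=df).fit()
print(fit.params["lang:harm"], fit.pvalues["lang:harm"])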
Collapse
Affiliation(s)
- Richard Kunert
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
| | - Roel M. Willems
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
| | - Daniel Casasanto
- Psychology Department, University of Chicago, Chicago, Illinois, United States of America
| | | | - Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
| |
Collapse
|
30
|
Swaminathan J, Mason CR, Streeter TM, Best V, Kidd G, Patel AD. Musical training, individual differences and the cocktail party problem. Sci Rep 2015; 5:11628. [PMID: 26112910 PMCID: PMC4481518 DOI: 10.1038/srep11628] [Citation(s) in RCA: 88] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2014] [Accepted: 06/02/2015] [Indexed: 11/09/2022] Open
Abstract
Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of 'informational masking' (IM) while keeping the amount of 'energetic masking' (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker "cocktail party" environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced "speech-in-noise" perception by musicians.
Collapse
Affiliation(s)
| | - Christine R Mason
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
| | - Timothy M Streeter
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
| | - Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
| | - Gerald Kidd
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
| | | |
Collapse
|
31
|
Liu F, Jiang C, Wang B, Xu Y, Patel AD. A music perception disorder (congenital amusia) influences speech comprehension. Neuropsychologia 2015; 66:111-8. [DOI: 10.1016/j.neuropsychologia.2014.11.001] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2014] [Revised: 10/27/2014] [Accepted: 11/04/2014] [Indexed: 10/24/2022]
|
32
|
Iversen JR, Patel AD, Nicodemus B, Emmorey K. Synchronization to auditory and visual rhythms in hearing and deaf individuals. Cognition 2014; 134:232-44. [PMID: 25460395 DOI: 10.1016/j.cognition.2014.10.018] [Citation(s) in RCA: 95] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2012] [Revised: 09/22/2014] [Accepted: 10/31/2014] [Indexed: 11/17/2022]
Abstract
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported.
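A standard way to quantify synchronization precision across stimulus types (a generic circular-statistics measure, not necessarily the authors' exact analysis) treats each tap as a phase within the metronome cycle and computes the resultant vector length:

import numpy as np

def synchronization_precision(tap_times, period):
    # tap_times: tap onsets in seconds; period: metronome inter-onset interval.
    # Returns the resultant vector length R in [0, 1]; R near 1 means taps fall
    # at a highly consistent phase of the cycle.
    phases = 2 * np.pi * (np.asarray(tap_times) % period) / period
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical taps near a 0.5 s (120 BPM) beat, with small timing jitter:
rng = np.random.default_rng(1)
taps = np.arange(20) * 0.5 + rng.normal(0, 0.02, 20)
print(synchronization_precision(taps, 0.5))   # close to 1 for tight synchronization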
Collapse
Affiliation(s)
- John R Iversen
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, 9500 Gilman Drive # 0559, La Jolla, CA 92093, USA.
| | - Aniruddh D Patel
- Department of Psychology, Tufts University, 490 Boston Ave., Medford, MA 02155, USA
| | - Brenda Nicodemus
- Department of Interpretation, Gallaudet University, 800 Florida Avenue, NE, Washington, DC 20002, USA
| | - Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Road, Suite 200, San Diego, CA 92120, USA
| |
Collapse
|
34
|
Patel AD, Iversen JR. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis. Front Syst Neurosci 2014; 8:57. [PMID: 24860439 PMCID: PMC4026735 DOI: 10.3389/fnsys.2014.00057] [Citation(s) in RCA: 207] [Impact Index Per Article: 20.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2013] [Accepted: 03/25/2014] [Indexed: 11/17/2022] Open
Abstract
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This "action simulation for auditory prediction" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
Collapse
Affiliation(s)
| | - John R. Iversen
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
| |
Collapse
|
35
|
Abstract
Are other species able to process basic musical rhythm in the same way that humans do? In The Descent of Man, Darwin speculated that our capacity for musical rhythm reflects basic aspects of brain function broadly shared among animals. Although this remains an appealing and intuitive idea, it is being challenged by modern cross-species research. This research hints that our capacity to synchronize to a beat, i.e., to move in time with a perceived pulse in a manner that is predictive and flexible across a broad range of tempi, may be shared by only a few other species. Is this really the case? If so, it would have important implications for our understanding of the evolution of human musicality.
Collapse
Affiliation(s)
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
| |
Collapse
|
36
|
Patel AD. Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis. Hear Res 2013; 308:98-108. [PMID: 24055761 DOI: 10.1016/j.heares.2013.08.011] [Citation(s) in RCA: 154] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/19/2013] [Revised: 08/18/2013] [Accepted: 08/26/2013] [Indexed: 10/26/2022]
Abstract
A growing body of research suggests that musical training has a beneficial impact on speech processing (e.g., hearing of speech in noise and prosody perception). As this research moves forward, two key questions need to be addressed: 1) Can purely instrumental musical training have such effects? 2) If so, how and why would such effects occur? The current paper offers a conceptual framework for understanding such effects based on mechanisms of neural plasticity. The expanded OPERA hypothesis proposes that when music and speech share sensory or cognitive processing mechanisms in the brain, and music places higher demands on these mechanisms than speech does, this sets the stage for musical training to enhance speech processing. When these higher demands are combined with the emotional rewards of music, the frequent repetition that musical training engenders, and the focused attention that it requires, neural plasticity is activated and makes lasting changes in brain structure and function which impact speech processing. Initial data from a new study motivated by the OPERA hypothesis are presented, focusing on the impact of musical training on speech perception in cochlear-implant users. Suggestions for the development of animal models to test OPERA are also presented, to help motivate neurophysiological studies of how auditory training using non-biological sounds can impact the brain's perceptual processing of species-specific vocalizations. This article is part of a Special Issue entitled "Music: A window into the hearing brain".
Collapse
Affiliation(s)
- Aniruddh D Patel
- Dept. of Psychology, Tufts University, 490 Boston Ave., Medford, MA 02155, USA.
| |
Collapse
|
37
|
Liu F, Xu Y, Patel AD, Francart T, Jiang C. Differential recognition of pitch patterns in discrete and gliding stimuli in congenital amusia: evidence from Mandarin speakers. Brain Cogn 2012; 79:209-15. [PMID: 22546729 DOI: 10.1016/j.bandc.2012.03.008] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2011] [Revised: 03/21/2012] [Accepted: 03/28/2012] [Indexed: 10/28/2022]
Abstract
This study examined whether "melodic contour deafness" (insensitivity to the direction of pitch movement) in congenital amusia is associated with specific types of pitch patterns (discrete versus gliding pitches) or stimulus types (speech syllables versus complex tones). Thresholds for identification of pitch direction were obtained using discrete or gliding pitches in the syllable /ma/ or its complex tone analog, from nineteen amusics and nineteen controls, all healthy university students with Mandarin Chinese as their native language. Amusics, unlike controls, had more difficulty recognizing pitch direction in discrete than in gliding pitches, for both speech and non-speech stimuli. Also, amusic thresholds were not significantly affected by stimulus types (speech versus non-speech), whereas controls showed lower thresholds for tones than for speech. These findings help explain why amusics have greater difficulty with discrete musical pitch perception than with speech perception, in which continuously changing pitch movements are prevalent.
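Thresholds like these are typically obtained with an adaptive staircase. The sketch below is a generic two-down one-up procedure, which converges near the 70.7%-correct point, run on a hypothetical simulated listener; it is not the study's exact protocol.

import random

def two_down_one_up(respond, start=6.0, step=0.5, floor=0.1, n_reversals=8):
    # respond(delta) -> True if the listener correctly reports pitch direction
    # at a pitch change of size `delta` (e.g., in semitones). Two correct in a
    # row make the task harder; one error makes it easier. The threshold is the
    # mean of the last six reversal values.
    delta, streak, direction, reversals = start, 0, -1, []
    while len(reversals) < n_reversals:
        if respond(delta):
            streak += 1
            if streak == 2:
                streak = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta = max(floor, delta - step)
        else:
            streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals[-6:]) / 6

# Hypothetical listener whose accuracy rises smoothly with the size of the change:
random.seed(0)
listener = lambda d: random.random() < 0.5 + 0.5 * d / (d + 1.5)
print(two_down_one_up(listener))   # rough threshold estimate, in the units of delta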
Collapse
Affiliation(s)
- Fang Liu
- Center for the Study of Language and Information, Stanford University, Stanford, CA 94305-4101, USA
| | | | | | | | | |
Collapse
|
39
|
Shang SL, Wang WY, Wang Y, Du Y, Zhang JX, Patel AD, Liu ZK. Temperature-dependent ideal strength and stacking fault energy of fcc Ni: a first-principles study of shear deformation. J Phys Condens Matter 2012; 24:155402. [PMID: 22436671 DOI: 10.1088/0953-8984/24/15/155402] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Variations of energy, stress, and magnetic moment of fcc Ni as a response to shear deformation, and the associated ideal shear strength (τ_IS) and intrinsic (γ_SF) and unstable (γ_US) stacking fault energies, have been studied by means of first-principles calculations under both the alias and affine shear regimes within the {111} slip plane along the <112> and <110> directions. It is found that (i) the intrinsic stacking fault energy γ_SF is nearly independent of the shear deformation regime used, albeit a slightly smaller value is predicted by pure shear (with relaxation) than by simple shear (without relaxation); (ii) the minimum ideal shear strength τ_IS is obtained by pure alias shear of {111}<112>; and (iii) the dissociation of the 1/2[110] dislocation into two Shockley partial dislocations (1/6[211] + 1/6[121]) is observed under pure alias shear of {111}<110>. Based on the quasiharmonic approach from first-principles phonon calculations, the predicted γ_SF has been extended to finite temperatures. In particular, using a proposed quasistatic approach based on the predicted volume-versus-temperature relation, the temperature dependence of τ_IS is also obtained. Both γ_SF and τ_IS of fcc Ni decrease with increasing temperature. The computed ideal shear strengths and the intrinsic and unstable stacking fault energies are in favorable accord with experiments and other predictions in the literature.
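On our reading of the abstract, the "quasistatic approach" evaluates a 0 K quantity at the temperature-dependent equilibrium volume obtained from the quasiharmonic free energy. In our own notation (an interpretation, not the paper's equations):

% Hedged reading of the quasistatic approximation (notation ours, not the paper's):
% a static (0 K) property is evaluated at the equilibrium volume V_eq(T).
\[
  V_{\mathrm{eq}}(T) = \arg\min_{V} \, F(V,T), \qquad
  F(V,T) = E_{0}(V) + F_{\mathrm{vib}}(V,T),
\]
\[
  \tau_{\mathrm{IS}}(T) \approx \tau_{\mathrm{IS}}\bigl(V_{\mathrm{eq}}(T)\bigr), \qquad
  \gamma_{\mathrm{SF}}(T) \approx \gamma_{\mathrm{SF}}\bigl(V_{\mathrm{eq}}(T)\bigr).
\]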
Collapse
Affiliation(s)
- S L Shang
- Department of Materials Science and Engineering, The Pennsylvania State University, University Park, PA 16802, USA.
| | | | | | | | | | | | | |
Collapse
|
40
|
Bregman MR, Patel AD, Gentner TQ. Stimulus-dependent flexibility in non-human auditory pitch processing. Cognition 2012; 122:51-60. [PMID: 21911217 PMCID: PMC3215778 DOI: 10.1016/j.cognition.2011.08.008] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2011] [Revised: 06/09/2011] [Accepted: 08/08/2011] [Indexed: 11/20/2022]
Abstract
Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are comparatively much poorer at recognizing pitch-shifted tone sequences. This apparent disparity may reflect fundamental differences in the neural mechanisms underlying the representation of sound in songbirds. Alternatively, because non-human studies have used sine-tone stimuli almost exclusively, tolerance to pitch height changes in the context of natural signals may be underestimated. Here, we show that European starlings, a species of songbird, can maintain accurate recognition of the songs of other starlings when the pitch of those songs is shifted by as much as ±40%. We observed accurate recognition even for songs pitch-shifted well outside the range of frequencies used during training, and even though much smaller pitch shifts in conspecific songs are easily detected. With similar training using human piano melodies, recognition of the pitch-shifted melodies is very limited. These results demonstrate that non-human pitch processing is more flexible than previously thought and that the flexibility in pitch processing strategy is stimulus dependent.
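For scale, a proportional frequency shift is asymmetric on the logarithmic musical pitch scale; converting the paper's ±40% shifts to semitones takes one line:

import math

def percent_to_semitones(percent):
    # A proportional change in fundamental frequency (e.g., +40 -> 40% upward),
    # expressed on the logarithmic musical scale (12 semitones per octave).
    return 12 * math.log2(1 + percent / 100)

print(round(percent_to_semitones(40), 2))    # +5.83 semitones
print(round(percent_to_semitones(-40), 2))   # -8.84 semitones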
Collapse
Affiliation(s)
- Micah R Bregman
- Department of Cognitive Science, UC San Diego, La Jolla, CA, United States
| | | | | |
Collapse
|
41
|
Abstract
Mounting evidence suggests that musical training benefits the neural encoding of speech. This paper offers a hypothesis specifying why such benefits occur. The "OPERA" hypothesis proposes that such benefits are driven by adaptive plasticity in speech-processing networks, and that this plasticity occurs when five conditions are met. These are: (1) Overlap: there is anatomical overlap in the brain networks that process an acoustic feature used in both music and speech (e.g., waveform periodicity, amplitude envelope), (2) Precision: music places higher demands on these shared networks than does speech, in terms of the precision of processing, (3) Emotion: the musical activities that engage this network elicit strong positive emotion, (4) Repetition: the musical activities that engage this network are frequently repeated, and (5) Attention: the musical activities that engage this network are associated with focused attention. According to the OPERA hypothesis, when these conditions are met neural plasticity drives the networks in question to function with higher precision than needed for ordinary speech communication. Yet since speech shares these networks with music, speech processing benefits. The OPERA hypothesis is used to account for the observed superior subcortical encoding of speech in musically trained individuals, and to suggest mechanisms by which musical training might improve linguistic reading abilities.
Collapse
Affiliation(s)
- Aniruddh D. Patel
- Department of Theoretical Neurobiology, The Neurosciences Institute, San Diego, CA, USA
| |
Collapse
|
42
|
Slevc LR, Patel AD. Meaning in music and language: Three key differences: Comment on "Towards a neural basis of processing musical semantics" by Stefan Koelsch. Phys Life Rev 2011; 8:110-1; discussion 125-8. [PMID: 21570367 DOI: 10.1016/j.plrev.2011.05.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2011] [Accepted: 05/06/2011] [Indexed: 11/17/2022]
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, MD 20742, USA.
| | | |
Collapse
|
43
|
Liu F, Patel AD, Fourcin A, Stewart L. Intonation processing in congenital amusia: discrimination, identification and imitation. Brain 2010; 133:1682-93. [PMID: 20418275 DOI: 10.1093/brain/awq089] [Citation(s) in RCA: 141] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
This study investigated whether congenital amusia, a neuro-developmental disorder of musical perception, also has implications for speech intonation processing. In total, 16 British amusics and 16 matched controls completed five intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on discrimination, identification and imitation of statements and questions that were characterized primarily by pitch direction differences in the final word. This intonation-processing deficit in amusia was largely associated with a psychophysical pitch direction discrimination deficit. These findings suggest that amusia impacts upon one's language abilities in subtle ways, and support previous evidence that pitch processing in language and music involves shared mechanisms.
Collapse
Affiliation(s)
- Fang Liu
- Department of Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK.
| | | | | | | |
Collapse
|
44
|
Yoshida KA, Iversen JR, Patel AD, Mazuka R, Nito H, Gervain J, Werker JF. The development of perceptual grouping biases in infancy: a Japanese-English cross-linguistic study. Cognition 2010; 115:356-61. [PMID: 20144456 DOI: 10.1016/j.cognition.2010.01.005] [Citation(s) in RCA: 91] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2009] [Revised: 01/14/2010] [Accepted: 01/16/2010] [Indexed: 11/29/2022]
Abstract
Perceptual grouping has traditionally been thought to be governed by innate, universal principles. However, recent work has found differences in Japanese and English speakers' non-linguistic perceptual grouping, implicating language in non-linguistic perceptual processes (Iversen, Patel, & Ohgushi, 2008). Two experiments test Japanese- and English-learning infants of 5-6 and 7-8 months of age to explore the development of grouping preferences. At 5-6 months, neither the Japanese nor the English infants revealed any systematic perceptual biases. However, by 7-8 months, the same age as when linguistic phrasal grouping develops, infants developed non-linguistic grouping preferences consistent with their language's structure (and the grouping biases found in adulthood). These results reveal an early difference in non-linguistic perception between infants growing up in different language environments. The possibility that infants' linguistic phrasal grouping is bootstrapped by abstract perceptual principles is discussed.
Collapse
Affiliation(s)
- Katherine A Yoshida
- New York University, Department of Psychology, 6 Washington Place, New York NY 10003 USA.
| | | | | | | | | | | | | |
Collapse
|
45
|
Abstract
The recent discovery of spontaneous synchronization to music in a nonhuman animal (the sulphur-crested cockatoo Cacatua galerita eleonora) raises several questions. How does this behavior differ from nonmusical synchronization abilities in other species, such as synchronized frog calls or firefly flashes? What significance does the behavior have for debates over the evolution of human music? What kinds of animals can synchronize to musical rhythms, and what are the key methodological issues for research in this area? This paper addresses these questions and proposes some refinements to the "vocal learning and rhythmic synchronization hypothesis."
Collapse
Affiliation(s)
- Aniruddh D Patel
- The Neurosciences Institute, 10640 John Jay Hopkins Drive, San Diego, CA 92121, USA.
| | | | | | | |
Collapse
|
48
|
Patel AD, Iversen JR, Bregman MR, Schulz I. Experimental evidence for synchronization to a musical beat in a nonhuman animal. Curr Biol 2009; 19:827-30. [PMID: 19409790 DOI: 10.1016/j.cub.2009.03.038] [Citation(s) in RCA: 228] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2009] [Revised: 02/25/2009] [Accepted: 03/12/2009] [Indexed: 10/20/2022]
Abstract
The tendency to move in rhythmic synchrony with a musical beat (e.g., via head bobbing, foot tapping, or dance) is a human universal [1] yet is not commonly observed in other species [2]. Does this ability reflect a brain specialization for music cognition, or does it build on neural circuitry that ordinarily serves other functions? According to the "vocal learning and rhythmic synchronization" hypothesis [3], entrainment to a musical beat relies on the neural circuitry for complex vocal learning, an ability that requires a tight link between auditory and motor circuits in the brain [4, 5]. This hypothesis predicts that only vocal learning species (such as humans and some birds, cetaceans, and pinnipeds, but not nonhuman primates) are capable of synchronizing movements to a musical beat. Here we report experimental evidence for synchronization to a beat in a sulphur-crested cockatoo (Cacatua galerita eleonora). By manipulating the tempo of a musical excerpt across a wide range, we show that the animal spontaneously adjusts the tempo of its rhythmic movements to stay synchronized with the beat. These findings indicate that synchronization to a musical beat is not uniquely human and suggest that animal models can provide insights into the neurobiology and evolution of human music [6].
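The tempo-manipulation logic can be made concrete: regress movement tempo on musical tempo across the manipulated versions, and a slope near 1 indicates genuine tempo tracking rather than a fixed preferred tempo. A sketch with invented numbers, not data from the study:

import numpy as np

# Hypothetical data: tempo (BPM) of each manipulated version of the excerpt,
# and the animal's mean head-bob tempo during synchronized bouts.
music_bpm = np.array([98.0, 103.0, 108.5, 113.0, 118.5, 123.0, 128.5])
bob_bpm = np.array([99.0, 102.5, 109.0, 112.5, 119.0, 122.5, 127.5])

slope, intercept = np.polyfit(music_bpm, bob_bpm, 1)
print(f"slope = {slope:.2f}")   # a slope near 1.0 indicates tempo tracking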
Collapse
Affiliation(s)
- Aniruddh D Patel
- The Neurosciences Institute, 10640 John Jay Hopkins Drive, San Diego, CA 92121, USA.
| | | | | | | |
Collapse
|
49
|
Abstract
To the best of our knowledge, this is the first clinical report of skin puckering associated with a fractured neck of the humerus. The significance of this sign may vary with location, and its presence should be considered alongside other physical and radiological signs to aid decision-making.
Collapse
Affiliation(s)
- S J Alshryda
- Department of Trauma and Orthopaedics, Royal Hospital of Sunderland, Sunderland, UK.
| | | | | |
Collapse
|
50
|
Abstract
Many aspects of perception are known to be shaped by experience, but others are thought to be innate universal properties of the brain. A specific example comes from rhythm perception, where one of the fundamental perceptual operations is the grouping of successive events into higher-level patterns, an operation critical to the perception of language and music. Grouping has long been thought to be governed by innate perceptual principles established a century ago. The current work demonstrates instead that grouping can be strongly dependent on culture. Native English and Japanese speakers were tested for their perception of grouping of simple rhythmic sequences of tones. Members of the two cultures showed different patterns of perceptual grouping, demonstrating that these basic auditory processes are not universal but are shaped by experience. It is suggested that the observed perceptual differences reflect the rhythms of the two languages, and that native language can exert an influence on general auditory perception at a basic level.
Collapse
Affiliation(s)
- John R Iversen
- The Neurosciences Institute, 10640 John Jay Hopkins Drive, San Diego, California 92121, USA.
| | | | | |
Collapse
|