1. Bürgel M, Mares D, Siedenburg K. Enhanced salience of edge frequencies in auditory pattern recognition. Atten Percept Psychophys 2024;86:2811-2820. PMID: 39461935. DOI: 10.3758/s13414-024-02971-x.
Abstract
Within musical scenes or textures, sounds from certain instruments capture attention more prominently than others, hinting at biases in the perception of multisource mixtures. Besides musical factors, these effects might be related to frequency biases in auditory perception. Using an auditory pattern-recognition task, we studied the existence of such frequency biases. Mixtures of pure tone melodies were presented in six frequency bands. Listeners were instructed to assess whether the target melody was part of the mixture or not, with the target melody presented either before or after the mixture. In Experiment 1, the mixture always contained melodies in five out of the six bands. In Experiment 2, the mixture contained three bands that stemmed from the lower or the higher part of the range. As expected, Experiments 1 and 2 both highlighted strong effects of presentation order, with higher accuracies for the target presented before the mixture. Notably, Experiment 1 showed that edge frequencies yielded superior accuracies compared with center frequencies. Experiment 2 corroborated this finding by yielding enhanced accuracies for edge frequencies irrespective of the absolute frequency region. Our results highlight the salience of sound elements located at spectral edges within complex musical scenes. Overall, this implies that neither the high voice superiority effect nor the insensitivity to bass instruments observed by previous research can be explained by absolute frequency biases in auditory perception.
Affiliation(s)
- Michel Bürgel
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
- Diana Mares
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
- Kai Siedenburg
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
- Signal Processing and Speech Communication Laboratory, Graz University of Technology, 8010 Graz, Austria

2. Liu J, Jiang H. Exploring the Effects of Online Physician Voice Pitch Range and Filled Pauses on Patient Satisfaction in Mobile Health Communication. Health Commun 2024;39:3258-3271. PMID: 38314782. DOI: 10.1080/10410236.2024.2313791.
Abstract
The convenience of mobile devices has driven the widespread use of voice technology in mobile health communication, significantly improving the timeliness of online services. However, listening to therapeutic content demands considerable cognitive effort and may exceed a patient's information processing capacity (i.e., cause information overload). Based on information processing theory, this study reports how online physicians' voice characteristics (pitch range and filled pauses) affect patient satisfaction. We obtained 10,585 mobile voice consultation records of 1,416 doctors from China's largest mHealth platform and analyzed them using audio mining and empirical methods. Results showed that pitch range (β = 0.0539, p < .01) and filled pauses (β = 0.0365, p < .01) in doctors' voices positively influenced online patient satisfaction. However, the effect of filled pauses was weaker for patients with higher health literacy and higher disease risk, suggesting heterogeneity in how different patients process audio information. This study provides important insights for guiding online physician behavior, enhancing patient satisfaction, and improving mobile health platform management.

3. Wienke AS, Mathes B. Socioeconomic Inequalities Affect Brain Responses of Infants Growing Up in Germany. Brain Sci 2024;14:560. PMID: 38928558. PMCID: PMC11201481. DOI: 10.3390/brainsci14060560.
Abstract
Developmental changes in functional neural networks are sensitive to environmental influences. This EEG study investigated how infant brain responses relate to the social context in which their families live. Event-related potentials of 255 healthy, awake infants between six and fourteen months of age were measured during a passive auditory oddball paradigm. Infants were presented with 200 standard tones and 48 randomly distributed deviants. All infants were part of a longitudinal study focusing on families with socioeconomic and/or cultural challenges (Bremen Initiative to Foster Early Childhood Development; BRISE; Germany). As part of their familial socioeconomic status (SES), parental level of education and the infant's migration background were assessed with questionnaires. For 30.6% of the infants, both parents had a low level of education (≤10 years of schooling), and for 43.1% of the infants, at least one parent was born abroad. The N2-P3a complex is associated with unintentional directing of attention to deviant stimuli and was analysed in frontocentral brain regions. Age was utilised as a control variable. Our results show that tone deviations in infants trigger an immature N2-P3a complex. Contrary to studies with older children or adults, the N2 amplitude was more positive for deviants than for standards. This may be related to an immature superposition of the N2 with the P3a. For infants whose parents had no high-school degree and were born abroad, this tendency was increased, indicating that facing multiple challenges as a young family impacts the infant's early neural development. Attending to unexpected stimulus changes may be important for early learning processes. Variations of the infant N2-P3a complex may, thus, relate to early changes in attentional capacity and learning experiences due to familial challenges. This points towards the importance of early prevention programs.
Affiliation(s)
- Birgit Mathes
- Bremen Initiative to Foster Early Childhood Development (BRISE), Faculty for Human and Health Sciences, University of Bremen, 28359 Bremen, Germany

4. Calcus A. Development of auditory scene analysis: a mini-review. Front Hum Neurosci 2024;18:1352247. PMID: 38532788. PMCID: PMC10963424. DOI: 10.3389/fnhum.2024.1352247.
Abstract
Most auditory environments contain multiple sound waves that are mixed before reaching the ears. In such situations, listeners must disentangle individual sounds from the mixture, a process known as auditory scene analysis. Analyzing complex auditory scenes relies on listeners' ability to segregate acoustic events into different streams and to selectively attend to the stream of interest. Both segregation and selective attention are known to be challenging for adults with normal hearing, and seem to be even more difficult for children. Here, we review the recent literature on the development of auditory scene analysis, presenting behavioral and neurophysiological results. In short, the cognitive and neural mechanisms supporting stream segregation are functional from birth but keep developing until adolescence. Similarly, from 6 months of age, infants can orient their attention toward a target in the presence of distractors. However, selective auditory attention in the presence of interfering streams only reaches maturity in late childhood at the earliest. Methodological limitations are discussed, and a new paradigm is proposed to clarify the relationship between auditory scene analysis and speech perception in noise throughout development.
Affiliation(s)
- Axelle Calcus
- Center for Research in Cognitive Neuroscience (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles, Brussels, Belgium

5. Themas L, Lippus P, Padrik M, Kask L, Kreegipuu K. Maturation of the mismatch response in pre-school children: Systematic literature review and meta-analysis. Neurosci Biobehav Rev 2023;153:105366. PMID: 37633625. DOI: 10.1016/j.neubiorev.2023.105366.
Abstract
The mismatch response (MMR), an event-related potential (ERP) component, holds promise for investigating auditory maturation in children: it has the potential to predict language development and to distinguish between language-impaired and typically developing groups. However, despite numerous studies, summarizing the MMR's developmental trajectory in typically developing children remains challenging. This pioneering meta-analysis outlines changes in MMR amplitude among typically developing children, while offering methodological best practices. Our search identified 51 articles for methodology analysis and 21 for meta-analysis, published between 2000 and 2022 and involving 0-8-year-old participants. Risk-of-bias assessment and methodology analysis revealed shortcomings in the use of control conditions and in the reporting of study confounders. The meta-analysis results were inconsistent, indicating large effect sizes in some conditions and null effects in others. Subgroup analysis revealed main effects of age and brain region, as well as an interaction between age and the time-window of the MMR. Future research requires a specific protocol, larger samples, and replication studies to deepen the understanding of the maturation of auditory discrimination in children.
Affiliation(s)
- Liis Themas
- University of Tartu, Institute of Estonian and General Linguistics, Jakobi 2, 51005 Tartu, Estonia; University of Tartu, Institute of Psychology, Näituse 2, 50409 Tartu, Estonia
- Pärtel Lippus
- University of Tartu, Institute of Estonian and General Linguistics, Jakobi 2, 51005 Tartu, Estonia
- Marika Padrik
- University of Tartu, Institute of Education, Jakobi 5, 51005 Tartu, Estonia
- Liis Kask
- University of Tartu, Institute of Psychology, Näituse 2, 50409 Tartu, Estonia
- Kairi Kreegipuu
- University of Tartu, Institute of Psychology, Näituse 2, 50409 Tartu, Estonia

6. Nguyen T, Flaten E, Trainor LJ, Novembre G. Early social communication through music: State of the art and future perspectives. Dev Cogn Neurosci 2023;63:101279. PMID: 37515832. PMCID: PMC10407289. DOI: 10.1016/j.dcn.2023.101279.
Abstract
A growing body of research shows that the universal capacity for music perception and production emerges early in development. Possibly building on this predisposition, caregivers around the world often communicate with infants using songs or speech entailing song-like characteristics. This suggests that music might be one of the earliest developing and most accessible forms of interpersonal communication, providing a platform for studying early communicative behavior. However, little research has examined music in truly communicative contexts. The current work aims to facilitate the development of experimental approaches that rely on dynamic and naturalistic social interactions. We first review two longstanding lines of research that examine musical interactions by focusing either on the caregiver or the infant. These include defining the acoustic and non-acoustic features that characterize infant-directed (ID) music, as well as behavioral and neurophysiological research examining infants' processing of musical timing and pitch. Next, we review recent studies looking at early musical interactions holistically. This research focuses on how caregivers and infants use music in interaction to achieve co-regulation and mutual engagement, and to increase affiliation and prosocial behavior. We conclude by discussing methodological, technological, and analytical advances that might empower a comprehensive study of musical communication in early childhood.
Affiliation(s)
- Trinh Nguyen
- Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
- Erica Flaten
- Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, Canada
- Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy

7. Lenc T, Peter V, Hooper C, Keller PE, Burnham D, Nozaradan S. Infants show enhanced neural responses to musical meter frequencies beyond low-level features. Dev Sci 2023;26:e13353. PMID: 36415027. DOI: 10.1111/desc.13353.
Abstract
Music listening often entails spontaneous perception of, and body movement to, a periodic pulse-like meter. There is increasing evidence that this cross-cultural ability relates to neural processes that selectively enhance metric periodicities, even when these periodicities are not prominent in the acoustic stimulus. However, whether these neural processes emerge early in development remains largely unknown. Here, we recorded the electroencephalogram (EEG) of 20 healthy 5- to 6-month-old infants while they were exposed to two rhythms known to induce the perception of meter consistently across Western adults. One rhythm contained prominent acoustic periodicities corresponding to the meter, whereas the other rhythm did not. Infants showed significantly enhanced representations of meter periodicities in their EEG responses to both rhythms. This effect is unlikely to reflect the tracking of salient acoustic features in the stimulus, as it was observed irrespective of the prominence of meter periodicities in the audio signals. Moreover, as previously observed in adults, the neural enhancement of meter was greater when the rhythm was delivered by low-pitched sounds. Together, these findings indicate that the endogenous enhancement of metric periodicities beyond low-level acoustic features is a neural property that is already present soon after birth. These high-level neural processes could set the stage for internal representations of musical meter that are critical for human movement coordination during rhythmic musical behavior. Research highlights: 5- to 6-month-old infants were presented with auditory rhythms that induce the perception of a periodic pulse-like meter in adults. Infants showed selective enhancement of EEG activity at meter-related frequencies irrespective of the prominence of these frequencies in the stimulus. Responses at meter-related frequencies were boosted when the rhythm was conveyed by bass sounds. High-level neural processes that transform rhythmic auditory stimuli into internal meter templates emerge early after birth.
Affiliation(s)
- Tomas Lenc
- Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Brussels, Belgium
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Varghese Peter
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Health and Behavioural Sciences, University of the Sunshine Coast, Queensland, Australia
- Caitlin Hooper
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Peter E Keller
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Center for Music in the Brain & Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Denis Burnham
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Brussels, Belgium
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada

8. Carter JA, Buder EH, Bidelman GM. Nonlinear dynamics in auditory cortical activity reveal the neural basis of perceptual warping in speech categorization. JASA Express Lett 2022;2:045201. PMID: 35434716. PMCID: PMC8984957. DOI: 10.1121/10.0009896.
Abstract
Surrounding context influences speech listening, resulting in dynamic shifts of category percepts. To examine the neural basis of this effect, event-related potentials (ERPs) were recorded during vowel identification, with continua presented in random, forward, and backward orders to induce perceptual warping. Behaviorally, sequential (forward or backward) presentation shifted individual listeners' categorical boundary relative to random delivery, revealing perceptual warping (biasing) of the heard phonetic category that depended on recent stimulus history. ERPs revealed later (∼300 ms) activity localized to the superior temporal and middle/inferior frontal gyri that predicted listeners' hysteresis/enhanced-contrast magnitudes. These findings demonstrate that interactions between frontotemporal brain regions govern top-down effects of stimulus history on speech categorization.
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, Tennessee 38152, USA
- Eugene H Buder
- School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408, USA

9. Bürgel M, Picinali L, Siedenburg K. Listening in the Mix: Lead Vocals Robustly Attract Auditory Attention in Popular Music. Front Psychol 2021;12:769663. PMID: 35024038. PMCID: PMC8744650. DOI: 10.3389/fpsyg.2021.769663.
Abstract
Listeners can attend to and track instruments or singing voices in complex musical mixtures, even though the acoustical energy of sounds from individual instruments may overlap in time and frequency. In popular music, lead vocals are often accompanied by sound mixtures from a variety of instruments, such as drums, bass, keyboards, and guitars. However, little is known about how the perceptual organization of such musical scenes is affected by selective attention, and which acoustic features play the most important role. To investigate these questions, we explored the role of auditory attention in a realistic musical scenario. We conducted three online experiments in which participants detected single cued instruments or voices in multi-track musical mixtures. Stimuli consisted of 2-s multi-track excerpts of popular music. In one condition, the target cue preceded the mixture, allowing listeners to selectively attend to the target. In another condition, the target was presented after the mixture, requiring a more “global” mode of listening. Performance differences between these two conditions were interpreted as effects of selective attention. In Experiment 1, results showed that detection performance was generally dependent on the target's instrument category, but listeners were more accurate when the target was presented before the mixture rather than after it. Lead vocals appeared to be nearly unaffected by this change in presentation order and achieved the highest accuracy compared with the other instruments, suggesting a particular salience of vocal signals in musical mixtures. In Experiment 2, filtering was used to avoid potential spectral masking of target sounds. Although detection accuracy increased for all instruments, a similar pattern of results was observed regarding the instrument-specific differences between presentation orders. In Experiment 3, adjusting the sound level differences between the targets reduced the effect of presentation order, but did not affect the differences between instruments. While both acoustic manipulations facilitated the detection of targets, vocal signals remained particularly salient, suggesting that the manipulated features did not drive vocal salience. These findings demonstrate that lead vocals serve as robust attractors of auditory attention regardless of the manipulation of low-level acoustic cues.
Affiliation(s)
- Michel Bürgel
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Lorenzo Picinali
- Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Kai Siedenburg
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany

10. Bower J, Magee WL, Catroppa C, Baker FA. The Neurophysiological Processing of Music in Children: A Systematic Review With Narrative Synthesis and Considerations for Clinical Practice in Music Therapy. Front Psychol 2021;12:615209. PMID: 33935868. PMCID: PMC8081903. DOI: 10.3389/fpsyg.2021.615209.
Abstract
Introduction: Evidence supporting the use of music interventions to maximize arousal and awareness in adults presenting with a disorder of consciousness continues to grow. However, the brain of a child is not simply a small adult brain, and therefore adult theories are not directly translatable to the pediatric population. The present study aims to synthesize brain imaging data about the neural processing of music in children aged 0-18 years, to form a theoretical basis for music interventions with children presenting with a disorder of consciousness following acquired brain injury. Methods: We conducted a systematic review with narrative synthesis utilizing an adaptation of the methodology developed by Popay and colleagues. Following the development of the narrative that answered the central question "what does brain imaging data reveal about the receptive processing of music in children?", discussion was centered around the clinical implications of music therapy with children following acquired brain injury. Results: The narrative synthesis included 46 studies that utilized EEG, MEG, fMRI, and fNIRS scanning techniques in children aged 0-18 years. From birth, musical stimuli elicit distinct but immature electrical responses, with components of the auditory evoked response having longer latencies and variable amplitudes compared to their adult counterparts. Hemodynamic responses are observed throughout cortical and subcortical structures; however, cortical immaturity affects musical processing and the localization of function in infants and young children. The processing of complex musical stimuli continues to mature into late adolescence. Conclusion: While the ability to process fundamental musical elements is present from birth, infants and children process music more slowly and utilize different cortical areas compared to adults. Brain injury in childhood occurs in a period of rapid development, and the ability to process music following brain injury will likely depend on pre-morbid musical processing. Further, a significant brain injury may disrupt the developmental trajectory of complex music processing. However, complex music processing may emerge earlier than comparable language processing, and may engage a more globally distributed circuitry.
Affiliation(s)
- Janeen Bower
- Faculty of Fine Arts and Music, The University of Melbourne, Melbourne, VIC, Australia
- Brain and Mind, Clinical Sciences, The Murdoch Children's Research Institute, Melbourne, VIC, Australia
- Music Therapy Department, The Royal Children's Hospital Melbourne, Melbourne, VIC, Australia
- Wendy L. Magee
- Boyer College of Music and Dance, Temple University, Philadelphia, PA, United States
- Cathy Catroppa
- Brain and Mind, Clinical Sciences, The Murdoch Children's Research Institute, Melbourne, VIC, Australia
- Melbourne School of Psychological Sciences and The Department of Paediatrics, The University of Melbourne, Melbourne, VIC, Australia
- Psychology Department, The Royal Children's Hospital Melbourne, Melbourne, VIC, Australia
- Felicity Anne Baker
- Faculty of Fine Arts and Music, The University of Melbourne, Melbourne, VIC, Australia
- Centre of Research in Music and Health, Norwegian Academy of Music, Oslo, Norway

11. Musical playschool activities are linked to faster auditory development during preschool-age: a longitudinal ERP study. Sci Rep 2019;9:11310. PMID: 31383938. PMCID: PMC6683192. DOI: 10.1038/s41598-019-47467-z.
Abstract
The influence of musical experience on brain development has mostly been studied in school-aged children with formal musical training, while little is known about the possible effects of the less formal musical activities typical for preschool-aged children (i.e., before the age of seven). In the current study, we investigated whether the amount of musical group activity is reflected in the maturation of neural sound discrimination from toddler to preschool age. Specifically, we recorded event-related potentials longitudinally (84 recordings from 33 children) in a mismatch negativity (MMN) paradigm to different musically relevant sound changes at ages 2–3, 4–5 and 6–7 years from children who attended a musical playschool throughout the follow-up period and children with shorter attendance at the same playschool. In the first group, we found a gradual positive-to-negative shift in the polarities of the mismatch responses, while the second group showed little evidence of age-related changes in neural sound discrimination. The current study indicates that the maturation of sound encoding indexed by the MMN may be more protracted than once thought and provides the first longitudinal evidence that even quite informal musical group activities facilitate the development of neural sound discrimination during early childhood.

12. Varlet M, Williams R, Keller PE. Effects of pitch and tempo of auditory rhythms on spontaneous movement entrainment and stabilisation. Psychol Res 2018;84:568-584. PMID: 30116886. DOI: 10.1007/s00426-018-1074-8.
Abstract
Human movements spontaneously entrain to auditory rhythms, which can help to stabilise movements in time and space. The properties of auditory rhythms supporting the occurrence of this phenomenon, however, remain largely unclear. Here, we investigate in two experiments the effects of pitch and tempo on spontaneous movement entrainment and stabilisation. We examined spontaneous entrainment of hand-held pendulum swinging in time with low-pitched (100 Hz) and high-pitched (1600 Hz) metronomes to test whether low pitch favours movement entrainment and stabilisation. To investigate whether stimulation and movement tempi moderate these effects of pitch, we manipulated (1) participants' preferred movement tempo by varying pendulum mechanical constraints (Experiment 1) and (2) stimulation tempo, which was either equal to, or slightly slower or faster (± 10%) than the participant's preferred movement tempo (Experiment 2). The results showed that participants' movements spontaneously entrained to auditory rhythms, and that this effect was stronger with low-pitched rhythms independently of stimulation and movement tempi. Results also indicated that auditory rhythms can lead to increased movement amplitude and stabilisation of movement tempo and amplitude, particularly when low-pitched. However, stabilisation effects were found to depend on intrinsic movement variability. Auditory rhythms decreased movement variability of individuals with higher intrinsic variability but increased movement variability of individuals with lower intrinsic variability. These findings provide new insights into factors that influence auditory-motor entrainment and how they may be optimised to enhance movement efficiency.
Affiliation(s)
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
- Rohan Williams
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
- Peter E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia

13. Hutchison JL, Hubbard TL, Hubbard NA, Rypma B. Ear Advantage for Musical Location and Relative Pitch: Effects of Musical Training and Attention. Perception 2017;46:745-762. PMID: 28523983. DOI: 10.1177/0301006616684238.
Abstract
Trained musicians have been found to exhibit a right-ear advantage for high tones and a left-ear advantage for low tones. We investigated whether this right/high, left/low pattern of musical processing advantage exists in listeners with varying levels of musical experience, and whether such a pattern might be modulated by attentional strategy. A dichotic listening paradigm was used in which different melodic sequences were presented to each ear, and listeners attended to (a) the left or the right ear, or (b) the higher or the lower pitched tones. Listeners judged whether tone-to-tone transitions within each melodic sequence moved upward or downward in pitch. Only musically experienced listeners could adequately judge the direction of successive pitch transitions when attending to a specific ear; however, all listeners could judge the direction of successive pitch transitions within a high-tone stream or a low-tone stream. Overall, listeners exhibited greater accuracy when attending to relatively higher pitches, but there was no evidence to support a right/high, left/low bias. Results were consistent with effects of attentional strategy rather than with an ear advantage for high or low tones. Implications for a potential performer/audience paradox in listening space are considered.
Affiliation(s)
- Joanna L Hutchison
- Department of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Nicholas A Hubbard
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Bart Rypma
- Department of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
14
Huberth M, Fujioka T. Neural representation of a melodic motif: Effects of polyphonic contexts. Brain Cogn 2017; 111:144-155. DOI: 10.1016/j.bandc.2016.11.003.
15
Rhythm judgments reveal a frequency asymmetry in the perception and neural coding of sound synchrony. Proc Natl Acad Sci U S A 2017; 114:1201-1206. PMID: 28096408. DOI: 10.1073/pnas.1615669114.
Abstract
In modern Western music, melody is commonly conveyed by pitch changes in the highest-register voice, whereas meter or rhythm is often carried by instruments with lower pitches. An intriguing and recently suggested possibility is that the custom of assigning rhythmic functions to lower-pitch instruments may have emerged because of fundamental properties of the auditory system that result in superior time encoding for low pitches. Here we compare rhythm and synchrony perception between low- and high-frequency tones, using both behavioral and EEG techniques. Both methods were consistent in showing no superiority in time encoding for low over high frequencies. However, listeners were consistently more sensitive to timing differences between two nearly synchronous tones when the high-frequency tone followed the low-frequency tone than vice versa. The results demonstrate no superiority of low frequencies in timing judgments but reveal a robust asymmetry in the perception and neural coding of synchrony that reflects greater tolerance for delays of low- relative to high-frequency sounds than vice versa. We propose that this asymmetry exists to compensate for inherent and variable time delays in cochlear processing, as well as the acoustical properties of sound sources in the natural environment, thereby providing veridical perceptual experiences of simultaneity.
16
Daikoku T, Takahashi Y, Futagami H, Tarumoto N, Yasuda H. Physical fitness modulates incidental but not intentional statistical learning of simultaneous auditory sequences during concurrent physical exercise. Neurol Res 2016; 39:107-116. PMID: 28034012. DOI: 10.1080/01616412.2016.1273571.
Abstract
In real-world auditory environments, humans are exposed to overlapping auditory information, such as human voices and musical instruments, even during routine physical activities such as walking and cycling. The present study investigated how concurrent physical exercise affects incidental and intentional learning of overlapping auditory streams, and whether physical fitness modulates learning performance. Participants were divided into lower- and higher-fitness groups of 11 each, based on their VO2max values. They were presented with simultaneous auditory sequences, each governed by a distinct statistical regularity (i.e., statistical learning), either while pedaling on a bike or while seated on the bike at rest. In Experiment 1, they were instructed to attend to one of the two sequences and to ignore the other; in Experiment 2, they were instructed to attend to both sequences. After exposure to the sequences, learning effects were evaluated with a familiarity test. In Experiment 1, statistical learning of the ignored sequence during concurrent pedaling was better in participants with high than with low physical fitness, whereas for the attended sequence there was no significant difference between the fitness groups; there was also no significant effect of physical fitness on learning at rest. In Experiment 2, participants with both high and low physical fitness performed intentional statistical learning of the two simultaneous sequences in both the exercise and rest sessions. Improved physical fitness might thus facilitate incidental, but not intentional, statistical learning of simultaneous auditory sequences during concurrent physical exercise.
Affiliation(s)
- Tatsuya Daikoku
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Yuji Takahashi
- Faculty of Community Health Care, Teikyo Heisei University, Chiba, Japan
- Hiroko Futagami
- Faculty of Medical Sciences, Teikyo University of Science, Tokyo, Japan
- Hideki Yasuda
- Faculty of Community Health Care, Teikyo Heisei University, Chiba, Japan
17
Harris R, van Kranenburg P, de Jong BM. Behavioral Quantification of Audiomotor Transformations in Improvising and Score-Dependent Musicians. PLoS One 2016; 11:e0166033. PMID: 27835631. PMCID: PMC5105996. DOI: 10.1371/journal.pone.0166033.
Abstract
The historically developed practice of learning to play a music instrument from notes instead of by imitation or improvisation makes it possible to contrast two types of skilled musicians characterized not only by dissimilar performance practices, but also disparate methods of audiomotor learning. In a recent fMRI study comparing these two groups of musicians while they either imagined playing along with a recording or covertly assessed the quality of the performance, we observed activation of a right-hemisphere network of posterior superior parietal and dorsal premotor cortices in improvising musicians, indicating more efficient audiomotor transformation. In the present study, we investigated the detailed performance characteristics underlying the ability of both groups of musicians to replicate music on the basis of aural perception alone. Twenty-two classically-trained improvising and score-dependent musicians listened to short, unfamiliar two-part excerpts presented with headphones. They played along or replicated the excerpts by ear on a digital piano, either with or without aural feedback. In addition, they were asked to harmonize or transpose some of the excerpts either to a different key or to the relative minor. MIDI recordings of their performances were compared with recordings of the aural model. Concordance was expressed in an audiomotor alignment score computed with the help of music information retrieval algorithms. Significantly higher alignment scores were found when contrasting groups, voices, and tasks. The present study demonstrates the superior ability of improvising musicians to replicate both the pitch and rhythm of aurally perceived music at the keyboard, not only in the original key, but also in other tonalities. Taken together with the enhanced activation of the right dorsal frontoparietal network found in our previous fMRI study, these results underscore the conclusion that the practice of improvising music can be associated with enhanced audiomotor transformation in response to aurally perceived music.
Affiliation(s)
- Robert Harris
- Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- BCN Neuroimaging Center, University of Groningen, Groningen, The Netherlands
- Prince Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Bauke M. de Jong
- Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- BCN Neuroimaging Center, University of Groningen, Groningen, The Netherlands
18
Nikolsky A. Evolution of Tonal Organization in Music Optimizes Neural Mechanisms in Symbolic Encoding of Perceptual Reality. Part-2: Ancient to Seventeenth Century. Front Psychol 2016; 7:211. PMID: 27065893. PMCID: PMC4813086. DOI: 10.3389/fpsyg.2016.00211.
Abstract
This paper reveals the way in which musical pitch works as a peculiar form of cognition that reflects upon the organization of the surrounding world as perceived by the majority of music users within a socio-cultural formation. Part-1 of this paper described the origin of tonal organization from verbal speech, its progress from indefinite to definite pitch, and the emergence of two main harmonic orders: heptatonic and pentatonic, each characterized by its own method of handling tension in both domains, of tonal and social organization. Part-2, here, completes the line of historic development from Antiquity to seventeenth century. Vast archeological data is used to identify the perception of music structures that tells apart the temple/palace music of urban civilizations and the folk music of village cultures. The “mega-pitch-set” (MPS) organization is found to constitute the principal contribution of a math-based music theory to a new diatonic order. All ramifications for psychology of music are discussed in detail. “Non-octave hypermode” is identified as a peculiar homogenous type of MPS, typical for plainchant. The origin of chromaticism is thoroughly examined as an earmark of “art-music” that opposes earlier forms of folk music. The role of aesthetic emotions in formation of chromatic alteration is defined. The development of chromatic system is traced throughout history, highlighting its modern implementation in “hemiolic modes.” The connection between tonal organization in music and spatial organization in pictorial art is established in the Baroque culture, and then tracked back to prehistoric times. Both are shown to present a form of abstraction of environmental topographic schemes, and music is proposed as the primary medium for its cultivation through the concept of pitch. The comparison of stages of tonal organization and typologies of musical texture is used to define the overall course of tonal evolution. Tonal organization of pitch reflects the culture of thinking, adopted as a standard to optimize individual perception of reality within a social group in a way optimal for one's success, thereby setting the conventions of intellectual and emotional intelligence.
19
Hove MJ, Keller PE. Impaired movement timing in neurological disorders: rehabilitation and treatment strategies. Ann N Y Acad Sci 2015; 1337:111-117. PMID: 25773624. DOI: 10.1111/nyas.12615.
Abstract
Timing abnormalities have been reported in many neurological disorders, including Parkinson's disease (PD). In PD, motor-timing impairments are especially debilitating in gait. Despite impaired audiomotor synchronization, PD patients' gait improves when they walk with an auditory metronome or with music. Building on that research, we make recommendations for optimizing sensory cues to improve the efficacy of rhythmic cuing in gait rehabilitation. Adaptive rhythmic metronomes (that synchronize with the patient's walking) might be especially effective. In a recent study we showed that adaptive metronomes synchronized consistently with PD patients' footsteps without requiring attention; this improved stability and reinstated healthy gait dynamics. Other strategies could help optimize sensory cues for gait rehabilitation. Groove music strongly engages the motor system and induces movement; bass-frequency tones are associated with movement and provide strong timing cues. Thus, groove and bass-frequency pulses could deliver potent rhythmic cues. These strategies capitalize on the close neural connections between auditory and motor networks; and auditory cues are typically preferred. However, moving visual cues greatly improve visuomotor synchronization and could warrant examination in gait rehabilitation. Together, a treatment approach that employs groove, auditory, bass-frequency, and adaptive (GABA) cues could help optimize rhythmic sensory cues for treating motor and timing deficits.
Affiliation(s)
- Michael J Hove
- Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts
20
Merchant H, Grahn J, Trainor L, Rohrmeier M, Fitch WT. Finding the beat: a neural perspective across humans and non-human primates. Philos Trans R Soc Lond B Biol Sci 2015; 370:20140093. PMID: 25646516. PMCID: PMC4321134. DOI: 10.1098/rstb.2014.0093.
Abstract
Humans possess an ability to perceive and synchronize movements to the beat in music ('beat perception and synchronization'), and recent neuroscientific data have offered new insights into this beat-finding capacity at multiple neural levels. Here, we review and compare behavioural and neural data on temporal and sequential processing during beat perception and entrainment tasks in macaques (including direct neural recording and local field potential (LFP)) and humans (including fMRI, EEG and MEG). These abilities rest upon a distributed set of circuits that include the motor cortico-basal-ganglia-thalamo-cortical (mCBGT) circuit, where the supplementary motor area (SMA) and the putamen are critical cortical and subcortical nodes, respectively. In addition, a cortical loop between motor and auditory areas, connected through delta and beta oscillatory activity, is deeply involved in these behaviours, with motor regions providing the predictive timing needed for the perception of, and entrainment to, musical rhythms. The neural discharge rate and the LFP oscillatory activity in the gamma- and beta-bands in the putamen and SMA of monkeys are tuned to the duration of intervals produced during a beat synchronization-continuation task (SCT). Hence, the tempo during beat synchronization is represented by different interval-tuned cells that are activated depending on the produced interval. In addition, cells in these areas are tuned to the serial-order elements of the SCT. Thus, the underpinnings of beat synchronization are intrinsically linked to the dynamics of cell populations tuned for duration and serial order throughout the mCBGT. We suggest that a cross-species comparison of behaviours and the neural circuits supporting them sets the stage for a new generation of neurally grounded computational models for beat perception and synchronization.
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiología, UNAM, campus Juriquilla, Querétaro 76230, México
- Jessica Grahn
- Brain and Mind Institute, and Department of Psychology, University of Western Ontario, London, Ontario, Canada N6A 5B7
- Laurel Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main St. W., Hamilton, Ontario, Canada
- Martin Rohrmeier
- Department of Linguistics and Philosophy, MIT Intelligence Initiative, Cambridge, MA 02139, USA
- W Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Althanstrasse 14, Vienna 1090, Austria
21
Trainor LJ. The origins of music in auditory scene analysis and the roles of evolution and culture in musical creation. Philos Trans R Soc Lond B Biol Sci 2015; 370:20140089. PMID: 25646512. PMCID: PMC4321130. DOI: 10.1098/rstb.2014.0089.
Abstract
Whether music was an evolutionary adaptation that conferred survival advantages or a cultural creation has generated much debate. Consistent with an evolutionary hypothesis, music is unique to humans, emerges early in development and is universal across societies. However, the adaptive benefit of music is far from obvious. Music is highly flexible, generative and changes rapidly over time, consistent with a cultural creation hypothesis. In this paper, it is proposed that much of musical pitch and timing structure adapted to preexisting features of auditory processing that evolved for auditory scene analysis (ASA). Thus, music may have emerged initially as a cultural creation made possible by preexisting adaptations for ASA. However, some aspects of music, such as its emotional and social power, may have subsequently proved beneficial for survival and led to adaptations that enhanced musical behaviour. Ontogenetic and phylogenetic evidence is considered in this regard. In particular, enhanced auditory-motor pathways in humans that enable movement entrainment to music and consequent increases in social cohesion, and pathways enabling music to affect reward centres in the brain should be investigated as possible musical adaptations. It is concluded that the origins of music are complex and probably involved exaptation, cultural creation and evolutionary adaptation.
Affiliation(s)
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- McMaster Institute for Music and the Mind, McMaster University, Hamilton, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
22
Masapollo M, Polka L, Ménard L. When infants talk, infants listen: pre-babbling infants prefer listening to speech with infant vocal properties. Dev Sci 2015; 19:318-328. DOI: 10.1111/desc.12298.
Affiliation(s)
- Matthew Masapollo
- School of Communication Sciences & Disorders, McGill University, Canada
- Centre for Research on Brain, Language & Music, McGill University, Canada
- Linda Polka
- School of Communication Sciences & Disorders, McGill University, Canada
- Centre for Research on Brain, Language & Music, McGill University, Canada
- Lucie Ménard
- Centre for Research on Brain, Language & Music, McGill University, Canada
- Département de Linguistique, Université du Québec à Montréal, Canada
23
Superior time perception for lower musical pitch explains why bass-ranged instruments lay down musical rhythms. Proc Natl Acad Sci U S A 2014; 111:10383-10388. PMID: 24982142. DOI: 10.1073/pnas.1402039111.
Abstract
The auditory environment typically contains several sound sources that overlap in time, and the auditory system parses the complex sound wave into streams or voices that represent the various sound sources. Music is also often polyphonic. Interestingly, the main melody (spectral/pitch information) is most often carried by the highest-pitched voice, and the rhythm (temporal foundation) is most often laid down by the lowest-pitched voice. Previous work using electroencephalography (EEG) demonstrated that the auditory cortex encodes pitch more robustly in the higher of two simultaneous tones or melodies, and modeling work indicated that this high-voice superiority for pitch originates in the sensory periphery. Here, we investigated the neural basis of carrying rhythmic timing information in lower-pitched voices. We presented simultaneous high-pitched and low-pitched tones in an isochronous stream and occasionally presented either the higher or the lower tone 50 ms earlier than expected, while leaving the other tone at the expected time. EEG recordings revealed that mismatch negativity responses were larger for timing deviants of the lower tones, indicating better timing encoding for lower-pitched compared with higher-pitch tones at the level of auditory cortex. A behavioral motor task revealed that tapping synchronization was more influenced by the lower-pitched stream. Results from a biologically plausible model of the auditory periphery suggest that nonlinear cochlear dynamics contribute to the observed effect. The low-voice superiority effect for encoding timing explains the widespread musical practice of carrying rhythm in bass-ranged instruments and complements previously established high-voice superiority effects for pitch and melody.
24
Marie C, Trainor LJ. Early development of polyphonic sound encoding and the high voice superiority effect. Neuropsychologia 2014; 57:50-58. PMID: 24613759. DOI: 10.1016/j.neuropsychologia.2014.02.023.
Abstract
Previous research suggests that when two streams of pitched tones are presented simultaneously, adults process each stream in a separate memory trace, as reflected by mismatch negativity (MMN), a component of the event-related potential (ERP). Furthermore, a superior encoding of the higher tone or voice in polyphonic sounds has been found for 7-month-old infants and both musician and non-musician adults in terms of a larger amplitude MMN in response to pitch deviant stimuli in the higher than the lower voice. These results, in conjunction with modeling work, suggest that the high voice superiority effect might originate in characteristics of the peripheral auditory system. If this is the case, the high voice superiority effect should be present in infants younger than 7 months. In the present study we tested 3-month-old infants as there is no evidence at this age of perceptual narrowing or specialization of musical processing according to the pitch or rhythmic structure of music experienced in the infant's environment. We presented two simultaneous streams of tones (high and low) with 50% of trials modified by 1 semitone (up or down), either on the higher or the lower tone, leaving 50% standard trials. Results indicate that like the 7-month-olds, 3-month-old infants process each tone in a separate memory trace and show greater saliency for the higher tone. Although MMN was smaller and later in both voices for the group of sixteen 3-month-olds compared to the group of sixteen 7-month-olds, the size of the difference in MMN for the high compared to low voice was similar across ages. These results support the hypothesis of an innate peripheral origin of the high voice superiority effect.
Affiliation(s)
- Céline Marie
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1; McMaster Institute for Music and the Mind, Hamilton, Ontario, Canada
- Laurel J Trainor
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1; McMaster Institute for Music and the Mind, Hamilton, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada.
25
Kim CH, Lee S, Kim JS, Seol J, Yi SW, Chung CK. Melody effects on ERANm elicited by harmonic irregularity in musical syntax. Brain Res 2014; 1560:36-45. PMID: 24607297. DOI: 10.1016/j.brainres.2014.02.045.
Abstract
Recent studies have reported that early right anterior negativity (ERAN) and its magnetic counterpart (ERANm) are evoked by harmonic irregularity in Western tonal music; however, those studies did not control for differences of melody. Because melody and harmony have an interdependent relationship, and because melody (in this study represented by the highest voice part) in a chord sequence may dominate, there is controversy over whether ERAN (or ERANm) changes arise from melody or harmony differences. To separate the effects of melody differences and harmonic irregularity on ERANm responses, we designed two magnetoencephalography experiments and a behavioral test. Participants were presented with three types of chord progression sequences (Expected, Intermediate, and Unexpected) with different harmonic regularities, in which melody differences were or were not controlled. In the uncontrolled melody difference experiment, the Unexpected chord elicited a significantly larger ERANm than the other conditions, but in the controlled melody difference experiment, the amplitude of the ERANm peak did not differ among the three conditions. However, ERANm peak latency was delayed relative to that in the uncontrolled melody difference experiment. The behavioral results also differed between the two experiments, even though harmonic irregularity was discriminated in the uncontrolled melody difference experiment. In conclusion, our analysis reveals that there is a relationship between the effects of harmony and melody on ERANm. Hence, we suggest that a melody difference in a chord progression is largely responsible for the observed changes in ERANm, reaffirming that melody plays an important role in the processing of musical syntax.
Affiliation(s)
- Chan Hee Kim
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Republic of Korea; MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea
- Sojin Lee
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Republic of Korea; MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea
- June Sic Kim
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea; Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea; Sensory Organ Research Institute, Seoul National University, Seoul, Republic of Korea
- Jaeho Seol
- Imaging Language Group, Brain Research Unit, O. V. Lounasmaa Laboratory, Aalto University School of Science, FI-00076 Aalto, Finland
- Suk Won Yi
- Department of Music, The Graduate School Seoul National University, Seoul, Republic of Korea; Western Music Research Institute, Seoul National University, Seoul, Republic of Korea
- Chun Kee Chung
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Republic of Korea; MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea; Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea; Interdisciplinary Program in Cognitive Science, Seoul National University College of Humanities, Seoul, Republic of Korea; Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Republic of Korea.
26
Barrett KC, Ashley R, Strait DL, Kraus N. Art and science: how musical training shapes the brain. Front Psychol 2013; 4:713. PMID: 24137142. PMCID: PMC3797461. DOI: 10.3389/fpsyg.2013.00713.
Abstract
What makes a musician? In this review, we discuss innate and experience-dependent factors that mold the musician brain in addition to presenting new data in children that indicate that some neural enhancements in musicians unfold with continued training over development. We begin by addressing effects of training on musical expertise, presenting neural, perceptual, and cognitive evidence to support the claim that musicians are shaped by their musical training regimes. For example, many musician-advantages in the neural encoding of sound, auditory perception, and auditory-cognitive skills correlate with their extent of musical training, are not observed in young children just initiating musical training, and differ based on the type of training pursued. Even amidst innate characteristics that contribute to the biological building blocks that make up the musician, musicians demonstrate further training-related enhancements through extensive education and practice. We conclude by reviewing evidence from neurobiological and epigenetic approaches to frame biological markers of musicianship in the context of interactions between genetic and experience-related factors.
Affiliation(s)
- Karen Chan Barrett
- Auditory Neuroscience Laboratory, Department of Communication Science and Disorders, Northwestern University, Evanston, IL, USA
- Program in Music Theory and Cognition, Bienen School of Music, Northwestern University, Evanston, IL, USA
- Music Cognition Laboratory, Program in Music Theory and Cognition, Bienen School of Music, Northwestern University, Evanston, IL, USA
27
Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models. Hear Res 2013; 308:60-70. PMID: 23916754. DOI: 10.1016/j.heares.2013.07.014.
Abstract
Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal auditory nerve coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human ERP and psychophysical music listening studies.
Collapse
|
28
|
Butler BE, Trainor LJ. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation. Front Psychol 2012; 3:180. [PMID: 22740836 PMCID: PMC3382913 DOI: 10.3389/fpsyg.2012.00180] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2011] [Accepted: 05/16/2012] [Indexed: 11/15/2022] Open
Abstract
Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch.
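The iterated rippled noise (IRN) stimuli described in this abstract are conventionally generated with a delay-and-add network: white noise is delayed by d samples, scaled, added back onto itself, and the process is repeated, imposing a pitch near 1/d Hz while the signal remains noise-like. A minimal sketch of that standard construction (the sampling rate, delay, gain, and iteration count below are illustrative choices, not the parameters used in the study; the high-pass filtering step applied to the actual stimuli is omitted):

```python
import numpy as np

def iterated_rippled_noise(noise, delay_samples, gain, iterations):
    """Delay-and-add IRN generator: each iteration adds a delayed,
    scaled copy of the current signal back onto itself, which builds
    temporal regularity (a pitch at fs / delay_samples) into the noise."""
    out = noise.copy()
    for _ in range(iterations):
        delayed = np.zeros_like(out)
        delayed[delay_samples:] = out[:-delay_samples]  # shift by d samples
        out = out + gain * delayed
    return out

fs = 44100                                   # illustrative sampling rate (Hz)
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)              # 1 s of Gaussian white noise
d = fs // 200                                # delay for a ~200 Hz pitch
irn = iterated_rippled_noise(noise, d, gain=1.0, iterations=16)
```

Increasing the number of iterations strengthens the temporal regularity (visible as a peak in the waveform autocorrelation at lag d) and hence the pitch salience, which is how IRN pitch strength is typically manipulated and matched against harmonic stimuli.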
Collapse
Affiliation(s)
- Blake E Butler
- Department of Psychology, Neuroscience and Behaviour, McMaster University Hamilton, ON, Canada
Collapse
|