26
Liu W, Zhang C, Wang X, Xu J, Chang Y, Ristaniemi T, Cong F. Functional connectivity of major depression disorder using ongoing EEG during music perception. Clin Neurophysiol 2020; 131:2413-2422. [PMID: 32828045] [DOI: 10.1016/j.clinph.2020.06.031]
Abstract
OBJECTIVE The functional connectivity (FC) of major depression disorder (MDD) has not been well studied under naturalistic and continuous stimulation. In this study, we investigated the frequency-specific FC of MDD patients during music perception using ongoing electroencephalogram (EEG). METHODS First, we applied the phase lag index (PLI) method to calculate the connectivity matrices and graph theory-based methods to measure the topology of brain networks across different frequency bands. Then, classification methods were adopted to identify the most discriminative frequency band for the diagnosis of MDD. RESULTS During music perception, MDD patients exhibited a decreased connectivity pattern in the delta band but an increased connectivity pattern in the beta band. Healthy people showed a left-hemisphere-dominant pattern, but MDD patients showed no such lateralized effect. A support vector machine (SVM) achieved the best classification performance in the beta frequency band, with an accuracy of 89.7%, sensitivity of 89.4% and specificity of 89.9%. CONCLUSIONS MDD patients exhibited altered FC in the delta and beta bands, and the beta band was superior for the diagnosis of MDD. SIGNIFICANCE Our study provides a promising reference for the diagnosis of MDD and reveals a new perspective on the topology of MDD brain networks during music perception.
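The phase lag index named in this abstract can be computed from instantaneous phases obtained via the Hilbert transform. A minimal sketch follows; the study's full pipeline (band-filtering, all-channel connectivity matrices, graph metrics, SVM classification) is not reproduced, and the toy signals below are invented for illustration:

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """Phase lag index: |mean over time of sign(sin(dphi))|, where dphi is
    the instantaneous phase difference between two signals. Ranges from 0
    (no consistent phase lead/lag) to 1 (fully asymmetric phase relation)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(dphi))))

# Toy check: two noisy 10 Hz signals with a fixed quarter-cycle lag
# should yield a PLI close to 1.
t = np.linspace(0, 10, 5000, endpoint=False)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 10 * t - np.pi / 2) + 0.1 * rng.standard_normal(t.size)
print(phase_lag_index(a, b))  # close to 1 for a consistent lag
```

In practice this is computed per frequency band and per channel pair to fill the connectivity matrix on which the graph metrics operate.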
27
Yüksel M, Çiprut A. Music and psychoacoustic perception abilities in cochlear implant users with auditory neuropathy spectrum disorder. Int J Pediatr Otorhinolaryngol 2020; 131:109865. [PMID: 31945735] [DOI: 10.1016/j.ijporl.2020.109865]
Abstract
OBJECTIVE Auditory neuropathy spectrum disorder (ANSD) is a condition wherein pre-neural (cochlear outer hair cell) activity is intact but neural activity in the auditory nerve is disrupted. Cochlear implants (CIs) can benefit subjects with ANSD; however, little is known about the music perception and psychoacoustic abilities of CI users with ANSD. Music perception in CI users is a multidimensional and complex ability requiring the contribution of both auditory and non-auditory abilities. Even though auditory abilities lay the foundation, patient-related variables such as ANSD may affect music perception. This study aimed to evaluate the psychoacoustic and music perception abilities of CI recipients with ANSD. STUDY DESIGN Twelve CI users with ANSD and twelve age- and gender-matched CI users with sensorineural hearing loss (SNHL) were evaluated. Music perception abilities were measured using the Turkish version of the Clinical Assessment of Music Perception (T-CAMP) test. Psychoacoustic abilities were measured using the spectral ripple discrimination (SRD) and temporal modulation transfer function (TMTF) tests. In addition, the age at diagnosis and at implantation was recorded. RESULTS Pitch direction discrimination (PDD), timbre recognition, SRD, and TMTF performance of CI users with ANSD were concordant with those reported in previous studies, and differences between the ANSD and SNHL groups were not statistically significant. However, the ANSD group performed significantly worse than the SNHL group on the melody recognition subtest of the T-CAMP. CONCLUSION CIs can benefit patients with ANSD with respect to music and psychoacoustic abilities, similarly to patients with SNHL, except for melody recognition. Recognition of melodies requires both auditory and non-auditory abilities, and ANSD may have an extensive but subtle effect on the lives of CI users.
28
Abstract
Control of stimulus confounds is an ever-present, and ever-important, aspect of experimental design. Typically, researchers concern themselves with such control on a local level, ensuring that individual stimuli contain only the properties they intend for them to represent. Significantly less attention, however, is paid to stimulus properties in the aggregate, aspects that, although not present in individual stimuli, can nevertheless become emergent properties of the stimulus set when viewed in total. This paper describes two examples of such effects. The first (Case Study 1) focuses on emergent properties of pairs of to-be-performed tones on a piano keyboard, and the second (Case Study 2) focuses on emergent properties of short, atonal melodies in a perception/memory task. In both cases these sets of stimuli induced identifiable tonal influences despite being explicitly created to be devoid of musical tonality. These results highlight the importance of monitoring aggregate stimulus properties in one's research, and are discussed with reference to their implications for interpreting psychological findings quite generally.
29
Abstract
Perception of sounds occurs in the context of surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, categorization of the later sounds becomes biased through spectral contrast effects (SCEs). Past research has shown SCEs to bias categorization of speech and music alike. Recent studies have extended SCEs to naturalistic listening conditions, where the inherent spectral composition of (unfiltered) sentences biased speech categorization. Here, we tested whether natural (unfiltered) music would similarly bias categorization of French horn and tenor saxophone targets. Preceding contexts were either solo performances of the French horn or tenor saxophone (unfiltered; one second in Experiment 1, three seconds in Experiment 2) or a string quintet processed to emphasize frequencies in the horn or saxophone (filtered; one second). Both approaches produced SCEs, eliciting more "saxophone" responses following horn or horn-like contexts and vice versa. One-second filtered contexts produced SCEs as in previous studies, but one-second unfiltered contexts did not. Three-second unfiltered contexts biased perception, but to a lesser degree than filtered contexts did. These results extend SCEs in musical instrument categorization to everyday listening conditions.
30
Kaneshiro B, Nguyen DT, Norcia AM, Dmochowski JP, Berger J. Natural music evokes correlated EEG responses reflecting temporal structure and beat. Neuroimage 2020; 214:116559. [PMID: 31978543] [DOI: 10.1016/j.neuroimage.2020.116559]
Abstract
The brain activity of multiple subjects has been shown to synchronize during salient moments of natural stimuli, suggesting that correlation of neural responses indexes a brain state operationally termed 'engagement'. While past electroencephalography (EEG) studies have considered both auditory and visual stimuli, the extent to which these results generalize to music, a temporally structured stimulus for which the brain has evolved specialized circuitry, is less understood. Here we investigated neural correlation during natural music listening by recording EEG responses from N = 48 adult listeners as they heard real-world musical works, some of which were temporally disrupted through shuffling of short-term segments (measures), reversal, or randomization of phase spectra. We measured correlation between multiple neural responses (inter-subject correlation) and between neural responses and stimulus envelope fluctuations (stimulus-response correlation) in the time and frequency domains. Stimuli retaining basic musical features, such as rhythm and melody, elicited significantly higher behavioral ratings and neural correlation than did phase-scrambled controls. However, while unedited songs were self-reported as most pleasant, time-domain correlations were highest during measure-shuffled versions. Frequency-domain measures of correlation (coherence) peaked at frequencies related to the musical beat, although the magnitudes of these spectral peaks did not explain the observed temporal correlations. Our findings show that natural music evokes significant inter-subject and stimulus-response correlations, and suggest that the neural correlates of musical 'engagement' may be distinct from those of enjoyment.
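In its simplest single-channel form, inter-subject correlation is just the mean pairwise Pearson correlation of listeners' response time courses. A toy sketch follows; the paper itself used correlated-components analysis on multichannel EEG, and the signal/noise mixture below is invented to make the idea concrete:

```python
import numpy as np

def inter_subject_correlation(responses):
    """Mean pairwise Pearson correlation across subjects.

    responses: array of shape (n_subjects, n_samples), one response
    time course per listener. A single-channel simplification of the
    component-based ISC used with multichannel EEG."""
    n = responses.shape[0]
    r = np.corrcoef(responses)          # (n, n) correlation matrix
    return r[np.triu_indices(n, k=1)].mean()  # average distinct pairs

# Simulated listeners: a shared stimulus-driven signal plus
# idiosyncratic noise per subject yields a high ISC.
rng = np.random.default_rng(1)
stimulus_driven = rng.standard_normal(2000)        # shared component
noise = rng.standard_normal((8, 2000))             # per-subject noise
subjects = stimulus_driven + 0.5 * noise
print(inter_subject_correlation(subjects))
```

Scrambling or removing the shared stimulus-driven component (as the phase-scrambled controls do) drives this statistic toward zero.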
31
Ben-Nathan M, Salti M, Algom D. The many faces of music: Attending to music and delight in the same music are governed by different rules of processing. Acta Psychol (Amst) 2019; 200:102949. [PMID: 31675619] [DOI: 10.1016/j.actpsy.2019.102949]
Abstract
Music generates manifold experiences in humans, some perceptual and some hedonic. Are these qualia governed by the same principles of processing? In particular, do the loudness and timbre of melodies combine to produce perception and likeability by the same rules of integration? In Experiment 1, we tested selective attention to loudness and timbre by applying Garner's speeded classification paradigm and found both to be perceptually integral dimensions. In Experiment 2, we tested liking for the same music by applying Norman Anderson's functional measurement model and found loudness and timbre to combine by an adding-type rule. In Experiment 3, we applied functional measurement to perception and found loudness and timbre to interact as in Experiment 1. These results show that people cannot, or do not, selectively attend to or separately perceive any single component of music, yet they can isolate the components when they enjoy (or fail to enjoy) listening to music. We conclude that perception of the constituent components of a musical piece and the processing of the same components for liking are governed by different rules.
32
Perception of musical pitch in developmental prosopagnosia. Neuropsychologia 2019; 124:87-97. [PMID: 30625291] [DOI: 10.1016/j.neuropsychologia.2018.12.022]
Abstract
Studies of developmental prosopagnosia have often shown that the condition differentially affects human face processing over non-face object processing. However, little consideration has been given to whether it is associated with perceptual or sensorimotor impairments in other modalities. Comorbidities have played a role in theories of other developmental disorders, such as dyslexia, but studies of developmental prosopagnosia have typically focused on the nature of the visual recognition impairment despite evidence for widespread neural anomalies that might affect other sensorimotor systems. We studied 12 subjects with developmental prosopagnosia using a battery of auditory tests evaluating pitch and rhythm processing as well as voice perception and recognition. Overall, three subjects were impaired in fine pitch discrimination, a prevalence of 25% that is higher than the estimated 4% prevalence of congenital amusia in the general population. This was a selective deficit, as rhythm perception was unaffected in all 12 subjects. Furthermore, two of the three prosopagnosic subjects impaired in pitch discrimination had intact voice perception and recognition, while two of the remaining nine subjects had impaired voice recognition but intact pitch perception. These results indicate that, in some subjects with developmental prosopagnosia, the face recognition deficit is not an isolated impairment but is associated with deficits in other domains, such as auditory perception. These deficits may form part of a broader syndrome, which could be due to distributed microstructural anomalies in various brain networks, possibly with a common theme of right-hemispheric predominance.
33
Whiteford KL, Oxenham AJ. Learning for pitch and melody discrimination in congenital amusia. Cortex 2018; 103:164-178. [PMID: 29655041] [PMCID: PMC5988957] [DOI: 10.1016/j.cortex.2018.03.012]
Abstract
Congenital amusia is currently thought to be a life-long neurogenetic disorder of music perception, impervious to training in pitch or melody discrimination. This study provides an explicit test of whether amusic deficits can be reduced with training. Twenty amusics and 20 matched controls completed four sessions of psychophysical training on either pure-tone (500 Hz) pitch discrimination or a control task of lateralization (interaural level differences for bandpass white noise). Pure-tone pitch discrimination at low, medium, and high frequencies (500, 2000, and 8000 Hz) was measured before and after training (pretest and posttest) to determine the specificity of learning. Melody discrimination was also assessed before and after training using the full Montreal Battery of Evaluation of Amusia, the most widely used standardized test for diagnosing amusia. Amusics performed more poorly than controls on pitch discrimination but not on localization, and both groups improved with practice on the trained stimuli. Learning was broad, occurring across all three frequencies and in melody discrimination for all groups, including those who trained on the non-pitch control task. Following training, 11 of 20 amusics no longer met the global diagnostic criteria for amusia. A separate group of untrained controls (n = 20), who also completed the melody discrimination task and pretest, improved by the same amount as trained controls on all measures, suggesting that the bulk of learning in the control group occurred very rapidly from the pretest alone. Thirty-one trained participants (13 amusics) returned one year later to assess long-term maintenance of pitch and melody discrimination. On average, performance did not change between posttest and the one-year follow-up, demonstrating that improvements on pitch- and melody-related tasks can be maintained in both amusics and controls. The findings indicate that amusia is not always a life-long deficit under the current standard diagnostic criteria.
34
Romero-Rivas C, Vera-Constán F, Rodríguez-Cuadrado S, Puigcerver L, Fernández-Prieto I, Navarra J. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli. Neuropsychologia 2018; 117:67-74. [PMID: 29753020] [DOI: 10.1016/j.neuropsychologia.2018.05.009]
Abstract
Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well known, the mechanisms underlying its mental representation remain elusive. We provide evidence for the importance of previous experience with melodies in the emergence of crossmodal interactions, and we examine the impact of these interactions on other perceptual and attentional processes. Melodies including two tones of different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, participants judged the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were recorded. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events.
35
Walton AE, Langland-Hassan P, Chemero A, Kloos H, Richardson MJ. Creating Time: Social Collaboration in Music Improvisation. Top Cogn Sci 2018; 10:95-119. [PMID: 29152904] [PMCID: PMC5939966] [DOI: 10.1111/tops.12306]
Abstract
Musical collaboration emerges from the complex interaction of environmental and informational constraints, including those of the instruments and the performance context. Music improvisation in particular is more like everyday interaction in that dynamics emerge spontaneously without a rehearsed score or script. We examined how the structure of the musical context affords and shapes interactions between improvising musicians. Six pairs of professional piano players improvised with two different backing tracks while we recorded both the music produced and the movements of their heads, left arms, and right arms. The backing tracks varied in rhythmic and harmonic information, from a chord progression to a continuous drone. Differences in movement coordination and playing behavior were evaluated using the mathematical tools of complex dynamical systems, with the aim of uncovering the multiscale dynamics that characterize musical collaboration. Collectively, the findings indicated that each backing track afforded the emergence of different patterns of coordination with respect to how the musicians played together, how they moved together, as well as their experience collaborating with each other. Additionally, listeners' experiences of the music when rating audio recordings of the improvised performances were related to the way the musicians coordinated both their playing behavior and their bodily movements. Accordingly, the study revealed how complex dynamical systems methods (namely recurrence analysis) can capture the turn-taking dynamics that characterized both the social exchange of the music improvisation and the sounds of collaboration more generally. The study also demonstrated how musical improvisation provides a way of understanding how social interaction emerges from the structure of the behavioral task context.
36
Zhang J, Yang T, Bao Y, Li H, Pöppel E, Silveira S. Sadness and happiness are amplified in solitary listening to music. Cogn Process 2017; 19:133-139. [PMID: 28986700] [DOI: 10.1007/s10339-017-0832-7]
Abstract
Previous studies have shown that music is a powerful means of conveying affective states, but it remains unclear whether and how social context shapes the intensity and quality of emotions perceived in music. Using a within-subject design, we studied this question in two experimental settings: subjects listened either alone or in the company of others, without direct social interaction or feedback. Non-vocal musical excerpts conveying happiness or sadness were rated on arousal and valence dimensions. We found evidence for an amplification of perceived emotion in the solitary listening condition: happy music was rated as happier and more arousing when nobody else was around and, analogously, sad music was perceived as sadder. This difference might be explained by a shift of attention in the presence of others. The observed interaction of perceived emotion and social context did not differ for stimuli of different cultural origin.
37
Standard-interval size affects interval-discrimination thresholds for pure-tone melodic pitch intervals. Hear Res 2017; 355:64-69. [PMID: 28935162] [DOI: 10.1016/j.heares.2017.09.008]
Abstract
Our ability to discriminate between pitch intervals of different sizes is not only an important aspect of speech and music perception, but also a useful means of evaluating higher-level pitch perception. The current study examined how pitch-interval discrimination is affected by the size of the intervals being compared and by musical training. Using an adaptive procedure, pitch-interval discrimination thresholds were measured for sequentially presented pure-tone intervals with standard intervals of 1 semitone (minor second), 6 semitones (tritone), and 7 semitones (perfect fifth). Listeners were classified into three groups based on musical experience: non-musicians had less than 3 years of informal musical experience; amateur musicians had at least 10 years of experience but no formal music theory training; and expert musicians had at least 12 years of experience, including at least 1 year of formal ear training, and were pursuing or had earned a Bachelor's degree as a music major or minor. Consistent with previous studies, discrimination thresholds obtained from expert musicians were significantly lower than those from other listeners. Thresholds also varied significantly with the magnitude of the standard interval and were higher for conditions with a 6- or 7-semitone standard than with a 1-semitone standard. These data show that interval-discrimination thresholds are strongly affected by the size of the standard interval.
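In equal temperament, an interval of n semitones corresponds to a frequency ratio of 2^(n/12), which is how the semitone sizes above map onto pure-tone frequencies. A small sketch, using an arbitrary 440 Hz reference tone (the study's actual reference frequencies are not given here):

```python
def interval_frequency(f_ref, semitones):
    """Frequency `semitones` above f_ref in equal temperament:
    f = f_ref * 2**(semitones / 12)."""
    return f_ref * 2.0 ** (semitones / 12.0)

# The three standard intervals from the study, built on an
# illustrative 440 Hz reference tone:
minor_second  = interval_frequency(440.0, 1)   # ~466.2 Hz
tritone       = interval_frequency(440.0, 6)   # = 440 * sqrt(2), ~622.3 Hz
perfect_fifth = interval_frequency(440.0, 7)   # ~659.3 Hz
```

Because the scale is logarithmic, the 6- and 7-semitone standards span much larger frequency ratios than the 1-semitone standard, which is the manipulation the thresholds were measured against.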
38
Mao Y, Yang J, Hahn E, Xu L. Auditory perceptual efficacy of nonlinear frequency compression used in hearing aids: A review. J Otol 2017; 12:97-111. [PMID: 29937844] [PMCID: PMC5963461] [DOI: 10.1016/j.joto.2017.06.003]
Abstract
Many patients with sensorineural hearing loss have a precipitous high-frequency loss with relatively good thresholds in the low frequencies. This paper briefly introduces and compares the basic principles of four types of frequency-lowering algorithms, with emphasis on nonlinear frequency compression (NLFC), and then reviews the effects of the NLFC algorithm on speech and music perception and on sound quality appraisal. For vowel perception, the benefits provided by NLFC appear limited, probably owing to the parameter settings of the compression. For consonant perception, several studies have shown that NLFC improves perception of high-frequency consonants such as /s/ and /z/, although a few other studies have reported negative results. In terms of sentence recognition, persistent use of NLFC might improve performance. Compared to conventional processing, NLFC does not alter speech sound quality appraisal or music perception as long as the compression setting is not too aggressive. Factors relevant to NLFC settings, the time course of acclimatization, listener characteristics, and perceptual tasks are then discussed. Although the literature shows mixed results on the perceptual efficacy of NLFC, this technique can improve certain aspects of speech understanding in some hearing-impaired listeners. Little research is available on speech perception outcomes in languages other than English, and more clinical data are needed to verify the perceptual efficacy of NLFC in patients with precipitous high-frequency hearing loss. Such knowledge will help guide the clinical rehabilitation of these patients.
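A common formulation of nonlinear frequency compression leaves frequencies below a cutoff unchanged and compresses those above it in the log-frequency domain. The sketch below illustrates that mapping; the cutoff and compression ratio are illustrative values only, not settings from any device or study reviewed here:

```python
def nlfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Nonlinear frequency compression (illustrative formulation):
    frequencies at or below `cutoff` pass through unchanged; above it,
    the distance from the cutoff is compressed in the log domain, i.e.
    log(f_out / cutoff) = log(f_in / cutoff) / ratio."""
    if f_in <= cutoff:
        return f_in
    return cutoff * (f_in / cutoff) ** (1.0 / ratio)

# An 8 kHz component (e.g., /s/ frication energy) is remapped into a
# region where a steeply sloping loss may still have usable hearing.
print(nlfc_map(8000.0))
```

A more aggressive setting (lower cutoff or higher ratio) moves more high-frequency energy down, which is the trade-off the review links to degraded sound quality and music perception.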
39
Liu L, Kager R. Enhanced music sensitivity in 9-month-old bilingual infants. Cogn Process 2017; 18:55-65. [PMID: 27817073] [PMCID: PMC5306126] [DOI: 10.1007/s10339-016-0780-7]
Abstract
This study explores the influence of bilingualism on the cognitive processing of language and music. Specifically, we investigate how infants learning a non-tone language perceive linguistic and musical pitch, and how bilingualism affects cross-domain pitch perception. Dutch monolingual and bilingual infants of 8-9 months participated in the study. All infants had Dutch as one of their first languages; the other first languages, which varied among the bilingual families, were not tone or pitch-accent languages. In two experiments, infants were tested on the discrimination of a lexical (N = 42) or a violin (N = 48) pitch contrast via a visual habituation paradigm. The two contrasts shared identical pitch contours but differed in timbre. The non-tone-language-learning infants did not discriminate the lexical contrast regardless of their ambient language environment. When perceiving the violin contrast, bilingual but not monolingual infants demonstrated robust discrimination. We attribute the bilingual infants' heightened sensitivity in the musical domain to enhanced acoustic sensitivity stemming from a bilingual environment. The distinct perceptual patterns between language and music, and the influence of acoustic salience on perception, suggest processing divergence and association in the first year of life. The results indicate that the perception of music may engage both a neural network shared with language processing and a unique network distinct from other cognitive functions.
40
Rosemann S, Brunner F, Kastrup A, Fahle M. Musical, visual and cognitive deficits after middle cerebral artery infarction. eNeurologicalSci 2016; 6:25-32. [PMID: 29260010] [PMCID: PMC5721573] [DOI: 10.1016/j.ensci.2016.11.006]
Abstract
The perception of music can be impaired after a stroke. This dysfunction is called amusia, and amusia patients often also show deficits in visual abilities, language, memory, learning, and attention. The current study investigated whether deficits in music perception are selective for musical input or generalize to other perceptual abilities. Additionally, we tested the hypothesis that deficits in working memory or attention account for impairments in music perception. Twenty stroke patients with small infarctions in the supply area of the middle cerebral artery were assessed with tests of music and visual perception, categorization, neglect, working memory and attention. Two amusia patients with selective deficits in music perception and pronounced lesions were identified. Working memory and attention deficits were highly correlated across the patient group, but no correlation with musical abilities was found. Lesion analysis revealed that lesions in small areas of the putamen and globus pallidus were associated with a rhythm perception deficit. We conclude that neither a general perceptual deficit nor a minor domain-general deficit can account for the impairments in the music perception task. Instead, we find support for a modular organization of the music perception network, with brain areas specialized for musical functions, as musical deficits were not correlated with any other impairment.
41
Zhang J, Jiang C, Zhou L, Yang Y. Perception of hierarchical boundaries in music and its modulation by expertise. Neuropsychologia 2016; 91:490-498. [PMID: 27659874] [DOI: 10.1016/j.neuropsychologia.2016.09.013]
Abstract
Hierarchical structure with units of different timescales is a key feature of music. For the perception of such structures, the detection of each boundary is crucial. Here, using electroencephalography (EEG), we explored the perception of hierarchical boundaries in music and tested whether musical expertise modifies such processing. Musicians and non-musicians were presented with musical excerpts containing boundaries at three hierarchical levels: section, phrase and period boundaries. Non-boundary passages served as a baseline condition. In musicians, a closure positive shift (CPS) was evoked at all three boundaries, and its amplitude increased with hierarchical level, suggesting that musicians represent musical events at different timescales in a hierarchical way. In non-musicians, the CPS was elicited only at the period boundary, and indistinguishable negativities were induced at all three boundaries, indicating a different and less differentiated way of perceiving boundaries. Our findings reveal, for the first time, an ERP correlate of perceiving hierarchical boundaries in music, and show that phrasing ability can be enhanced by musical expertise.
42
Keller I, Garbacenkaite R. Neurofeedback in three patients in the state of unresponsive wakefulness. Appl Psychophysiol Biofeedback 2016; 40:349-356. [PMID: 26159769] [DOI: 10.1007/s10484-015-9296-7]
Abstract
Some severely brain-injured patients remain unresponsive, showing only reflex movements and no response to command. This syndrome has been named unresponsive wakefulness syndrome (UWS). The objective of the present study was to determine whether UWS patients are able to alter their brain activity using a neurofeedback (NFB) technique. A small sample of three patients received a daily session of NFB for 3 weeks. We used the ratio of theta and beta amplitudes as the feedback variable. Using an automatic threshold function, patients heard their favourite music whenever their theta/beta ratio dropped below the threshold. Changes in awareness were assessed weekly with the JFK Coma Recovery Scale-Revised during each treatment week, as well as 3 weeks before and after NFB. Two patients showed a decrease in their theta/beta ratio and theta amplitudes during this period. The third patient showed no systematic changes in EEG activity. The results of our study provide the first evidence that NFB can be used in patients in a state of unresponsive wakefulness.
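The feedback variable here is a theta/beta ratio compared against a threshold. A rough offline sketch follows; the study used amplitude ratios with an automatically adapting threshold, whereas this sketch substitutes Welch power estimates, a fixed threshold, and an assumed 250 Hz sampling rate, all purely for illustration:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz (illustrative, not from the paper)

def band_power(x, fs, lo, hi):
    """Total Welch-estimated power within the [lo, hi] Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    return pxx[(f >= lo) & (f <= hi)].sum()

def theta_beta_ratio(x, fs=FS):
    """Theta (4-8 Hz) power divided by beta (13-30 Hz) power."""
    return band_power(x, fs, 4, 8) / band_power(x, fs, 13, 30)

# Simulated 8 s epoch: weak theta (6 Hz), strong beta (20 Hz), plus noise,
# so the ratio falls below threshold and the music reward would play.
rng = np.random.default_rng(2)
t = np.arange(0, 8, 1 / FS)
eeg = (np.sin(2 * np.pi * 6 * t)
       + 2.0 * np.sin(2 * np.pi * 20 * t)
       + 0.2 * rng.standard_normal(t.size))
THRESHOLD = 1.0  # fixed stand-in for the study's automatic threshold
play_music = theta_beta_ratio(eeg) < THRESHOLD
```

In the actual protocol this comparison runs online, epoch by epoch, with the music switched on whenever the ratio stays below the adaptive threshold.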
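The feedback computation in the study above (play the patient's favourite music whenever the theta/beta amplitude ratio drops below a threshold) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the band limits (theta 4-8 Hz, beta 13-30 Hz), sampling rate, and threshold value are assumptions for the example.

```python
import numpy as np

def band_amplitude(signal, fs, low, high):
    # Summed spectral magnitude in [low, high) Hz, computed via the FFT.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[(freqs >= low) & (freqs < high)].sum()

def theta_beta_ratio(signal, fs):
    theta = band_amplitude(signal, fs, 4.0, 8.0)   # assumed theta band: 4-8 Hz
    beta = band_amplitude(signal, fs, 13.0, 30.0)  # assumed beta band: 13-30 Hz
    return theta / beta

# Synthetic 2-second EEG epoch at 250 Hz: a weak 6 Hz (theta) component on top
# of a strong 20 Hz (beta) rhythm, so the ratio should fall below threshold.
fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
epoch = 0.3 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)

ratio = theta_beta_ratio(epoch, fs)
play_music = ratio < 1.0  # feedback event: reward when the ratio is under threshold
```

In an actual NFB loop this computation would run on successive short epochs, with the threshold adapted automatically as described in the abstract.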
|
43
|
Calvino M, Gavilán J, Sánchez-Cuadrado I, Pérez-Mora RM, Muñoz E, Díez-Sebastián J, Lassaletta L. Using the HISQUI29 to assess the sound quality levels of Spanish adults with unilateral cochlear implants and no contralateral hearing. Eur Arch Otorhinolaryngol 2015; 273:2343-53. [PMID: 26440105 DOI: 10.1007/s00405-015-3789-0]
Abstract
To evaluate cochlear implant (CI) users' self-reported level of sound quality and quality of life (QoL). Sound quality was self-evaluated using the Hearing Implant Sound Quality Index (HISQUI29) and QoL using the Glasgow Benefit Inventory (GBI); scores on each instrument were further examined in three subsets. Possible correlations between the HISQUI29 and GBI were explored, as were correlations between these scores and subjects' pure-tone averages, speech perception scores, age at implantation, duration of hearing loss, duration of CI use, gender, and implant type. Subjects derived a "moderate" sound quality level from their CI. Television, radio, and telephone tasks were easier in quiet than in background noise, and 89 % of subjects reported that their QoL benefited from having a CI. Mean total HISQUI29 score significantly correlated with all subcategories of the GBI. Age at implantation inversely correlated with the total HISQUI29 score and with television and radio understanding. Sentence-in-noise scores significantly correlated with all sound perception scores, and women had better mean scores in music perception and telephone use than men. CI users' self-reported levels of sound quality significantly correlated with their QoL, and cochlear implantation had a beneficial impact on subjects' QoL. Understanding speech is easier in quiet than in noise, and music perception remains a challenge for many CI users. The HISQUI29 and the GBI can provide useful information about the everyday effects of future treatment modalities, rehabilitation strategies, and technical developments.
|
44
|
Munjal T, Roy AT, Carver C, Jiradejvong P, Limb CJ. Use of the Phantom Electrode strategy to improve bass frequency perception for music listening in cochlear implant users. Cochlear Implants Int 2016; 16 Suppl 3:S121-8. [PMID: 26561883 DOI: 10.1179/1467010015z.000000000270]
Abstract
OBJECTIVES The Phantom Electrode strategy makes use of partial bipolar stimulation on the two most apical electrodes in an effort to extend the frequency range available to cochlear implant (CI) users. This study aimed to quantify the effect of the Phantom Electrode strategy on bass frequency perception during music listening in CI users. METHODS Eleven adult Advanced Bionics users with the Fidelity 120 processing strategy and 16 adult normal-hearing (NH) individuals participated in the study. All subjects completed the CI-multiple stimulus with hidden reference and anchor (CI-MUSHRA), a test of an individual's ability to make discriminations in sound quality following the removal of bass frequency information. NH participants completed the CI-MUSHRA once, whereas CI users completed the task twice, once with their baseline clinical program and once with the Phantom Electrode strategy, in random order. CI users' performance was assessed in comparison with NH performance. RESULTS The Phantom Electrode strategy improved CI users' performance on the CI-MUSHRA compared with Fidelity 120. DISCUSSION Creation of a phantom electrode percept through partial bipolar stimulation of the two most apical electrodes appears to improve CI users' perception of bass frequency information in music, contributing to greater accuracy in detecting alterations in musical sound quality. CONCLUSION The Phantom Electrode processing strategy may enhance the experience of listening to music, and to acoustic stimuli more broadly, by improving perception of bass frequencies through direction of current towards the apical portion of the cochlea, beyond the termination of the electrode array.
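The CI-MUSHRA test above degrades stimuli by removing bass frequency information. The exact anchor construction is not specified here; a minimal sketch of one plausible bass-removal step, assuming a brick-wall frequency-domain high-pass and a hypothetical cutoff, is:

```python
import numpy as np

def remove_bass(signal, fs, cutoff_hz):
    # Brick-wall high-pass in the frequency domain: zero every component
    # below cutoff_hz, then transform back to the time domain.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
# A 100 Hz "bass" component mixed with a 1 kHz tone.
mix = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
no_bass = remove_bass(mix, fs, cutoff_hz=250.0)
```

A listener comparing `mix` and `no_bass` is making exactly the kind of sound-quality discrimination the CI-MUSHRA scores; real test stimuli would of course use music rather than pure tones.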
|
45
|
Peterson N, Bergeson TR. Contribution of hearing aids to music perception by cochlear implant users. Cochlear Implants Int 2016; 16 Suppl 3:S71-8. [PMID: 26561890 DOI: 10.1179/1467010015z.000000000268]
Abstract
OBJECTIVES Modern cochlear implant (CI) encoding strategies represent the temporal envelope of sounds well but provide limited spectral information. This deficit in spectral information has been implicated as a contributing factor to difficulty with speech perception in noisy conditions, discriminating between talkers, and melody recognition. One way to supplement spectral information for CI users is to fit a hearing aid (HA) to the non-implanted ear. METHODS In this study, 14 postlingually deaf adults (half with a unilateral CI and half with a CI plus an HA (CI + HA)) were tested on measures of music perception and familiar melody recognition. RESULTS CI + HA listeners performed significantly better than CI-only listeners on all pitch-based music perception tasks, but not on the two tasks that relied on duration cues. Recognition of familiar melodies was significantly enhanced for the group wearing an HA in addition to their CI, and this advantage increased when melodic sequences were presented with the addition of harmony. CONCLUSION These results show that, for CI recipients with aidable hearing in the non-implanted ear, using an HA in addition to the implant improves perception of musical pitch and recognition of real-world melodies.
|
46
|
Lu X, Ho HT, Sun Y, Johnson BW, Thompson WF. The influence of visual information on auditory processing in individuals with congenital amusia: An ERP study. Neuroimage 2016; 135:142-51. [PMID: 27132045 DOI: 10.1016/j.neuroimage.2016.04.043]
Abstract
While most normal-hearing individuals can readily use prosodic information in spoken language to interpret the moods and feelings of conversational partners, people with congenital amusia report that they often rely more on facial expressions and gestures, a strategy that may compensate for deficits in auditory processing. In this investigation, we used EEG to examine the extent to which individuals with congenital amusia draw upon visual information when making auditory or audio-visual judgments. Event-related potentials (ERPs) were elicited by a change in pitch (up or down) between two sequential tones paired with a change in spatial position (up or down) between two visually presented dots. The change in dot position was either congruent or incongruent with the change in pitch. Participants were asked to judge (1) the direction of pitch change while ignoring the visual information (AV implicit task), and (2) whether the auditory and visual changes were congruent (AV explicit task). In the AV implicit task, amusic participants performed significantly worse in the incongruent condition than control participants. ERPs showed an enhanced N2-P3 response to incongruent AV pairings for control participants, but not for amusic participants. However, when participants were explicitly directed to detect AV congruency, both groups exhibited enhanced N2-P3 responses to incongruent AV pairings. These findings indicate that amusics are capable of extracting information from both modalities in an AV task, but are biased to rely on visual information when it is available, presumably because they have learned that auditory information is unreliable. We conclude that amusic individuals implicitly draw upon visual information when judging auditory information, even though they have the capacity to explicitly recognize conflicts between these two sensory channels.
|
47
|
Saliba J, Lorenzo-Seva U, Marco-Pallares J, Tillmann B, Zeitouni A, Lehmann A. French validation of the Barcelona Music Reward Questionnaire. PeerJ 2016; 4:e1760. [PMID: 27019776 PMCID: PMC4806630 DOI: 10.7717/peerj.1760]
Abstract
Background. The Barcelona Music Reward Questionnaire (BMRQ) investigates the main facets of music experience that could explain the variance observed in how people experience reward associated with music. Currently, only English and Spanish versions of this questionnaire are available. The objective of this study was to validate a French version of the BMRQ. Methods. The original BMRQ was translated and adapted into an international French version. The questionnaire was then administered through an online survey aimed at adults aged over 18 years who were fluent in French. Statistical analyses were performed and compared with the original English and Spanish versions for validation purposes. Results. A total of 1,027 participants completed the questionnaire; most responses were obtained from France (89.4%). Analyses revealed that congruence values between the rotated loading matrix and the ideal loading matrix ranged between 0.88 and 0.96, as did the factor reliabilities of the subscales (Musical Seeking, Emotion Evocation, Mood Regulation, Social Reward and Sensory-Motor). In addition, the reliability of the overall factor score (Music Reward) was 0.91, and the internal consistency of the overall scale was 0.85. The factorial structure obtained in the French translation was similar to that of the original Spanish and English samples. Conclusion. The French version of the BMRQ appears valid and reliable. Potential applications of the BMRQ include its use as a valuable tool in music reward and emotion research, whether in healthy individuals or in patients suffering from a wide variety of cognitive, neurologic and auditory disorders.
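The congruence values reported above (0.88-0.96 between the rotated and ideal loading matrices) are typically Tucker's coefficient of congruence, computed column-wise between factor-loading vectors. A minimal sketch, with hypothetical loading values invented for the example, is:

```python
import numpy as np

def tucker_congruence(x, y):
    # Tucker's coefficient of congruence between two factor-loading vectors:
    # their inner product divided by the product of their norms.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Hypothetical loadings of one subscale's items in two language versions.
french = [0.72, 0.65, 0.58, 0.70, 0.61]
english = [0.70, 0.68, 0.55, 0.73, 0.60]
c = tucker_congruence(french, english)
```

Values near 1 indicate that the factor is essentially the same construct in both versions; by convention, congruence above roughly 0.85-0.95 is read as fair-to-equal similarity.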
|
48
|
Seesjärvi E, Särkämö T, Vuoksimaa E, Tervaniemi M, Peretz I, Kaprio J. The Nature and Nurture of Melody: A Twin Study of Musical Pitch and Rhythm Perception. Behav Genet 2015; 46:506-15. [PMID: 26650514 DOI: 10.1007/s10519-015-9774-y]
Abstract
Both genetic and environmental factors are known to play a role in our ability to perceive music, but the degree to which they influence different aspects of music cognition is still unclear. We investigated the relative contribution of genetic and environmental effects on melody perception in 384 young adult twins [69 full monozygotic (MZ) twin pairs, 44 full dizygotic (DZ) twin pairs, 70 MZ twins without a co-twin, and 88 DZ twins without a co-twin]. The participants performed three online music tests requiring the detection of pitch changes in a two-melody comparison task (Scale) and of key and rhythm incongruities in single-melody perception tasks (Out-of-key, Off-beat). The results showed predominantly additive genetic effects in the Scale task (58 %, 95 % CI 42-70 %), shared environmental effects in the Out-of-key task (61 %, 49-70 %), and non-shared environmental effects in the Off-beat task (82 %, 61-100 %). This markedly different pattern of effects suggests that the contribution of genetic and environmental factors to music perception depends on the degree to which the task calls for acquired knowledge of musical tonal and metric structures.
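The additive-genetic (A), shared-environment (C) and non-shared-environment (E) percentages above come from twin modeling. As a rough illustration (not the authors' structural equation model), Falconer's classic formulas recover comparable estimates directly from MZ and DZ twin correlations; the correlation values below are hypothetical, chosen to reproduce a heritability near the 58 % reported for the Scale task:

```python
def falconer_estimates(r_mz, r_dz):
    # Falconer's decomposition from twin correlations:
    # A = 2(rMZ - rDZ), C = 2rDZ - rMZ, E = 1 - rMZ.
    a2 = 2.0 * (r_mz - r_dz)  # additive genetic variance share
    c2 = 2.0 * r_dz - r_mz    # shared environmental share
    e2 = 1.0 - r_mz           # non-shared environment (plus measurement error)
    return a2, c2, e2

# Hypothetical correlations: MZ pairs resemble each other twice as much as DZ pairs.
a2, c2, e2 = falconer_estimates(r_mz=0.58, r_dz=0.29)
```

The three shares sum to 1 by construction; when MZ and DZ correlations are nearly equal, A shrinks and C dominates, which is the pattern the Out-of-key task showed.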
|
49
|
Wilcox LJ, He K, Derkay CS. Identifying musical difficulties as they relate to congenital amusia in the pediatric population. Int J Pediatr Otorhinolaryngol 2015; 79:2411-5. [PMID: 26631597 DOI: 10.1016/j.ijporl.2015.11.002]
Abstract
INTRODUCTION/OBJECTIVES Approximately 4% of the population fails to develop basic music skills and can be identified as "amusic". Congenital amusia (CA), or "tone deafness", is thought to be a hereditary disorder predominantly affecting the perception and production of music. The gold standard for diagnosis is the Montreal Battery for Evaluation of Amusia (MBEA). This study aims to pinpoint factors in the history that may help identify amusic children and to determine whether amusic pediatric patients can be identified using a widely available, shorter test validated in adults. METHODS Subjects ages 7-17 years were recruited to take an online test for CA, validated against the MBEA. The sections tested recognition of "off-beat" (OB), "mistuned" (MT), and "out-of-key" (OOK) conditions. Parents completed a questionnaire regarding the subject's past medical, educational, musical exposure, and family history. RESULTS Of 114 subjects recruited, complete data were available for 105, with a mean age of 12.5 years. By adult criteria, 63/105 (60%) of subjects scored in the "amusic" range. Children >10 years of age scored significantly higher on the off-beat section (p=0.001) and total scores (p=0.025). Subjects who were born prematurely scored significantly lower (p=0.045), as did children whose father had difficulties with music, on the off-beat section (p=0.003) and total scores (p=0.008). CONCLUSIONS CA is a disorder with implications for quality of life. Earlier identification may help elucidate the pathogenesis of the condition and, in the future, allow the institution of prompt treatment. Further studies are needed to identify the most appropriate and convenient tests, as well as the optimal timing of testing, for reliably diagnosing CA in children.
|