1
Strauss H, Reiche S, Dick M, Zentner M. Online assessment of musical ability in 10 minutes: Development and validation of the Micro-PROMS. Behav Res Methods 2024;56:1968-1983. PMID: 37221344; PMCID: PMC10991059; DOI: 10.3758/s13428-023-02130-4.
Abstract
We describe the development and validation of a test battery that assesses musical ability across a broad range of music perception skills and can be administered in 10 minutes or less. In Study 1, we derived four very brief versions from the Profile of Music Perception Skills (PROMS) and examined their properties in a sample of 280 participants. In Study 2 (N = 109), we administered the version retained from Study 1, termed Micro-PROMS, together with the full-length PROMS, finding a short-to-long-form correlation of r = .72. In Study 3 (N = 198), we removed redundant trials and examined test-retest reliability as well as convergent, discriminant, and criterion validity. Results showed adequate internal consistency (mean ω = .73) and test-retest reliability (ICC = .83). Findings supported convergent validity of the Micro-PROMS (r = .59 with the MET, p < .01) as well as discriminant validity with short-term and working memory (r ≲ .20). Criterion-related validity was evidenced by significant correlations of the Micro-PROMS with external indicators of musical proficiency (mean r = .37, ps < .01) and with Gold-MSI General Musical Sophistication (r = .51, p < .01). By virtue of its brevity, psychometric qualities, and suitability for online administration, the battery fills a gap in the tools available for objectively assessing musical ability.
Affiliation(s)
- Hannah Strauss, Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Stephan Reiche, Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Maximilian Dick, Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Marcel Zentner, Department of Psychology, University of Innsbruck, Innsbruck, Austria
2
Aydın S, Onbaşı L. Graph theoretical brain connectivity measures to investigate neural correlates of music rhythms associated with fear and anger. Cogn Neurodyn 2024;18:49-66. PMID: 38406195; PMCID: PMC10881947; DOI: 10.1007/s11571-023-09931-5.
Abstract
The present study tests the hypothesis that fear and anger are associated with distinct psychophysiological and neural circuitry, consistent with the discrete emotion model and their contrasting neurotransmitter activity, even though many studies group these emotions together because of their similar arousal-valence scores in dimensional emotion models. EEG data were downloaded from the OpenNeuro platform (accession number ds002721). Brain connectivity estimates were obtained with both functional and effective connectivity estimators applied to short (2 s) and long (6 s) EEG segments across the cortex. Discrete emotions and resting states were identified by frequency-band-specific brain network measures, and contrasting emotional states were then classified with 5-fold cross-validated Long Short-Term Memory networks; logistic regression modeling was also examined to provide a robust performance baseline. Overall, the best results were obtained using Partial Directed Coherence (PDC) in the Gamma sub-band (31.5-60.5 Hz) of short EEG segments; in particular, fear and anger were classified with an accuracy of 91.79%, supporting our hypothesis. Compared with fear, anger was characterized by increased transitivity, decreased local efficiency, and lower modularity in the Gamma band. Local efficiency reflects functional brain segregation, arising from the brain's ability to exchange information locally. Transitivity is the overall probability that adjacent neural populations are interconnected, revealing the existence of tightly connected cortical regions. Modularity quantifies how well the brain can be partitioned into functional cortical regions. In conclusion, PDC is proposed for graph-theoretical analysis of short EEG epochs, providing robust emotional indicators sensitive to the perception of affective sounds.
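The three network measures named in this abstract are standard graph-theoretical quantities; a minimal sketch of how they can be computed with networkx on a toy random graph (a stand-in only, not the study's PDC-derived EEG connectivity matrices):

```python
# Toy illustration of transitivity, local efficiency, and modularity,
# computed with networkx. The random graph below merely stands in for
# a brain connectivity network; it is NOT the study's EEG data.
import networkx as nx
from networkx.algorithms import community

G = nx.erdos_renyi_graph(n=30, p=0.2, seed=1)  # placeholder "brain network"

transitivity = nx.transitivity(G)       # probability that adjacent nodes are interconnected
local_eff = nx.local_efficiency(G)      # average efficiency of each node's neighbourhood
parts = community.greedy_modularity_communities(G)   # partition into modules
modularity = community.modularity(G, parts)          # quality of that partition

print(transitivity, local_eff, modularity)
```

Higher transitivity and lower modularity, as reported here for anger relative to fear, would correspond to a more densely interlinked, less cleanly partitionable network.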
Affiliation(s)
- Serap Aydın, Department of Biophysics, Faculty of Medicine, Hacettepe University, Sıhhiye, Ankara, Turkey
- Lara Onbaşı, School of Medicine, Hacettepe University, Sıhhiye, Ankara, Turkey
3
Dinçer D'Alessandro H, Nicastri M, Portanova G, Giallini I, Russo FY, Magliulo G, Greco A, Mancini P. Low-frequency pitch coding: relationships with speech-in-noise and music perception by pediatric populations with typical hearing and cochlear implants. Eur Arch Otorhinolaryngol 2024. Epub ahead of print. PMID: 38194096; DOI: 10.1007/s00405-023-08445-4.
Abstract
PURPOSE: This study aimed to investigate the effects of low-frequency (LF) pitch perception on speech-in-noise and music perception performance in children with cochlear implants (CIC) and with typical hearing (THC). Moreover, the relationships between speech-in-noise and music perception, as well as the effects of demographic and audiological factors on the present outcomes, were studied. METHODS: The sample consisted of 22 CIC and 20 THC (7-10 years). Harmonic intonation (HI) and disharmonic intonation (DI) tests were used to assess LF pitch perception. Speech perception in quiet (WRSq) and in noise (WRSn + 10) was tested with the Italian bisyllabic words for pediatric populations. The Gordon test was used to evaluate music perception (rhythm, melody, harmony, and overall). RESULTS: CIC/THC performance comparisons for LF pitch, speech-in-noise, and all music measures except harmony revealed statistically significant differences with large effect sizes. For the CI group, HI showed statistically significant correlations with melody discrimination, and melody/total Gordon scores were significantly correlated with WRSn + 10. For the overall group, HI/DI showed significant correlations with all music perception measures and with WRSn + 10. Hearing thresholds had significant effects on HI/DI scores. Hearing thresholds and WRSn + 10 scores were significantly correlated, and both had significant effects on all music perception scores. CI age had significant effects on WRSn + 10, harmony, and total Gordon scores (p < 0.05). CONCLUSION: These findings confirm the significant effects of LF pitch perception on complex listening performance. The significant correlations between speech-in-noise and music perception are promising and consistent with recent studies reporting positive effects of music training on speech-in-noise recognition in CIC.
Affiliation(s)
- Hilal Dinçer D'Alessandro, Department of Audiology, Faculty of Health Sciences, Istanbul University-Cerrahpaşa, Istanbul, Turkey
- Maria Nicastri, Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Ginevra Portanova, Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Ilaria Giallini, Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Giuseppe Magliulo, Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Antonio Greco, Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Patrizia Mancini, Department of Sense Organs, Sapienza University of Rome, Rome, Italy
4
Hake R, Bürgel M, Nguyen NK, Greasley A, Müllensiefen D, Siedenburg K. Development of an adaptive test of musical scene analysis abilities for normal-hearing and hearing-impaired listeners. Behav Res Methods 2023. Epub ahead of print. PMID: 37957432; DOI: 10.3758/s13428-023-02279-y.
Abstract
Auditory scene analysis (ASA) is the process through which the auditory system makes sense of complex acoustic environments by organising sound mixtures into meaningful events and streams. Although music psychology has acknowledged the fundamental role of ASA in shaping music perception, no efficient test to quantify listeners' ASA abilities in realistic musical scenarios has yet been published. This study presents a new tool for testing ASA abilities in the context of music, suitable for both normal-hearing (NH) and hearing-impaired (HI) individuals: the adaptive Musical Scene Analysis (MSA) test. The test uses a simple 'yes-no' task paradigm to determine whether the sound from a single target instrument is heard in a mixture of popular music. During the online calibration phase, 525 NH and 131 HI listeners were recruited. The level ratio between the target instrument and the mixture, choice of target instrument, and number of instruments in the mixture were found to be important factors affecting item difficulty, whereas the influence of the stereo width (induced by inter-aural level differences) only had a minor effect. Based on a Bayesian logistic mixed-effects model, an adaptive version of the MSA test was developed. In a subsequent validation experiment with 74 listeners (20 HI), MSA scores showed acceptable test-retest reliability and moderate correlations with other music-related tests, pure-tone-average audiograms, age, musical sophistication, and working memory capacities. The MSA test is a user-friendly and efficient open-source tool for evaluating musical ASA abilities and is suitable for profiling the effects of hearing impairment on music perception.
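The abstract describes only at a high level how item difficulty (driven mainly by the target-to-mixture level ratio) is adapted trial by trial. As a rough illustration of adaptive difficulty adjustment in such a yes-no task, here is a generic one-up-two-down staircase; note this is a simplification for illustration only, since the MSA test itself selects items via a Bayesian logistic mixed-effects model, and the simulated listener below (threshold and slope values) is invented:

```python
# Generic 1-up-2-down staircase over a target/mixture level ratio (dB).
# The simulated listener is a toy logistic psychometric function;
# threshold/slope/step values are invented for illustration.
import math, random

def simulated_listener(level_db, threshold=-12.0, slope=0.5):
    """P(correct) rises with the target-to-mixture level ratio."""
    p = 1.0 / (1.0 + math.exp(-slope * (level_db - threshold)))
    return random.random() < p

def run_staircase(start_db=0.0, step_db=2.0, n_trials=60, seed=7):
    random.seed(seed)
    level, correct_streak, track = start_db, 0, []
    for _ in range(n_trials):
        track.append(level)
        if simulated_listener(level):
            correct_streak += 1
            if correct_streak == 2:   # two correct in a row: make it harder
                level -= step_db
                correct_streak = 0
        else:                         # one error: make it easier
            level += step_db
            correct_streak = 0
    return track

track = run_staircase()
print(round(sum(track[-20:]) / 20, 1))  # rough threshold estimate in dB
```

A 1-up-2-down rule converges near the ~70.7% correct point of the psychometric function; model-based adaptive procedures like the one in this study typically reach a stable estimate in fewer trials.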
Affiliation(s)
- Robin Hake, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Michel Bürgel, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Ninh K Nguyen, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Daniel Müllensiefen, Department of Psychology, Goldsmiths, University of London, London, UK; Hanover Music Lab, Hochschule für Musik, Theater und Medien, Hannover, Germany
- Kai Siedenburg, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
5
Hofbauer LM, Rodriguez FS. Emotional valence perception in music and subjective arousal: Experimental validation of stimuli. Int J Psychol 2023;58:465-475. PMID: 37248624; DOI: 10.1002/ijop.12922.
Abstract
Musical stimuli are widely used in emotion research and intervention studies. However, reviews have repeatedly noted that a lack of pre-evaluated musical stimuli is stalling progress in our understanding of the specific effects of varying music. Musical stimuli vary along a plethora of dimensions; of particular interest are emotional valence and tempo. We therefore aimed to evaluate the emotional valence of a set of slow and fast musical stimuli. N = 102 participants (mean age: 39.95, SD: 13.60; 61% female) rated the perceived emotional valence of 20 fast (>110 beats per minute [bpm]) and 20 slow (<90 bpm) stimuli. Moreover, we collected reports of subjective arousal for each stimulus to explore arousal's association with tempo and valence. Finally, participants completed questionnaires on demographics, mood (Profile of Mood States), personality (10-item personality index), musical sophistication (Gold-MSI), and sound preferences and hearing habits (Sound Preference and Hearing Habits Questionnaire). Using mixed-effects model estimates, we identified 19 stimuli that participants rated as having positive valence and 16 stimuli that they rated as having negative valence. Higher age predicted more positive valence ratings across stimuli. Higher tempo and more extreme valence ratings were each associated with higher arousal, as was higher educational attainment. These pre-evaluated stimuli can be used in future music research.
Affiliation(s)
- Lena M Hofbauer, Research Group Psychosocial Epidemiology and Public Health, German Center for Neurodegenerative Diseases (DZNE), Greifswald, Germany
- Francisca S Rodriguez, Research Group Psychosocial Epidemiology and Public Health, German Center for Neurodegenerative Diseases (DZNE), Greifswald, Germany
6
Shorey AE, King CJ, Theodore RM, Stilp CE. Talker adaptation or "talker" adaptation? Musical instrument variability impedes pitch perception. Atten Percept Psychophys 2023;85:2488-2501. PMID: 37258892; DOI: 10.3758/s13414-023-02722-4.
Abstract
Listeners show perceptual benefits (faster and/or more accurate responses) when perceiving speech spoken by a single talker versus multiple talkers, known as talker adaptation. While near-exclusively studied in speech and with talkers, some aspects of talker adaptation might reflect domain-general processes. Music, like speech, is a sound class replete with acoustic variation, such as a multitude of pitch and instrument possibilities. Thus, it was hypothesized that perceptual benefits from structure in the acoustic signal (i.e., hearing the same sound source on every trial) are not specific to speech but rather a general auditory response. Forty nonmusician participants completed a simple musical task that mirrored talker adaptation paradigms. Low- or high-pitched notes were presented in single- and mixed-instrument blocks. Reflecting both music research on pitch and timbre interdependence and mirroring traditional "talker" adaptation paradigms, listeners were faster to make their pitch judgments when presented with a single instrument timbre relative to when the timbre was selected from one of four instruments from trial to trial. A second experiment ruled out the possibility that participants were responding faster to the specific instrument chosen as the single-instrument timbre. Consistent with general theoretical approaches to perception, perceptual benefits from signal structure are not limited to speech.
Affiliation(s)
- Anya E Shorey, Department of Psychological and Brain Sciences, University of Louisville, 317 Life Sciences Building, Louisville, KY 40272, USA
- Caleb J King, Department of Psychological and Brain Sciences, University of Louisville, 317 Life Sciences Building, Louisville, KY 40272, USA
- Rachel M Theodore, Department of Speech, Language, and Hearing Sciences, University of Connecticut, 2 Alethia Drive, Unit 1085, Storrs, CT 06269-1085, USA; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 337 Mansfield Road, Unit 1272, Storrs, CT 06269-1272, USA
- Christian E Stilp, Department of Psychological and Brain Sciences, University of Louisville, 317 Life Sciences Building, Louisville, KY 40272, USA
7
Senn O. A predictive coding approach to modelling the perceived complexity of popular music drum patterns. Heliyon 2023;9:e15199. PMID: 37123947; PMCID: PMC10130781; DOI: 10.1016/j.heliyon.2023.e15199.
Abstract
This study presents a method to estimate the complexity of popular music drum patterns based on a core idea from predictive coding. Specifically, it postulates that the complexity of a drum pattern depends on the quantity of surprisal it causes in the listener. Surprisal, according to predictive coding theory, is a numerical measure that takes large values when the perceiver's internal model of the surrounding world fails to predict the actual stream of sensory data (i.e., when the perception surprises the perceiver), and low values if model predictions and sensory data agree. The proposed method first approximates a listener's internal model of a popular music drum pattern (using ideas on enculturation and a Bayesian learning process). It then quantifies the listener's surprisal by evaluating the discrepancies between the predictions of the internal model and the actual drum pattern, and finally estimates drum pattern complexity from surprisal. The method was optimised and tested on a set of forty popular music drum patterns for which empirical perceived complexity measurements are available. The new method provided complexity estimates with a good fit to the empirical measurements (R² = .852). The method was implemented as an R script that can be used to estimate the complexity of popular music drum patterns in the future. Simulations indicate that the method can be expected to predict perceived complexity with a good fit (R² ≥ .709) in 99% of drum pattern sets randomly drawn from the Western popular music repertoire. These results suggest that surprisal indeed captures essential aspects of complexity and may serve as a basis for a general theory of perceived complexity.
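The surprisal idea sketched in this abstract can be made concrete with a toy Beta-Bernoulli model: an "internal model" holds an onset probability for each metrical position, learned Bayesianly from exposure, and a pattern's complexity is its mean surprisal under that model. This is a simplified illustration, not Senn's published method or R script; the patterns and priors below are invented:

```python
# Toy Beta-Bernoulli surprisal sketch. A listener's internal model holds
# onset probabilities for 16 metrical positions, learned from a corpus;
# complexity is approximated as mean surprisal (bits) of a new pattern.
import math

def learn_model(corpus, a=1.0, b=1.0):
    """Posterior-mean onset probability per position under a Beta(a, b) prior."""
    n = len(corpus)
    return [(a + sum(pat[i] for pat in corpus)) / (a + b + n)
            for i in range(16)]

def surprisal(pattern, model):
    """Mean surprisal in bits of a 16-step onset pattern under the model."""
    bits = 0.0
    for onset, p in zip(pattern, model):
        bits += -math.log2(p if onset else 1.0 - p)
    return bits / 16

backbeat = [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0]  # toy familiar pattern
corpus = [backbeat] * 8                           # "enculturation" corpus
model = learn_model(corpus)

print(surprisal(backbeat, model))                 # familiar: low surprisal
syncopated = [0,1,0,0, 1,0,0,1, 0,0,1,0, 0,1,0,0]
print(surprisal(syncopated, model))               # unfamiliar: higher surprisal
```

A pattern matching the enculturated expectations yields low surprisal, while onsets in unexpected positions drive the estimate up, which is the mechanism the abstract ties to perceived complexity.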
8
Jo S, Yun J, Kyong JS, Shin Y, Kim J. Music Perception Abilities of the Hearing Amplification System Users. J Audiol Otol 2023;27:78-87. PMID: 36907203; PMCID: PMC10126585; DOI: 10.7874/jao.2022.00367.
Abstract
Background and Objectives: Improving music perception abilities has recently become important for emotional stability and quality of life in people with hearing loss. This study aimed to examine and compare the music perception abilities of normal hearing (NH) and hearing amplification system (HAS) groups to identify the needs and methods of music rehabilitation. Subjects and Methods: Data were collected from 15 NH adults (33.1±11.4 years) and 15 HAS adults (38.7±13.4 years), of whom eight wore cochlear implant (CI) systems and seven wore combined CI and hearing aid systems, using pitch, melody, rhythm, timbre, emotional reaction, and harmony perception tests. A mismatch negativity test was also conducted, and attitudes toward and satisfaction with listening to music were measured. Results: The percentages of correct responses for the NH and HAS groups were 94.0%±6.1% and 75.3%±23.2% in the pitch test; 94.0%±7.1% and 30.3%±25.9% in the melody test; 99.3%±1.8% and 94.0%±7.6% in the rhythm test; 78.9%±41.8% and 64.4%±48.9% in the timbre test; 96.7%±10.4% and 81.7%±16.3% in the emotional reaction test; and 85.7%±14.1% and 58.4%±13.9% in the harmony test, respectively, with statistically significant differences (p<0.05). In the mismatch negativity test, the waveform area was smaller in the HAS group than in the NH group, with no statistically significant difference at 70 dB of stimulation. The response rates for satisfaction with listening to music were 80% and 93.3% for the NH and HAS groups, a difference that was not statistically significant. Conclusions: Although the HAS group showed lower music perception ability than the NH group overall, they showed a strong desire to listen to music and reported high satisfaction even when listening to unfamiliar music played on unusual instruments. Systematic and sustained musical rehabilitation based on musical elements and varied listening experiences is suggested to improve music perception quality and ability for HAS users.
Affiliation(s)
- Sungmin Jo, Department of Speech Pathology and Audiology, Graduate School, Hallym University, Chuncheon, Korea
- Jiyeong Yun, Department of Speech Pathology and Audiology, Graduate School, Hallym University, Chuncheon, Korea
- Jeong-Sug Kyong, Department of Audiology and Speech-Language Pathology, Hallym University of Graduate Studies, Seoul, Korea
- Yerim Shin, Department of Speech Pathology and Audiology, Graduate School, Hallym University, Chuncheon, Korea
- Jinsook Kim, Department of Speech Pathology and Audiology, Graduate School, Hallym University, Chuncheon, Korea; Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Korea
9
Klarlund M, Brattico E, Pearce M, Wu Y, Vuust P, Overgaard M, Du Y. Worlds apart? Testing the cultural distance hypothesis in music perception of Chinese and Western listeners. Cognition 2023;235:105405. PMID: 36807031; DOI: 10.1016/j.cognition.2023.105405.
Abstract
According to the cultural distance hypothesis (CDH), individuals learn culture-specific statistical structures in music as internal stylistic models and use these models in the predictive processing of music, with musical structures closer to their home culture being easier to predict. This cultural distance effect may be modulated by domain-specific characteristics (musical ability) and domain-general individual characteristics (openness, implicit cultural bias). To test the CDH and its modulation by individual characteristics, we recruited Chinese and Western adults to categorize stylistically ambiguous and unambiguous Chinese and Western melodies by cultural origin. Categorization performance was better for unambiguous (low-CD) than for ambiguous (high-CD) melodies, and for in-culture melodies regardless of ambiguity, in both groups, providing evidence for the CDH. Musical ability, but not the other traits, correlated positively with melody categorization, suggesting that musical ability refines internal stylistic models. Both cultures therefore show musical enculturation in their home culture, with a modulatory effect of individual musical ability.
Affiliation(s)
- Mathias Klarlund, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Elvira Brattico, Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy
- Marcus Pearce, Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Music Cognition Lab, Queen Mary University of London, London, UK
- Yiyang Wu, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Peter Vuust, Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Morten Overgaard, Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Yi Du, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China; Chinese Institute for Brain Research, Beijing, China
10
Kepp NE, Schiøth C, Percy-Smith L. Timbre recognition in Danish children with hearing aids, cochlear implants or normal hearing. Int J Pediatr Otorhinolaryngol 2022;159:111186. PMID: 35660937; DOI: 10.1016/j.ijporl.2022.111186.
Affiliation(s)
- Nille Elise Kepp, Research Unit at Center of Balance & Hearing, Graduate School of Health and Medical Sciences, University of Copenhagen, Copenhagen University Hospital Rigshospitalet, Inge Lehmanns Vej 8, 2100 Copenhagen, Denmark
- Christina Schiøth, Patientforening Decibel, Lyngbyvej 11, 1. sal, L 104, 2100 Copenhagen, Denmark
- Lone Percy-Smith, Research Unit at Center of Balance & Hearing, Copenhagen University Hospital Rigshospitalet, Inge Lehmanns Vej 8, 2100 Copenhagen, Denmark
11
Xu Y, Wang W, Cui H, Xu M, Li M. Paralinguistic singing attribute recognition using supervised machine learning for describing the classical tenor solo singing voice in vocal pedagogy. EURASIP J Audio Speech Music Process 2022;2022:8. PMID: 35440938; PMCID: PMC9011380; DOI: 10.1186/s13636-022-00240-z.
Abstract
Humans can recognize a person's identity through their voice and describe the timbral phenomena of voices. Likewise, the singing voice has timbral phenomena, and in vocal pedagogy, vocal teachers listen to and then describe the timbral phenomena of their students' singing voices. In this study, to enable machines to describe the singing voice from the vocal pedagogy point of view, we perform a task called paralinguistic singing attribute recognition. To achieve this goal, we first construct and publish an open-source dataset named the Singing Voice Quality and Technique Database (SVQTD) for supervised learning. All audio clips in SVQTD are downloaded from YouTube and processed by music source separation and silence detection. For annotation, seven paralinguistic singing attributes commonly used in vocal pedagogy are adopted as the labels. Furthermore, to explore different supervised machine learning algorithms for classifying each paralinguistic singing attribute, we adopt three main frameworks, namely openSMILE features with support vector machine (SF-SVM), end-to-end deep learning (E2EDL), and deep embedding with support vector machine (DE-SVM). Our methods build on frameworks commonly employed in other paralinguistic speech attribute recognition tasks. In SF-SVM, we separately use the feature set of the INTERSPEECH 2009 Challenge and that of the INTERSPEECH 2016 Challenge as the SVM classifier's input. In E2EDL, the end-to-end framework separately utilizes ResNet and a transformer encoder as feature extractors; in particular, to handle two-dimensional spectrogram input for the transformer, we adopt a sliced multi-head self-attention (SMSA) mechanism. In DE-SVM, we use the representation extracted by the E2EDL model as the input to the SVM classifier. Experimental results on SVQTD show no absolute winner between E2EDL and DE-SVM, meaning that a back-end SVM classifier fed with E2E-learned representations does not necessarily improve performance. However, the DE-SVM variant that uses ResNet as the feature extractor achieves the best average UAR, a 16% average improvement over the SF-SVM with INTERSPEECH's hand-crafted feature sets.
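The DE-SVM framework described here amounts to classifying fixed-dimensional deep embeddings with a back-end SVM. A minimal scikit-learn sketch of that pipeline, using synthetic Gaussian clusters as stand-ins for ResNet/transformer embeddings of singing-voice clips (no real SVQTD audio or pretrained extractor is involved):

```python
# Sketch of the DE-SVM idea: a back-end SVM on fixed-dimensional
# "deep embeddings". The embeddings here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim, per_class = 128, 100                    # e.g. 128-d embedding vectors
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(per_class, dim))
               for c in (0.0, 0.6)])         # two toy singing-attribute classes
y = np.repeat([0, 1], per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))                 # held-out accuracy
```

In practice the embedding quality dominates: as the abstract notes, swapping the SVM back end in does not by itself guarantee a gain over end-to-end classification.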
Affiliation(s)
- Yanze Xu, Data Science Research Center, Duke Kunshan University, Kunshan, China
- Weiqing Wang, Data Science Research Center, Duke Kunshan University, Kunshan, China
- Huahua Cui, Advanced Computing East China Sub-Center, Suzhou, China
- Mingyang Xu, Advanced Computing East China Sub-Center, Suzhou, China
- Ming Li, Data Science Research Center, Duke Kunshan University, Kunshan, China
12
Simonetta F, Avanzini F, Ntalampiras S. A perceptual measure for evaluating the resynthesis of automatic music transcriptions. Multimed Tools Appl 2022;81:32371-32391. PMID: 35437421; PMCID: PMC9007253; DOI: 10.1007/s11042-022-12476-0.
Abstract
This study focuses on the perception of music performances when contextual factors, such as room acoustics and instrument, change. We propose to distinguish the concept of "performance" from the one of "interpretation", which expresses the "artistic intention". Towards assessing this distinction, we carried out an experimental evaluation where 91 subjects were invited to listen to various audio recordings created by resynthesizing MIDI data obtained through Automatic Music Transcription (AMT) systems and a sensorized acoustic piano. During the resynthesis, we simulated different contexts and asked listeners to evaluate how much the interpretation changes when the context changes. Results show that: (1) MIDI format alone is not able to completely grasp the artistic intention of a music performance; (2) usual objective evaluation measures based on MIDI data present low correlations with the average subjective evaluation. To bridge this gap, we propose a novel measure which is meaningfully correlated with the outcome of the tests. In addition, we investigate multimodal machine learning by providing a new score-informed AMT method and propose an approximation algorithm for the p-dispersion problem.
Affiliation(s)
- Federico Simonetta
- LIM – Music Informatics Laboratory, Department of Computer Science, University of Milano, Milano, Italy
- Federico Avanzini
- LIM – Music Informatics Laboratory, Department of Computer Science, University of Milano, Milano, Italy
- Stavros Ntalampiras
- LIM – Music Informatics Laboratory, Department of Computer Science, University of Milano, Milano, Italy
13
Sihvonen AJ, Särkämö T. Music processing and amusia. Handb Clin Neurol 2022; 187:55-67. [PMID: 35964992 DOI: 10.1016/b978-0-12-823493-8.00014-6]
Abstract
Music is a universal and important human trait, orchestrated by a complex brain network centered in the temporal lobe but connecting broadly to multiple cortical and subcortical regions. In the human brain, music engages a widespread bilateral network of regions that govern auditory perception, syntactic and semantic processing, attention and memory, emotion and reward, and motor skills. The ability to perceive or produce music can be severely impaired due to either abnormal brain development or brain damage, leading to a condition called amusia. Modern neuroimaging studies of amusia have provided valuable knowledge about the structure and function of specific brain regions and white matter pathways that are crucial for music perception, highlighting the role of the right frontotemporal network in this process. In this chapter, we provide an overview of the neural basis of music processing in the healthy brain and review evidence obtained from studies of congenital and acquired amusia.
Affiliation(s)
- Aleksi J Sihvonen
- School of Health and Rehabilitation Sciences, Queensland Aphasia Research Centre, The University of Queensland, Herston, QLD, Australia; Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Teppo Särkämö
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland.
14
Dincer D'Alessandro H, Boyle PJ, Portanova G, Mancini P. Music perception and speech intelligibility in noise performance by Italian-speaking cochlear implant users. Eur Arch Otorhinolaryngol 2021; 279:3821-3829. [PMID: 34596714 PMCID: PMC8484297 DOI: 10.1007/s00405-021-07103-x]
Abstract
Objective The goal of this study was to investigate the performance correlations between music perception and speech intelligibility in noise in Italian-speaking cochlear implant (CI) users. Materials and methods Twenty postlingually deafened adults with unilateral CIs (mean age 65 years, range 46–92 years) were tested with a music quality questionnaire using three passages of music from Classical Music, Jazz, and Soul. Speech recognition in noise was assessed using two newly developed adaptive tests in Italian: the Sentence Test with Adaptive Randomized Roving levels (STARR) and Matrix tests. Results Median quality ratings for Classical, Jazz and Soul music were 63%, 58% and 58%, respectively. Median speech reception thresholds for the STARR and Matrix tests were 14.3 dB and 7.6 dB, respectively. STARR performance was significantly correlated with Classical music ratings (rs = − 0.49, p = 0.029), whereas Matrix performance was significantly correlated with both Classical (rs = − 0.48, p = 0.031) and Jazz music ratings (rs = − 0.56, p = 0.011). Conclusion Speech in competing noise and music are both naturally present in everyday listening environments. Recent speech perception tests based on an adaptive paradigm and sentence materials, combined with music quality measures, may be representative of everyday performance in CI users. The present data contribute to cross-language studies and suggest that improving music perception in CI users may yield everyday benefit in speech perception in noise and hence enhance the quality of listening for CI users.
Affiliation(s)
- Patrick J Boyle
- Department of Experimental Psychology, Cambridge University, Cambridge, UK
- European Research Center, Advanced Bionics GmbH, Hannover, Germany
- Ginevra Portanova
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, 00161, Rome, Italy
- Patrizia Mancini
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, 00161, Rome, Italy.
15
Goltz F, Sadakata M. Do you listen to music while studying? A portrait of how people use music to optimize their cognitive performance. Acta Psychol (Amst) 2021; 220:103417. [PMID: 34555564 DOI: 10.1016/j.actpsy.2021.103417]
Abstract
The effect of background music (BGM) on cognitive task performance is a popular topic. However, the evidence is not converging: experimental studies show mixed results depending on the task, the type of music used, and individual characteristics. Here, we explored how people use BGM to optimize their performance on various cognitive tasks in everyday life, such as reading, writing, memorizing, and critical thinking. Specifically, we investigated the frequency of BGM usage, preferred music types, beliefs about the scientific evidence on BGM, and individual characteristics such as age, extraversion and musical background. Although the results confirmed highly diverse strategies among individuals regarding when, how often, why and what type of BGM is used, we found several general tendencies: people tend to use less BGM when engaged in more difficult tasks, they become less critical about the type of BGM when engaged in easier tasks, and there is a negative correlation between frequency of BGM usage and age, indicating that younger generations tend to use more BGM than older adults. The current and previous evidence are discussed in light of existing theories. Altogether, this study identifies essential variables to consider in future research and further advances a theory-driven perspective in the field.
16
Fuller C, Free R, Maat B, Başkent D. Self-reported music perception is related to quality of life and self-reported hearing abilities in cochlear implant users. Cochlear Implants Int 2021; 23:1-10. [PMID: 34470590 DOI: 10.1080/14670100.2021.1948716]
Abstract
OBJECTIVES To investigate the relationship between self-reported music perception and appreciation and (1) quality of life (QoL), and (2) self-assessed hearing ability in 98 post-lingually deafened cochlear implant (CI) users with a wide age range. METHODS Participants completed three questionnaires: (1) the Dutch Musical Background Questionnaire (DMBQ), which measures music listening habits, the quality of the sound of music, and the self-assessed perception of elements of music; (2) the Nijmegen Cochlear Implant Questionnaire (NCIQ), which measures health-related QoL; and (3) the Speech, Spatial and Qualities (SSQ) of hearing scale, which measures self-assessed hearing ability. Additionally, speech perception was measured behaviorally with a phoneme-in-word identification task. RESULTS A decline in music listening habits and a low rating of the quality of music after implantation were reported in the DMBQ. A significant relationship was found between the music measures and the NCIQ and SSQ; no significant relationship was observed between the DMBQ and speech perception scores. CONCLUSIONS The findings suggest some relationship between CI users' self-reported music perception ability and QoL and self-reported hearing ability. While the causal relationship was not evaluated here, the findings may imply that music training programs and/or device improvements that improve music perception may also improve QoL and hearing ability.
Affiliation(s)
- Christina Fuller
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands; Treant Zorggroep, Emmen, Netherlands
- Rolien Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
17
Dincer D'Alessandro H, Ballantyne D, Portanova G, Greco A, Mancini P. Temporal coding and music perception in bimodal listeners. Auris Nasus Larynx 2021; 49:202-208. [PMID: 34304943 DOI: 10.1016/j.anl.2021.07.002]
Abstract
OBJECTIVE Limited low frequency (LF) pitch and temporal fine structure (TFS) sensitivity is thought to contribute significantly to poor music perception in cochlear implant (CI) listeners. This study therefore aimed to evaluate music perception in relation to LF pitch perception and temporal coding, specifically in people with bimodal stimulation as a promising approach to improve spectro-temporal sensitivity in CI listeners. METHODS Eleven postlingually deafened bimodal listeners participated in the study (mean age = 55.5 years, range 36-75 years, SD = 11.7). LF pitch/TFS sensitivity was evaluated using two recently developed tests: Harmonic Intonation (HI) and Disharmonic Intonation (DI). The music perception protocol was based on three audio files in the genres of Classical, Jazz and Soul music and a music quality questionnaire covering four subjective aspects: Clarity, Pleasantness, Naturalness and General Quality of Sounds. RESULTS CI-alone and bimodal findings differed significantly for both temporal coding and music perception. DI findings showed statistically significant correlations with music quality ratings (p<0.05). CONCLUSION Bimodal music quality ratings were significantly better, with music judged clearer, more natural, more pleasant, and of better overall quality. Similarly, bimodal HI/DI performance improved significantly, although the benefit was greater for the DI task, with spectral information only below 300 Hz. The significant correlations between DI and music quality ratings support the test as being indicative of temporal coding of LF residual hearing and its effects on music perception.
Affiliation(s)
- Deborah Ballantyne
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Ginevra Portanova
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy.
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy.
- Patrizia Mancini
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy.
18
Ab Shukor NF, Han W, Lee J, Seo YJ. Crucial Music Components Needed for Speech Perception Enhancement of Pediatric Cochlear Implant Users: A Systematic Review and Meta-Analysis. Audiol Neurootol 2021; 26:389-413. [PMID: 33878756 DOI: 10.1159/000515136]
Abstract
BACKGROUND Although many clinicians have attempted music training for hearing-impaired children, no specific effects have yet been reported for individual music components. This paper seeks to discover the specific music components that help improve the speech perception of children with cochlear implants (CI) and to identify the effective training periods and methods needed for each component. METHOD A search of 5 electronic databases (ScienceDirect, Scopus, PubMed, CINAHL, and Web of Science) initially found 1,638 articles. After the screening and eligibility assessment stage based on the Participants, Intervention, Comparisons, Outcome, and Study Design (PICOS) inclusion criteria, 18 of 1,449 articles were chosen. RESULTS A total of 18 studies were included in the systematic review, of which 14 (209 participants) were analyzed in the meta-analysis. No publication bias was detected based on Egger's regression, even though the funnel plot was asymmetrical. The meta-analysis revealed that, after music training, the largest improvement was seen for rhythm perception, followed by the perception of pitch and harmony, with the smallest improvement for timbre perception. The duration of training affected rhythm, pitch, and harmony perception but not timbre. Interestingly, musical activities such as singing produced the biggest effect size, implying that children with CI obtained the greatest benefit of music training by singing, followed by playing an instrument, with the smallest effect from only listening to musical stimuli. Significant improvement in pitch perception helped with the enhancement of prosody perception. CONCLUSION Music training can improve the music perception of children with CI and enhance their speech prosody. Longer training durations provided the largest training effects on the children's perceptual improvement. The children with CI learned rhythm and pitch better than harmony and timbre. These results support the findings of past studies that music training can improve both rhythm and pitch perception, and that it also helps the development of prosody perception.
Affiliation(s)
- Nor Farawaheeda Ab Shukor
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea; Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea
- Woojae Han
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea; Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea
- Jihyeon Lee
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea; Research Institute of Hearing Enhancement, Yonsei University Wonju College of Medicine, Wonju, Republic of Korea
- Young Joon Seo
- Research Institute of Hearing Enhancement, Yonsei University Wonju College of Medicine, Wonju, Republic of Korea; Department of Otorhinolaryngology, Yonsei University Wonju College of Medicine, Wonju, Republic of Korea
19
Hwa TP, Wen CZ, Ruckenstein MJ. Assessment of music experience after cochlear implantation: A review of current tools and their utilization. World J Otorhinolaryngol Head Neck Surg 2021; 7:116-125. [PMID: 33997721 PMCID: PMC8103528 DOI: 10.1016/j.wjorl.2021.02.003]
Abstract
Objective To provide an overview of the currently available music assessment tools used after cochlear implantation (CI); to report on the utilization of music assessments in the literature; and to propose potential future directions in music assessment after CI. Methods A thorough search was performed in PubMed, Embase, and The Cochrane Library through October 31, 2020. MeSH search terms, keywords, and phrases included "cochlear implant," "cochlear prosthesis," "auditory prosthesis," "music," "music assessment," "music questionnaire," "music perception," "music enjoyment," and "music experience." Potentially relevant studies were reviewed for inclusion, with particular focus on assessments developed specifically for the cochlear implant population and intended for widespread use. Results/conclusions Six hundred and forty-three studies were screened for relevance to the assessment of music experience among cochlear implantees. Eighty-one studies ultimately met criteria for inclusion. There are multiple validated tools for assessment of music experience after cochlear implantation, each of which provides slightly different insights into patients' subjective and/or objective post-activation experience. However, no single assessment tool has been adopted into widespread use; thus, much of the literature pertaining to this topic evaluates outcomes non-uniformly, including single-use assessments designed specifically for the study at hand. The lack of a widely accepted universal tool for assessing music experience limits our collective understanding of the contributory and mitigating factors applicable to the current music experience of cochlear implantees, and limits our ability to uniformly evaluate the success of new implant technologies or music training paradigms.
Affiliation(s)
- Tiffany P Hwa
- Department of Otolaryngology Head and Neck Surgery, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Christopher Z Wen
- Department of Otolaryngology Head and Neck Surgery, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Michael J Ruckenstein
- Department of Otolaryngology Head and Neck Surgery, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
20
Jin Z, Lu X, Huyang S, Yan Y, Jiang L, Wang J, Xu M, Li Q, Wu D. Impaired face recognition is associated with abnormal gray matter volume in the posterior cingulate cortex in congenital amusia. Neuropsychologia 2021; 156:107833. [PMID: 33757844 DOI: 10.1016/j.neuropsychologia.2021.107833]
Abstract
Congenital amusia is a neurodevelopmental disorder primarily defined by impairments in pitch discrimination and pitch memory. Interestingly, it has been reported that individuals with congenital amusia also exhibit deficits in face recognition (prosopagnosia). One explanation of this comorbidity is that the neural substrates of pitch recognition and face recognition may be similar. To test this hypothesis, face recognition ability was assessed using the Cambridge Face Memory Test (CFMT) and gray matter volume was determined through voxel-based morphometry (VBM) in participants with and without congenital amusia. As expected, participants with amusia performed worse on the CFMT and showed reduced gray matter volume (GMV) in the middle temporal gyrus (MTG), the superior temporal gyrus (STG), and the posterior cingulate cortex (PCC) in the right hemisphere, when compared with matched controls. Furthermore, correlation analyses demonstrated that the CFMT score was positively related to MTG, STG, and PCC GMV across all participants, while separate analyses of each group found a positive correlation between CFMT score and PCC GMV in amusics. These findings suggest that face recognition is associated with a widely distributed microstructural network in the human brain and that the PCC plays an important role in both pitch recognition and face recognition in amusics. In addition, neurodevelopmental disorders such as congenital amusia and prosopagnosia may share a common neural substrate.
21
Couvignou M, Kolinsky R. Comorbidity and cognitive overlap between developmental dyslexia and congenital amusia in children. Neuropsychologia 2021; 155:107811. [PMID: 33647287 DOI: 10.1016/j.neuropsychologia.2021.107811]
Abstract
Developmental dyslexia and congenital amusia are two specific neurodevelopmental disorders that affect reading and music perception, respectively. Similarities at perceptual, cognitive, and anatomical levels raise the possibility that a common factor is at play in their emergence, albeit in different domains. However, little consideration has been given to what extent they can co-occur. A first adult study suggested a 30% amusia rate in dyslexia and a 25% dyslexia rate in amusia (Couvignou et al., Cognitive Neuropsychology 2019). We present newly acquired data from 38 dyslexic and 38 typically developing children. These were assessed with literacy and phonological tests, as well as with three musical tests: the Montreal Battery of Evaluation of Musical Abilities, a pitch and time change detection task, and a singing task. Overall, about 34% of the dyslexic children were musically impaired, a proportion that is significantly higher than both the estimated 1.5-4% prevalence of congenital amusia in the general population and the rate of 5% observed within the control group. They were mostly affected in the pitch dimension, both in terms of perception and production. Correlations and prediction links were found between pitch processing skills and language measures after partialing out confounding factors. These findings are discussed with regard to cognitive and neural explanatory hypotheses of a comorbidity between dyslexia and amusia.
Affiliation(s)
- Manon Couvignou
- Unité de Recherche en Neurosciences Cognitives (Unescog), Center for Research in Cognition & Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Brussels, Belgium.
- Régine Kolinsky
- Unité de Recherche en Neurosciences Cognitives (Unescog), Center for Research in Cognition & Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Brussels, Belgium; Fonds de La Recherche Scientifique-FNRS (FRS-FNRS), Brussels, Belgium
22
Spitzer ER, Galvin JJ, Friedmann DR, Landsberger DM. Melodic interval perception with acoustic and electric hearing in bimodal and single-sided deaf cochlear implant listeners. Hear Res 2021; 400:108136. [PMID: 33310263 PMCID: PMC7796925 DOI: 10.1016/j.heares.2020.108136]
Abstract
Two notes sounded sequentially elicit melodic intervals and contours that form the basis of melody. Many previous studies have characterized pitch perception in cochlear implant (CI) users as poor, which may be due to the limited spectro-temporal resolution and/or spectral warping of electric hearing compared to acoustic hearing (AH). Poor pitch perception in CIs has been shown to distort melodic interval perception. To characterize this interval distortion, we recruited CI users with either normal (single-sided deafness, SSD) or limited (bimodal) AH in the non-implanted ear. The contralateral AH allowed for a stable reference with which to compare melodic interval perception in the CI ear within the same listener. Melodic interval perception was compared across acoustic and electric hearing in 9 CI listeners (4 bimodal and 5 SSD). Participants were asked to rank the size of a probe interval presented to the CI ear against a reference interval presented to the contralateral AH ear using a method of constant stimuli. Ipsilateral interval ranking was also measured within the AH ear to ensure that listeners understood the task and that interval ranking was stable and accurate within AH. Stimuli were delivered to the AH ear via headphones and to the CI ear via direct audio input (DAI) to participants' clinical processors. During testing, a reference and a probe interval were presented and participants indicated which was larger. Ten comparisons for each reference-probe combination were presented. Psychometric functions were fit to the data to determine the probe interval size that matched the reference interval. Across all AH reference intervals, the mean matched CI interval was 1.74 times larger than the AH reference. However, there was great inter-subject variability. For some participants, CI interval distortion varied across different reference AH intervals; for others, it was constant. Within the AH ear, ipsilateral interval ranking was accurate, confirming that participants understood the task. No significant differences in the patterns of results were observed between bimodal and SSD CI users. The present data show that much larger intervals were needed with the CI to match contralateral AH reference intervals. As such, input melodic patterns are likely to be perceived as frequency-compressed and/or warped with electric hearing, with less variation among notes in the pattern. The high inter-subject variability in CI interval distortion suggests that CI signal processing should be optimized for individual CI users.
Affiliation(s)
- Emily R Spitzer
- New York University Grossman School of Medicine, Department of Otolaryngology-Head and Neck Surgery, 462 1st Avenue, NBV 5E5, New York 10016, NY, USA.
- David R Friedmann
- New York University Grossman School of Medicine, Department of Otolaryngology-Head and Neck Surgery, 462 1st Avenue, NBV 5E5, New York 10016, NY, USA
- David M Landsberger
- New York University Grossman School of Medicine, Department of Otolaryngology-Head and Neck Surgery, 462 1st Avenue, NBV 5E5, New York 10016, NY, USA
23
Van't Hooft JJ, Pijnenburg YAL, Sikkes SAM, Scheltens P, Spikman JM, Jaschke AC, Warren JD, Tijms BM. Frontotemporal dementia, music perception and social cognition share neurobiological circuits: A meta-analysis. Brain Cogn 2021; 148:105660. [PMID: 33421942 DOI: 10.1016/j.bandc.2020.105660]
Abstract
Frontotemporal dementia (FTD) is a neurodegenerative disease that presents with profound changes in social cognition. Music might be a sensitive probe for social cognition abilities, but underlying neurobiological substrates are unclear. We performed a meta-analysis of voxel-based morphometry studies in FTD patients and functional MRI studies for music perception and social cognition tasks in cognitively normal controls to identify robust patterns of atrophy (FTD) or activation (music perception or social cognition). Conjunction analyses were performed to identify overlapping brain regions. In total 303 articles were included: 53 for FTD (n = 1153 patients, 42.5% female; 1337 controls, 53.8% female), 28 for music perception (n = 540, 51.8% female) and 222 for social cognition in controls (n = 5664, 50.2% female). We observed considerable overlap in atrophy patterns associated with FTD, and functional activation associated with music perception and social cognition, mostly encompassing the ventral language network. We further observed overlap across all three modalities in mesolimbic, basal forebrain and striatal regions. The results of our meta-analysis suggest that music perception and social cognition share neurobiological circuits that are affected in FTD. This supports the idea that music might be a sensitive probe for social cognition abilities with implications for diagnosis and monitoring.
24
Burnham BR, Long E, Zeide J. Pitch direction on the perception of major and minor modes. Atten Percept Psychophys 2021; 83:399-414. [PMID: 33230730 DOI: 10.3758/s13414-020-02198-6]
Abstract
One factor affecting the qualia of music perception is the major/minor mode distinction. Major modes are perceived as more arousing, happier, more positive, brighter, and less awkward than minor modes. This difference in the emotionality of modes is also affected by pitch direction, with ascending pitch associated with positive affect and descending pitch with negative affect. The present study examined whether pitch direction influences the identification of major versus minor musical modes. In six experiments, participants were familiarized with ascending and descending major and minor modes. We then played ascending and descending scales or simple eight-note melodies and asked listeners to identify the mode (major or minor). Identification of mode was moderated by pitch direction: major modes were identified more accurately when played with ascending pitch, and minor modes were identified better when played with descending pitch. Additionally, we replicated the difference in emotional affect between major and minor modes. The crossover pattern in mode identification may result from dual activation of positive and negative constructs under specific combinations of mode and pitch direction.
25
Costantino A, Di Stefano N, Taffoni F, Di Pino G, Casale M, Keller F. Embodying melody through a conducting baton: a pilot comparison between musicians and non-musicians. Exp Brain Res 2020; 238:2279-2291. [PMID: 32725358 DOI: 10.1007/s00221-020-05890-z]
Abstract
Finger-tapping tasks have been widely adopted to investigate auditory-motor synchronization, i.e., the coupling of movement with an external auditory rhythm. However, the discrete nature of these movements usually limits their application to the study of beat perception in the context of isochronous rhythms. The purpose of the present pilot study was to test an innovative task that allows investigating bodily responses to complex, non-isochronous rhythms. A conductor's baton was provided to 16 healthy subjects, divided into 2 different groups depending on the years of musical training they had received (musicians or non-musicians). Ad hoc-created melodies, including notes of different durations, were played to the subjects. Each subject was asked to move the baton up and down according to the changes in pitch contour. Software for video analysis and modelling (Tracker®) was used to track the movement of the baton tip. The main parameters used for the analysis were the velocity peaks in the vertical axis. In the musician group, the number of velocity peaks exactly matched the number of notes, while in the non-musician group, the number of velocity peaks exceeded the number of notes. An exploratory data analysis using Poincaré plots suggested a greater degree of coupling between hand-arm movements and melody in musicians both with isochronous and non-isochronous rhythms. The calculated root mean square error (RMSE) between the note onset times and the velocity peaks, and the analysis of the distribution of velocity peaks in relationship to note onset times confirmed the effect of musical training. Notwithstanding the small number of participants, these results suggest that this novel behavioural task could be used to investigate auditory-motor coupling in the context of music in an ecologically valid setting. Furthermore, the task may be used for rhythm training and rehabilitation in neurological patients with movement disorders.
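The study's alignment metric (RMSE between note onset times and baton velocity peaks) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the nearest-peak matching rule and the name `onset_peak_rmse` are assumptions.

```python
import numpy as np

def onset_peak_rmse(note_onsets, velocity_peaks):
    """RMSE (in seconds) between each note onset and its nearest velocity peak.
    Hypothetical sketch of the alignment metric described in the abstract."""
    note_onsets = np.asarray(note_onsets, dtype=float)
    velocity_peaks = np.asarray(velocity_peaks, dtype=float)
    # For each onset, pick the temporally closest velocity peak
    nearest = velocity_peaks[
        np.abs(velocity_peaks[:, None] - note_onsets[None, :]).argmin(axis=0)
    ]
    return float(np.sqrt(np.mean((nearest - note_onsets) ** 2)))
```

A consistent lag of the hand behind the notes shows up directly as a larger RMSE, which is why the measure separates musicians from non-musicians in this task.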
Affiliation(s)
- Andrea Costantino
- Integrated Sleep Surgery Team UCBM, Unit of Otolaryngology - Integrated Therapies in Otolaryngology, Campus Bio-Medico University, Rome, Italy
- Nicola Di Stefano
- Department of Philosophy and Cultural Heritage, Ca' Foscari University of Venice, Venice, Italy
- FAST, Institute of Philosophy of Scientific and Technological Practice, Campus Bio-Medico University, Rome, Italy
- Fabrizio Taffoni
- Advanced Robotics and Human-Centred Technologies - CREO Lab, Campus Bio-Medico University, Rome, Italy
- Giovanni Di Pino
- Research Unit of Neurophysiology and Neuroengineering of Human-Technology Interaction (NeXTlab), Campus Bio-Medico University, Rome, Italy
- Manuele Casale
- Integrated Sleep Surgery Team UCBM, Unit of Otolaryngology - Integrated Therapies in Otolaryngology, Campus Bio-Medico University, Rome, Italy
- Flavio Keller
- FAST, Institute of Philosophy of Scientific and Technological Practice, Campus Bio-Medico University, Rome, Italy
- Laboratory of Developmental Neuroscience and Neural Plasticity, Campus Bio-Medico University, Rome, Italy
26
Liu W, Zhang C, Wang X, Xu J, Chang Y, Ristaniemi T, Cong F. Functional connectivity of major depression disorder using ongoing EEG during music perception. Clin Neurophysiol 2020;131:2413-2422. PMID: 32828045; DOI: 10.1016/j.clinph.2020.06.031.
Abstract
OBJECTIVE The functional connectivity (FC) of major depression disorder (MDD) has not been well studied under naturalistic and continuous stimuli conditions. In this study, we investigated the frequency-specific FC of MDD patients exposed to conditions of music perception using ongoing electroencephalogram (EEG). METHODS First, we applied the phase lag index (PLI) method to calculate the connectivity matrices and graph theory-based methods to measure the topology of brain networks across different frequency bands. Then, classification methods were adopted to identify the most discriminate frequency band for the diagnosis of MDD. RESULTS During music perception, MDD patients exhibited a decreased connectivity pattern in the delta band but an increased connectivity pattern in the beta band. Healthy people showed a left hemisphere-dominant phenomenon, but MDD patients did not show such a lateralized effect. Support vector machine (SVM) achieved the best classification performance in the beta frequency band with an accuracy of 89.7%, sensitivity of 89.4% and specificity of 89.9%. CONCLUSIONS MDD patients exhibited an altered FC in delta and beta bands, and the beta band showed a superiority in the diagnosis of MDD. SIGNIFICANCE Our study provided a promising reference for the diagnosis of MDD, and revealed a new perspective for understanding the topology of MDD brain networks during music perception.
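As a rough sketch of the phase lag index step named in the abstract, assuming already band-passed single-channel signals and using SciPy's Hilbert transform (the paper's full pipeline, graph metrics, and SVM stage are not reproduced here):

```python
import numpy as np
from scipy.signal import hilbert

def pli(x, y):
    """Phase lag index between two 1-D signals:
    PLI = |mean(sign(sin(phase_x - phase_y)))|.
    0 means no consistent phase lead/lag; 1 means a perfectly consistent one."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.sign(np.sin(dphi)))))
```

A connectivity matrix is then just `pli` evaluated over every channel pair within each frequency band, which is the input the graph-theoretic measures operate on.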
Affiliation(s)
- Wenya Liu
- School of Biomedical Engineering, Faculty of Electronic and Electrical Engineering, Dalian University of Technology, 116024 Dalian, China; Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland
- Chi Zhang
- School of Biomedical Engineering, Faculty of Electronic and Electrical Engineering, Dalian University of Technology, 116024 Dalian, China
- Xiaoyu Wang
- School of Biomedical Engineering, Faculty of Electronic and Electrical Engineering, Dalian University of Technology, 116024 Dalian, China
- Jing Xu
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, 116011 Dalian, China
- Yi Chang
- Department of Neurology and Psychiatry, First Affiliated Hospital, Dalian Medical University, 116011 Dalian, China
- Tapani Ristaniemi
- Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland
- Fengyu Cong
- School of Biomedical Engineering, Faculty of Electronic and Electrical Engineering, Dalian University of Technology, 116024 Dalian, China; Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland; School of Artificial Intelligence, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, 116024 Dalian, China; Key Laboratory of Integrated Circuit and Biomedical Electronic System, Liaoning Province, Dalian University of Technology, 116024 Dalian, China
27
Yüksel M, Çiprut A. Music and psychoacoustic perception abilities in cochlear implant users with auditory neuropathy spectrum disorder. Int J Pediatr Otorhinolaryngol 2020;131:109865. PMID: 31945735; DOI: 10.1016/j.ijporl.2020.109865.
Abstract
OBJECTIVE Auditory neuropathy spectrum disorder (ANSD) is a condition wherein the pre-neural or cochlear outer hair cell activity is intact, but the neural activity in the auditory nerve is disrupted. Cochlear implants (CI) can be beneficial for subjects with ANSD; however, little is known about the music perception and psychoacoustic abilities of CI users with ANSD. Music perception in CI users is a multidimensional and complex ability requiring the contribution of both auditory and non-auditory abilities. Even though auditory abilities lay the foundation, patient-related variables such as ANSD may affect music perception. This study aimed to evaluate the psychoacoustic and music perception abilities of CI recipients with ANSD. STUDY DESIGN Twelve CI users with ANSD and twelve age- and gender-matched CI users with sensorineural hearing loss (SNHL) were evaluated. Music perception abilities were measured using the Turkish version of the Clinical Assessment of Music Perception (T-CAMP) test. Psychoacoustic abilities were measured using the spectral ripple discrimination (SRD) and temporal modulation transfer function (TMTF) tests. In addition, the ages of diagnosis and implantation were recorded. RESULTS Pitch direction discrimination (PDD), timbre recognition, SRD, and TMTF performance of CI users with ANSD were concordant with those reported in previous studies, and differences between the ANSD and SNHL groups were not statistically significant. However, the ANSD group performed significantly more poorly than the SNHL group in the melody recognition subtest of T-CAMP. CONCLUSION CI can prove beneficial for patients with ANSD with respect to their music and psychoacoustic abilities, similar to patients with SNHL, except for melody recognition. Recognition of melodies requires both auditory and non-auditory abilities, and ANSD may have an extensive but subtle effect in the life of CI users.
Affiliation(s)
- Mustafa Yüksel
- Marmara University, Institute of Health Sciences, Audiology and Speech Disorders Program, İstanbul, Turkey
- Ayça Çiprut
- Marmara University Faculty of Medicine, Audiology Department, İstanbul, Turkey
28
Abstract
Control of stimulus confounds is an ever-present, and ever-important, aspect of experimental design. Typically, researchers concern themselves with such control on a local level, ensuring that individual stimuli contain only the properties they intend for them to represent. Significantly less attention, however, is paid to stimulus properties in the aggregate, aspects that, although not present in individual stimuli, can nevertheless become emergent properties of the stimulus set when viewed in total. This paper describes two examples of such effects. The first (Case Study 1) focuses on emergent properties of pairs of to-be-performed tones on a piano keyboard, and the second (Case Study 2) focuses on emergent properties of short, atonal melodies in a perception/memory task. In both cases these sets of stimuli induced identifiable tonal influences despite being explicitly created to be devoid of musical tonality. These results highlight the importance of monitoring aggregate stimulus properties in one's research, and are discussed with reference to their implications for interpreting psychological findings quite generally.
29
Abstract
Perception of sounds occurs in the context of surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, categorization of later sounds becomes biased through spectral contrast effects (SCEs). Past research has shown SCEs to bias categorization of speech and music alike. Recent studies have extended SCEs to naturalistic listening conditions when the inherent spectral composition of (unfiltered) sentences biased speech categorization. Here, we tested whether natural (unfiltered) music would similarly bias categorization of French horn and tenor saxophone targets. Preceding contexts were either solo performances of the French horn or tenor saxophone (unfiltered; 1 second duration in Experiment 1, or 3 seconds duration in Experiment 2) or a string quintet processed to emphasize frequencies in the horn or saxophone (filtered; 1 second duration). Both approaches produced SCEs, producing more "saxophone" responses following horn / horn-like contexts and vice versa. One-second filtered contexts produced SCEs as in previous studies, but 1-second unfiltered contexts did not. Three-second unfiltered contexts biased perception, but to a lesser degree than filtered contexts did. These results extend SCEs in musical instrument categorization to everyday listening conditions.
30
Kaneshiro B, Nguyen DT, Norcia AM, Dmochowski JP, Berger J. Natural music evokes correlated EEG responses reflecting temporal structure and beat. Neuroimage 2020;214:116559. PMID: 31978543; DOI: 10.1016/j.neuroimage.2020.116559.
Abstract
The brain activity of multiple subjects has been shown to synchronize during salient moments of natural stimuli, suggesting that correlation of neural responses indexes a brain state operationally termed 'engagement'. While past electroencephalography (EEG) studies have considered both auditory and visual stimuli, the extent to which these results generalize to music-a temporally structured stimulus for which the brain has evolved specialized circuitry-is less understood. Here we investigated neural correlation during natural music listening by recording EEG responses from N=48 adult listeners as they heard real-world musical works, some of which were temporally disrupted through shuffling of short-term segments (measures), reversal, or randomization of phase spectra. We measured correlation between multiple neural responses (inter-subject correlation) and between neural responses and stimulus envelope fluctuations (stimulus-response correlation) in the time and frequency domains. Stimuli retaining basic musical features, such as rhythm and melody, elicited significantly higher behavioral ratings and neural correlation than did phase-scrambled controls. However, while unedited songs were self-reported as most pleasant, time-domain correlations were highest during measure-shuffled versions. Frequency-domain measures of correlation (coherence) peaked at frequencies related to the musical beat, although the magnitudes of these spectral peaks did not explain the observed temporal correlations. Our findings show that natural music evokes significant inter-subject and stimulus-response correlations, and suggest that the neural correlates of musical 'engagement' may be distinct from those of enjoyment.
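The inter-subject correlation in this line of work is typically computed with correlated components analysis; a minimal pairwise-Pearson stand-in (an illustrative simplification, not the paper's method) conveys the core idea of correlating response time series across listeners:

```python
import numpy as np
from itertools import combinations

def pairwise_isc(responses):
    """Mean pairwise Pearson correlation across subjects.
    responses: array of shape (n_subjects, n_samples), one row per subject's
    response time series to the same stimulus. Simplified stand-in for the
    correlated-components ISC used in studies like this one."""
    r = [np.corrcoef(responses[i], responses[j])[0, 1]
         for i, j in combinations(range(len(responses)), 2)]
    return float(np.mean(r))
```

Stimulus-response correlation follows the same template, except each subject's response is correlated against the stimulus envelope rather than against other subjects.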
Affiliation(s)
- Blair Kaneshiro
- Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, USA; Center for the Study of Language and Information, Stanford University, Stanford, CA, USA; Department of Otolaryngology Head & Neck Surgery, Stanford University School of Medicine, Palo Alto, CA, USA
- Duc T Nguyen
- Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, USA; Center for the Study of Language and Information, Stanford University, Stanford, CA, USA; Department of Biomedical Engineering, City College of New York, New York, NY, USA
- Anthony M Norcia
- Department of Psychology, Stanford University, Stanford, CA, USA
- Jacek P Dmochowski
- Department of Biomedical Engineering, City College of New York, New York, NY, USA; Department of Psychology, Stanford University, Stanford, CA, USA
- Jonathan Berger
- Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, USA
31
Ben-Nathan M, Salti M, Algom D. The many faces of music: Attending to music and delight in the same music are governed by different rules of processing. Acta Psychol (Amst) 2019;200:102949. PMID: 31675619; DOI: 10.1016/j.actpsy.2019.102949.
Abstract
Music generates manifold experiences in humans, some perceptual and some hedonic. Are these qualia governed by the same principles in processing? In particular, do the loudness and timbre of melodies combine to produce perception and likeability by the same rules of integration? In Experiment 1, we tested selective attention to loudness and timbre by applying Garner's speeded classification paradigm and found both to be perceptually integral dimensions. In Experiment 2, we tested liking for the same music by applying Norman Anderson's functional measurement model and found loudness and timbre to combine by an adding-type rule. In Experiment 3, we applied functional measurement for perception and found loudness and timbre to interact as in Experiment 1. These results show that people cannot or do not attend selectively or perceive separately any one music component, but that they nonetheless can isolate the components when they enjoy (or disenjoy) listening to music. We conclude that perception of the constituent components of a musical piece and the processing of the same components for liking are governed by different rules.
32
Corrow SL, Stubbs JL, Schlaug G, Buss S, Paquette S, Duchaine B, Barton JJS. Perception of musical pitch in developmental prosopagnosia. Neuropsychologia 2019;124:87-97. PMID: 30625291; DOI: 10.1016/j.neuropsychologia.2018.12.022.
Abstract
Studies of developmental prosopagnosia have often shown that developmental prosopagnosia differentially affects human face processing over non-face object processing. However, little consideration has been given to whether this condition is associated with perceptual or sensorimotor impairments in other modalities. Comorbidities have played a role in theories of other developmental disorders such as dyslexia, but studies of developmental prosopagnosia have often focused on the nature of the visual recognition impairment despite evidence for widespread neural anomalies that might affect other sensorimotor systems. We studied 12 subjects with developmental prosopagnosia with a battery of auditory tests evaluating pitch and rhythm processing as well as voice perception and recognition. Overall, three subjects were impaired in fine pitch discrimination, a prevalence of 25% that is higher than the estimated 4% prevalence of congenital amusia in the general population. This was a selective deficit, as rhythm perception was unaffected in all 12 subjects. Furthermore, two of the three prosopagnosic subjects who were impaired in pitch discrimination had intact voice perception and recognition, while two of the remaining nine subjects had impaired voice recognition but intact pitch perception. These results indicate that, in some subjects with developmental prosopagnosia, the face recognition deficit is not an isolated impairment but is associated with deficits in other domains, such as auditory perception. These deficits may form part of a broader syndrome which could be due to distributed microstructural anomalies in various brain networks, possibly with a common theme of right hemispheric predominance.
33
Abstract
Congenital amusia is currently thought to be a life-long neurogenetic disorder in music perception, impervious to training in pitch or melody discrimination. This study provides an explicit test of whether amusic deficits can be reduced with training. Twenty amusics and 20 matched controls participated in four sessions of psychophysical training involving either pure-tone (500 Hz) pitch discrimination or a control task of lateralization (interaural level differences for bandpass white noise). Pure-tone pitch discrimination at low, medium, and high frequencies (500, 2000, and 8000 Hz) was measured before and after training (pretest and posttest) to determine the specificity of learning. Melody discrimination was also assessed before and after training using the full Montreal Battery of Evaluation of Amusia, the most widely used standardized test to diagnose amusia. Amusics performed more poorly than controls in pitch but not localization discrimination, but both groups improved with practice on the trained stimuli. Learning was broad, occurring across all three frequencies and melody discrimination for all groups, including those who trained on the non-pitch control task. Following training, 11 of 20 amusics no longer met the global diagnostic criteria for amusia. A separate group of untrained controls (n = 20), who also completed melody discrimination and pretest, improved by an equal amount as trained controls on all measures, suggesting that the bulk of learning for the control group occurred very rapidly from the pretest. Thirty-one trained participants (13 amusics) returned one year later to assess long-term maintenance of pitch and melody discrimination. On average, there was no change in performance between posttest and one-year follow-up, demonstrating that improvements on pitch- and melody-related tasks in amusics and controls can be maintained. The findings indicate that amusia is not always a life-long deficit when using the current standard diagnostic criteria.
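The abstract does not specify the adaptive procedure trial-by-trial; a generic two-down one-up staircase, which converges on the ~70.7%-correct point, illustrates the family of methods commonly used for such pitch-discrimination thresholds. The function name, step size, and reversal counts here are illustrative assumptions, not the authors' settings.

```python
def staircase_2down1up(respond, start=6.0, step=0.5, n_reversals=8):
    """Generic two-down one-up adaptive staircase.
    `respond(level)` returns True if the listener answered correctly at
    `level` (e.g., a frequency difference in percent or semitones)."""
    level, correct_in_row, direction, reversals = start, 0, -1, []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:       # two correct in a row: make it harder
                correct_in_row = 0
                if direction == +1:       # was moving up: record a reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.01)
        else:                             # one wrong: make it easier
            correct_in_row = 0
            if direction == -1:           # was moving down: record a reversal
                reversals.append(level)
            direction = +1
            level = level + step
    # Threshold estimate: mean of the last few reversal levels
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

With a simulated listener who is correct whenever the level is at or above a true threshold, the staircase oscillates around that threshold and the reversal average estimates it.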
Affiliation(s)
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
34
Romero-Rivas C, Vera-Constán F, Rodríguez-Cuadrado S, Puigcerver L, Fernández-Prieto I, Navarra J. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli. Neuropsychologia 2018;117:67-74. PMID: 29753020; DOI: 10.1016/j.neuropsychologia.2018.05.009.
Abstract
Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events.
Affiliation(s)
- Fátima Vera-Constán
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Departamento de Metodología y Psicología Básica, Universidad de Murcia, Murcia, Spain
- Sara Rodríguez-Cuadrado
- Department of Psychology, Edge Hill University, Ormskirk, UK; Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Laura Puigcerver
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Irune Fernández-Prieto
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Neuropsychology & Cognition Group, Department of Psychology and Research Institute for Health Sciences (iUNICS), University of Balearic Islands, Palma, Spain
- Jordi Navarra
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
35
Walton AE, Washburn A, Langland-Hassan P, Chemero A, Kloos H, Richardson MJ. Creating Time: Social Collaboration in Music Improvisation. Top Cogn Sci 2017;10:95-119. PMID: 29152904; DOI: 10.1111/tops.12306.
Abstract
Musical collaboration emerges from the complex interaction of environmental and informational constraints, including those of the instruments and the performance context. Music improvisation in particular is more like everyday interaction in that dynamics emerge spontaneously without a rehearsed score or script. We examined how the structure of the musical context affords and shapes interactions between improvising musicians. Six pairs of professional piano players improvised with two different backing tracks while we recorded both the music produced and the movements of their heads, left arms, and right arms. The backing tracks varied in rhythmic and harmonic information, from a chord progression to a continuous drone. Differences in movement coordination and playing behavior were evaluated using the mathematical tools of complex dynamical systems, with the aim of uncovering the multiscale dynamics that characterize musical collaboration. Collectively, the findings indicated that each backing track afforded the emergence of different patterns of coordination with respect to how the musicians played together, how they moved together, as well as their experience collaborating with each other. Additionally, listeners' experiences of the music when rating audio recordings of the improvised performances were related to the way the musicians coordinated both their playing behavior and their bodily movements. Accordingly, the study revealed how complex dynamical systems methods (namely recurrence analysis) can capture the turn-taking dynamics that characterized both the social exchange of the music improvisation and the sounds of collaboration more generally. The study also demonstrated how musical improvisation provides a way of understanding how social interaction emerges from the structure of the behavioral task context.
Affiliation(s)
- Auriel Washburn
- Center for Computer Research in Music and Acoustics, Stanford University
- Anthony Chemero
- Department of Philosophy, University of Cincinnati; Department of Psychology, University of Cincinnati
- Heidi Kloos
- Department of Psychology, University of Cincinnati
- Michael J Richardson
- Department of Psychology and Perception in Action Research Centre, Faculty of Human Sciences, Macquarie University
36
Zhang J, Yang T, Bao Y, Li H, Pöppel E, Silveira S. Sadness and happiness are amplified in solitary listening to music. Cogn Process 2017;19:133-139. PMID: 28986700; DOI: 10.1007/s10339-017-0832-7.
Abstract
Previous studies have shown that music is a powerful means to convey affective states, but it remains unclear whether and how social context shapes the intensity and quality of emotions perceived in music. Using a within-subject design, we studied this question in two experimental settings, i.e. when subjects were alone versus in the company of others without direct social interaction or feedback. Non-vocal musical excerpts of the emotional qualities happiness or sadness were rated on arousal and valence dimensions. We found evidence for an amplification of perceived emotion in the solitary listening condition, i.e. happy music was rated as happier and more arousing when nobody else was around and, in an analogous manner, sad music was perceived as sadder. This difference might be explained by a shift of attention in the presence of others. The observed interaction of perceived emotion and social context did not differ for stimuli of different cultural origin.
Affiliation(s)
- Jinfan Zhang
- School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behaviour and Mental Health, Peking University, 5 Yiheyuan Road, Beijing, 100871, People's Republic of China; Institute of Medical Psychology and Human Science Center, Ludwig-Maximilian University, Munich, 80336, Germany
- Taoxi Yang
- School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behaviour and Mental Health, Peking University, 5 Yiheyuan Road, Beijing, 100871, People's Republic of China; Institute of Medical Psychology and Human Science Center, Ludwig-Maximilian University, Munich, 80336, Germany
- Yan Bao
- School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behaviour and Mental Health, Peking University, 5 Yiheyuan Road, Beijing, 100871, People's Republic of China; Institute of Medical Psychology and Human Science Center, Ludwig-Maximilian University, Munich, 80336, Germany
- Hui Li
- School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behaviour and Mental Health, Peking University, 5 Yiheyuan Road, Beijing, 100871, People's Republic of China; Institute of Medical Psychology and Human Science Center, Ludwig-Maximilian University, Munich, 80336, Germany
- Ernst Pöppel
- School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behaviour and Mental Health, Peking University, 5 Yiheyuan Road, Beijing, 100871, People's Republic of China; Institute of Medical Psychology and Human Science Center, Ludwig-Maximilian University, Munich, 80336, Germany
- Sarita Silveira
- Institute of Medical Psychology and Human Science Center, Ludwig-Maximilian University, Munich, 80336, Germany
37
McClaskey CM. Standard-interval size affects interval-discrimination thresholds for pure-tone melodic pitch intervals. Hear Res 2017;355:64-69. PMID: 28935162; DOI: 10.1016/j.heares.2017.09.008.
Abstract
Our ability to discriminate between pitch intervals of different sizes is not only an important aspect of speech and music perception, but also a useful means of evaluating higher-level pitch perception. The current study examined how pitch-interval discrimination was affected by the size of the intervals being compared, and by musical training. Using an adaptive procedure, pitch-interval discrimination thresholds were measured for sequentially presented pure-tone intervals with standard intervals of 1 semitone (minor second), 6 semitones (the tritone), and 7 semitones (perfect fifth). Listeners were classified into three groups based on musical experience: non-musicians had less than 3 years of informal musical experience, amateur musicians had at least 10 years of experience but no formal music theory training, and expert musicians had at least 12 years of experience with 1 year of formal ear training, and were either currently pursuing or had earned a Bachelor's degree as either a music major or music minor. Consistent with previous studies, discrimination thresholds obtained from expert musicians were significantly lower than those from other listeners. Thresholds also varied significantly with the magnitude of the standard interval and were higher for conditions with a 6- or 7-semitone standard than for a 1-semitone standard. These data show that interval-discrimination thresholds are strongly affected by the size of the standard interval.
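The semitone standards above map onto frequency ratios in equal temperament via f = f0 · 2^(n/12). A one-line helper (illustrative arithmetic, not from the paper) makes the interval sizes concrete:

```python
def interval_freq(f0, semitones):
    """Frequency of the tone `semitones` above f0 in 12-tone equal temperament:
    f = f0 * 2**(n/12). E.g., 7 semitones above A4 (440 Hz) is the perfect
    fifth near 659.26 Hz; 12 semitones is the octave at exactly 2*f0."""
    return f0 * 2 ** (semitones / 12)
```

So a 1-semitone standard compares tones about 5.9% apart in frequency, while the 6- and 7-semitone standards span ratios of √2 and ~1.498, respectively.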
|
38
|
Mao Y, Yang J, Hahn E, Xu L. Auditory perceptual efficacy of nonlinear frequency compression used in hearing aids: A review. J Otol 2017; 12:97-111. [PMID: 29937844 PMCID: PMC5963461 DOI: 10.1016/j.joto.2017.06.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2017] [Revised: 05/31/2017] [Accepted: 06/28/2017] [Indexed: 11/30/2022] Open
Abstract
Many patients with sensorineural hearing loss have a precipitous high-frequency loss with relatively good thresholds in the low frequencies. The present paper briefly introduces and compares the basic principles of four types of frequency-lowering algorithms, with emphasis on nonlinear frequency compression (NLFC). A review of the effects of the NLFC algorithm on speech and music perception and sound quality appraisal is then provided. For vowel perception, it seems that the benefits provided by NLFC are limited, which is probably related to the parameter settings of the compression. For consonant perception, several studies have shown that NLFC provides improved perception of high-frequency consonants such as /s/ and /z/. However, a few other studies have demonstrated negative results in consonant perception. In terms of sentence recognition, persistent use of NLFC might provide improved performance. Compared to conventional processing, NLFC does not alter speech sound quality appraisal or music perception as long as the compression setting is not too aggressive. In the subsequent section, the relevant factors with regard to NLFC settings, time-course of acclimatization, listener characteristics, and perceptual tasks are discussed. Although the literature shows mixed results on the perceptual efficacy of NLFC, this technique has improved certain aspects of speech understanding in some hearing-impaired listeners. Little research is available on speech perception outcomes in languages other than English. More clinical data are needed to verify the perceptual efficacy of NLFC in patients with precipitous high-frequency hearing loss. Such knowledge will help guide clinical rehabilitation of those patients.
Affiliation(s)
- Yitao Mao
- Department of Radiology, Xiangya Hospital, Central South University, Changsha, Hunan, China; Communication Sciences and Disorders, Ohio University, Athens, OH, USA
| | - Jing Yang
- Communication Sciences and Disorders, University of Central Arkansas, Conway, AR, USA
| | - Emily Hahn
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
| | - Li Xu
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
| |
|
39
|
Abstract
This study explores the influence of bilingualism on the cognitive processing of language and music. Specifically, we investigate how infants learning a non-tone language perceive linguistic and musical pitch and how bilingualism affects cross-domain pitch perception. Dutch monolingual and bilingual infants of 8-9 months participated in the study. All infants had Dutch as one of their first languages. The other first languages, varying among bilingual families, were not tone or pitch-accent languages. In two experiments, infants were tested on the discrimination of a lexical (N = 42) or a violin (N = 48) pitch contrast via a visual habituation paradigm. The two contrasts shared identical pitch contours but differed in timbre. Non-tone-language-learning infants did not discriminate the lexical contrast regardless of their ambient language environment. When perceiving the violin contrast, bilingual but not monolingual infants demonstrated robust discrimination. We attribute bilingual infants' heightened sensitivity in the musical domain to the enhanced acoustic sensitivity stemming from a bilingual environment. The distinct perceptual patterns between language and music and the influence of acoustic salience on perception suggest processing divergence and association in the first year of life. Results indicate that the perception of music may entail both a neural network shared with language processing and a unique neural network that is distinct from other cognitive functions.
Affiliation(s)
- Liquan Liu
- School of Social Sciences and Psychology, Western Sydney University, Sydney, Australia.
- Utrecht Institute of Linguistics OTS, Utrecht University, Utrecht, The Netherlands.
| | - René Kager
- School of Social Sciences and Psychology, Western Sydney University, Sydney, Australia
| |
|
40
|
Rosemann S, Brunner F, Kastrup A, Fahle M. Musical, visual and cognitive deficits after middle cerebral artery infarction. eNeurologicalSci 2016; 6:25-32. [PMID: 29260010 PMCID: PMC5721573 DOI: 10.1016/j.ensci.2016.11.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2016] [Revised: 07/28/2016] [Accepted: 11/03/2016] [Indexed: 11/24/2022] Open
Abstract
The perception of music can be impaired after a stroke. This dysfunction is called amusia, and amusia patients often also show deficits in visual abilities, language, memory, learning, and attention. The current study investigated whether deficits in music perception are selective for musical input or generalize to other perceptual abilities. Additionally, we tested the hypothesis that deficits in working memory or attention account for impairments in music perception. Twenty stroke patients with small infarctions in the supply area of the middle cerebral artery were investigated with tests for music and visual perception, categorization, neglect, working memory and attention. Two amusia patients with selective deficits in music perception and pronounced lesions were identified. Working memory and attention deficits were highly correlated across the patient group, but no correlation with musical abilities was obtained. Lesion analysis revealed that lesions in small areas of the putamen and globus pallidus were associated with a rhythm perception deficit. We conclude that neither a general perceptual deficit nor a minor domain-general deficit can account for impairments in the music perception task. We do, however, find support for the modular organization of the music perception network, with brain areas specialized for musical functions, as musical deficits were not correlated with any other impairment.
Affiliation(s)
| | | | | | - Manfred Fahle
- Department of Human-Neurobiology, University of Bremen, Germany
| |
|
41
|
Zhang J, Jiang C, Zhou L, Yang Y. Perception of hierarchical boundaries in music and its modulation by expertise. Neuropsychologia 2016; 91:490-498. [PMID: 27659874 DOI: 10.1016/j.neuropsychologia.2016.09.013] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2016] [Revised: 09/16/2016] [Accepted: 09/17/2016] [Indexed: 10/21/2022]
Abstract
Hierarchical structure with units of different timescales is a key feature of music. For the perception of such structures, the detection of each boundary is crucial. Here, using electroencephalography (EEG), we explore the perception of hierarchical boundaries in music, and test whether musical expertise modifies such processing. Musicians and non-musicians were presented with musical excerpts containing boundaries at three hierarchical levels, including section, phrase and period boundaries. A non-boundary condition was chosen as the baseline. Recordings from musicians showed that a closure positive shift (CPS) was evoked at all three boundaries, and its amplitude increased as the hierarchical level became higher, suggesting that musicians could represent musical events at different timescales in a hierarchical way. For non-musicians, the CPS was elicited only at the period boundary, and indistinguishable negativities were induced at all three boundaries. The results indicate that non-musicians perceived the boundaries in a different and less differentiated way. Our findings reveal, for the first time, an ERP correlate of perceiving hierarchical boundaries in music, and show that phrasing ability can be enhanced by musical expertise.
Affiliation(s)
- Jingjing Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, 16 Lincui Road, Chaoyang District, Beijing 100101, China; University of Chinese Academy of Sciences, No.19A Yuquan Road, Beijing 100049, China.
| | - Cunmei Jiang
- Music College, Shanghai Normal University, No.100 Guilin Road, Shanghai 200234, China.
| | - Linshu Zhou
- Music College, Shanghai Normal University, No.100 Guilin Road, Shanghai 200234, China.
| | - Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, 16 Lincui Road, Chaoyang District, Beijing 100101, China.
| |
|
42
|
Abstract
Some severely brain-injured patients remain unresponsive, showing only reflex movements without any response to command. This syndrome has been named unresponsive wakefulness syndrome (UWS). The objective of the present study was to determine whether UWS patients are able to alter their brain activity using a neurofeedback (NFB) technique. A small sample of three patients received a daily session of NFB for 3 weeks. We applied the ratio of theta and beta amplitudes as the feedback variable. Using an automatic threshold function, patients heard their favourite music whenever their theta/beta ratio dropped below the threshold. Changes in awareness were assessed weekly with the JFK Coma Recovery Scale-Revised during each treatment week, as well as 3 weeks before and after NFB. Two patients showed a decrease in their theta/beta ratio and theta amplitudes during this period. The third patient showed no systematic changes in his EEG activity. The results of our study provide the first evidence that NFB can be used in patients in a state of unresponsive wakefulness.
Affiliation(s)
- Ingo Keller
- Schoen Klinik Bad Aibling, Kolbermoorer Strasse 72, 83043, Bad Aibling, Germany.
| | - Ruta Garbacenkaite
- Clinical Neuropsychology Unit and Outpatient Service, Saarland University, Saarbruecken, Germany
| |
|
43
|
Calvino M, Gavilán J, Sánchez-Cuadrado I, Pérez-Mora RM, Muñoz E, Díez-Sebastián J, Lassaletta L. Using the HISQUI29 to assess the sound quality levels of Spanish adults with unilateral cochlear implants and no contralateral hearing. Eur Arch Otorhinolaryngol 2016. [PMID: 26440105 DOI: 10.1007/s00405-015-3789-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
To evaluate cochlear implant (CI) users' self-reported level of sound quality and quality of life (QoL). Sound quality was self-evaluated using the Hearing Implant Sound Quality Index (HISQUI29). HISQUI29 scores were further examined in three subsets. QoL was self-evaluated using the Glasgow Benefit Inventory (GBI). GBI scores were further examined in three subsets. Possible correlations between the HISQUI29 and GBI were explored. Additional possible correlations between these scores and subjects' pure tone averages, speech perception scores, age at implantation, duration of hearing loss, duration of CI use, gender, and implant type were explored. Subjects derived a "moderate" sound quality level from their CI. Television, radio, and telephone tasks were easier in quiet than in background noise. 89% of subjects reported their QoL benefited from having a CI. Mean total HISQUI29 score significantly correlated with all subcategories of the GBI. Age at implantation inversely correlated with the total HISQUI29 score and with television and radio understanding. Sentence-in-noise scores significantly correlated with all sound perception scores. Women had a better mean score in music perception and in telephone use than did men. CI users' self-reported levels of sound quality significantly correlated with their QoL. Cochlear implantation had a beneficial impact on subjects' QoL. Understanding speech is easier in quiet than in noise. Music perception remains a challenge for many CI users. The HISQUI29 and the GBI can provide useful information about the everyday effects of future treatment modalities, rehabilitation strategies, and technical developments.
Affiliation(s)
- Miryam Calvino
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Javier Gavilán
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Isabel Sánchez-Cuadrado
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Rosa M Pérez-Mora
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Elena Muñoz
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Jesús Díez-Sebastián
- Clinical Epidemiology Unit, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Luis Lassaletta
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain.
| |
|
44
|
Munjal T, Roy AT, Carver C, Jiradejvong P, Limb CJ. Use of the Phantom Electrode strategy to improve bass frequency perception for music listening in cochlear implant users. Cochlear Implants Int 2016; 16 Suppl 3:S121-8. [PMID: 26561883 DOI: 10.1179/1467010015z.000000000270] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
OBJECTIVES The Phantom Electrode strategy makes use of partial bipolar stimulation on the two most apical electrodes in an effort to extend the frequency range available to cochlear implant (CI) users. This study aimed to quantify the effect of the Phantom Electrode strategy on bass frequency perception during music listening in CI users. METHODS Eleven adult Advanced Bionics users with the Fidelity 120 processing strategy and 16 adult normal-hearing (NH) individuals participated in the study. All subjects completed the CI-MUSHRA (multiple stimulus with hidden reference and anchor), a test of an individual's ability to make discriminations in sound quality following the removal of bass frequency information. NH participants completed the CI-MUSHRA once, whereas CI users completed the task twice - once with their baseline clinical program and once with the Phantom Electrode strategy, in random order. CI users' performance was assessed in comparison with NH performance. RESULTS The Phantom Electrode strategy improved CI users' performance on the CI-MUSHRA compared with Fidelity 120. DISCUSSION Creation of a Phantom Electrode percept through partial bipolar stimulation of the two most apical electrodes appears to improve CI users' perception of bass frequency information in music, contributing to greater accuracy in the ability to detect alterations in musical sound quality. CONCLUSION The Phantom Electrode processing strategy may enhance the experience of listening to music, and acoustic stimuli more broadly, by improving perception of bass frequencies through direction of current towards the apical portion of the cochlea beyond the termination of the electrode.
|
45
|
Abstract
OBJECTIVES Modern cochlear implant (CI) encoding strategies represent the temporal envelope of sounds well but provide limited spectral information. This deficit in spectral information has been implicated as a contributing factor to difficulty with speech perception in noisy conditions, discriminating between talkers and melody recognition. One way to supplement spectral information for CI users is by fitting a hearing aid (HA) to the non-implanted ear. METHODS In this study 14 postlingually deaf adults (half with a unilateral CI and the other half with a CI and an HA (CI + HA)) were tested on measures of music perception and familiar melody recognition. RESULTS CI + HA listeners performed significantly better than CI-only listeners on all pitch-based music perception tasks. The CI + HA group did not perform significantly better than the CI-only group in the two tasks that relied on duration cues. Recognition of familiar melodies was significantly enhanced for the group wearing an HA in addition to their CI. This advantage in melody recognition was increased when melodic sequences were presented with the addition of harmony. CONCLUSION These results show that, for CI recipients with aidable hearing in the non-implanted ear, using a HA in addition to their implant improves perception of musical pitch and recognition of real-world melodies.
|
46
|
Lu X, Ho HT, Sun Y, Johnson BW, Thompson WF. The influence of visual information on auditory processing in individuals with congenital amusia: An ERP study. Neuroimage 2016; 135:142-51. [PMID: 27132045 DOI: 10.1016/j.neuroimage.2016.04.043] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2016] [Accepted: 04/17/2016] [Indexed: 11/15/2022] Open
Abstract
While most normal hearing individuals can readily use prosodic information in spoken language to interpret the moods and feelings of conversational partners, people with congenital amusia report that they often rely more on facial expressions and gestures, a strategy that may compensate for deficits in auditory processing. In this investigation, we used EEG to examine the extent to which individuals with congenital amusia draw upon visual information when making auditory or audio-visual judgments. Event-related potentials (ERP) were elicited by a change in pitch (up or down) between two sequential tones paired with a change in spatial position (up or down) between two visually presented dots. The change in dot position was either congruent or incongruent with the change in pitch. Participants were asked to judge (1) the direction of pitch change while ignoring the visual information (AV implicit task), and (2) whether the auditory and visual changes were congruent (AV explicit task). In the AV implicit task, amusic participants performed significantly worse in the incongruent condition than control participants. ERPs showed an enhanced N2-P3 response to incongruent AV pairings for control participants, but not for amusic participants. However when participants were explicitly directed to detect AV congruency, both groups exhibited enhanced N2-P3 responses to incongruent AV pairings. These findings indicate that amusics are capable of extracting information from both modalities in an AV task, but are biased to rely on visual information when it is available, presumably because they have learned that auditory information is unreliable. We conclude that amusic individuals implicitly draw upon visual information when judging auditory information, even though they have the capacity to explicitly recognize conflicts between these two sensory channels.
Affiliation(s)
- Xuejing Lu
- Department of Psychology, Macquarie University, Sydney, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, NSW, Australia.
| | - Hao T Ho
- Department of Psychology, Macquarie University, Sydney, NSW, Australia
| | - Yanan Sun
- ARC Centre of Excellence in Cognition and its Disorders, NSW, Australia; Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
| | - Blake W Johnson
- ARC Centre of Excellence in Cognition and its Disorders, NSW, Australia; Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
| | - William F Thompson
- Department of Psychology, Macquarie University, Sydney, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, NSW, Australia
| |
|
47
|
Saliba J, Lorenzo-Seva U, Marco-Pallares J, Tillmann B, Zeitouni A, Lehmann A. French validation of the Barcelona Music Reward Questionnaire. PeerJ 2016; 4:e1760. [PMID: 27019776 PMCID: PMC4806630 DOI: 10.7717/peerj.1760] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2015] [Accepted: 02/13/2016] [Indexed: 11/20/2022] Open
Abstract
Background. The Barcelona Music Reward Questionnaire (BMRQ) investigates the main facets of music experience that could explain the variance observed in how people experience reward associated with music. Currently, only English and Spanish versions of this questionnaire are available. The objective of this study is to validate a French version of the BMRQ. Methods. The original BMRQ was translated and adapted into an international French version. The questionnaire was then administered through an online survey aimed at adults aged over 18 years who were fluent in French. Statistical analyses were performed and compared to the original English and Spanish versions for validation purposes. Results. A total of 1,027 participants completed the questionnaire. Most responses were obtained from France (89.4%). Analyses revealed that congruence values between the rotated loading matrix and the ideal loading matrix ranged between 0.88 and 0.96. Factor reliabilities of the subscales (i.e., Musical Seeking, Emotion Evocation, Mood Regulation, Social Reward and Sensory-Motor) also ranged between 0.88 and 0.96. In addition, the reliability of the overall factor score (i.e., Music Reward) was 0.91. Finally, the internal consistency of the overall scale was 0.85. The factorial structure obtained in the French translation was similar to that of the original Spanish and English samples. Conclusion. The French version of the BMRQ appears valid and reliable. Potential applications of the BMRQ include its use as a valuable tool in music reward and emotion research, whether in healthy individuals or in patients suffering from a wide variety of cognitive, neurologic and auditory disorders.
Affiliation(s)
- Joe Saliba
- Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, Canada
| | - Urbano Lorenzo-Seva
- Research Center for Behavior Assessment, Universitat Rovira i Virgili, Tarragona, Spain
| | - Josep Marco-Pallares
- Department of Basic Psychology, Universitat de Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Institut d'Investigacions Biomèdiques de Bellvitge (IDIBELL), L'Hospitalet de Llobregat, Spain
| | - Barbara Tillmann
- Team Auditory Cognition and Psychoacoustics, Lyon Neurosciences Research Center, CNRS-UMR 5292, INSERM U1028, University Lyon 1, Lyon, France
| | - Anthony Zeitouni
- Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, Canada
| | - Alexandre Lehmann
- Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, Canada; International laboratory for Brain, Music and Sound Research (BRAMS), Center for Research on Brain, Language and Music (CRBLM), Montreal, Canada
| |
|
48
|
Seesjärvi E, Särkämö T, Vuoksimaa E, Tervaniemi M, Peretz I, Kaprio J. The Nature and Nurture of Melody: A Twin Study of Musical Pitch and Rhythm Perception. Behav Genet 2015; 46:506-15. [PMID: 26650514 DOI: 10.1007/s10519-015-9774-y] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2015] [Accepted: 11/23/2015] [Indexed: 11/24/2022]
Abstract
Both genetic and environmental factors are known to play a role in our ability to perceive music, but the degree to which they influence different aspects of music cognition is still unclear. We investigated the relative contribution of genetic and environmental effects to melody perception in 384 young adult twins [69 full monozygotic (MZ) twin pairs, 44 full dizygotic (DZ) twin pairs, 70 MZ twins without a co-twin, and 88 DZ twins without a co-twin]. The participants performed three online music tests requiring the detection of pitch changes in a two-melody comparison task (Scale) and of key and rhythm incongruities in single-melody perception tasks (Out-of-key, Off-beat). The results showed predominantly additive genetic effects in the Scale task (58 %, 95 % CI 42-70 %), shared environmental effects in the Out-of-key task (61 %, 49-70 %), and non-shared environmental effects in the Off-beat task (82 %, 61-100 %). This highly different pattern of effects suggests that the contribution of genetic and environmental factors to music perception depends on the degree to which the task calls for acquired knowledge of musical tonal and metric structures.
Affiliation(s)
- Erik Seesjärvi
- Cognitive Brain Research Unit (CBRU), Institute of Behavioural Sciences, University of Helsinki, Siltavuorenpenger 1B, P.O. Box 9, 00014, Helsinki, Finland
| | - Teppo Särkämö
- Cognitive Brain Research Unit (CBRU), Institute of Behavioural Sciences, University of Helsinki, Siltavuorenpenger 1B, P.O. Box 9, 00014, Helsinki, Finland.
| | - Eero Vuoksimaa
- Department of Public Health, University of Helsinki, Helsinki, Finland
| | - Mari Tervaniemi
- Cognitive Brain Research Unit (CBRU), Institute of Behavioural Sciences, University of Helsinki, Siltavuorenpenger 1B, P.O. Box 9, 00014, Helsinki, Finland
| | - Isabelle Peretz
- International Laboratory for Brain, Music, and Sound Research (BRAMS) and Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada; Department of Psychology, Université de Montréal, Montreal, Canada
| | - Jaakko Kaprio
- Department of Public Health, University of Helsinki, Helsinki, Finland
| |
|
49
|
Wilcox LJ, He K, Derkay CS. Identifying musical difficulties as they relate to congenital amusia in the pediatric population. Int J Pediatr Otorhinolaryngol 2015; 79:2411-5. [PMID: 26631597 DOI: 10.1016/j.ijporl.2015.11.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/24/2015] [Revised: 11/02/2015] [Accepted: 11/03/2015] [Indexed: 10/22/2022]
Abstract
INTRODUCTION/OBJECTIVES Approximately 4% of the population fails to develop basic music skills and can be identified as "amusic". Congenital amusia (CA), or "tone deafness", is thought to be a hereditary disorder predominantly affecting the perception and production of music. The gold standard for diagnosis is the Montreal Battery for Evaluation of Amusia (MBEA). This study aims to pinpoint factors in the history that may help identify amusic children and to determine if amusic pediatric patients can be identified using a widely available, shorter test validated in adults. METHODS Subjects ages 7-17 years were recruited to take an online test, validated against the MBEA, for CA. The sections tested recognition of "off-beat" (OB), "mistuned" (MT), and "out-of-key" (OOK) conditions. Parents filled out a questionnaire regarding the subject's past medical, educational, musical exposure, and family history. RESULTS Of 114 subjects recruited, complete data were available for 105, with a mean age of 12.5 years. According to adult criteria, 63/105 (60%) of subjects scored in the "amusic" range. Children >10 years of age scored significantly higher on the off-beat section (p=0.001) and total scores (p=0.025). Subjects who were born prematurely scored significantly lower (p=0.045). Children whose father had difficulties with music scored significantly lower on the off-beat section (p=0.003) and total scores (p=0.008). CONCLUSIONS CA is a disorder that has implications for quality of life. Earlier identification may help elucidate the pathogenesis of the condition and, in the future, allow the institution of prompt treatment. Further studies are needed to identify the most appropriate and convenient tests, as well as the optimal timing of testing, for reliably diagnosing CA in children.
Affiliation(s)
- Lyndy J Wilcox
- Department of Otolaryngology-Head and Neck Surgery, Eastern Virginia Medical School, 600 Gresham Drive, Suite 1100, Norfolk, VA 23507, USA.
| | - Kaidi He
- Department of Otolaryngology-Head and Neck Surgery, Eastern Virginia Medical School, 600 Gresham Drive, Suite 1100, Norfolk, VA 23507, USA.
| | - Craig S Derkay
- Department of Otolaryngology-Head and Neck Surgery, Eastern Virginia Medical School, 600 Gresham Drive, Suite 1100, Norfolk, VA 23507, USA; Department of Pediatric Otolaryngology-Head and Neck Surgery, Children's Hospital of the King's Daughters, 601 Children's Lane, Second Floor, ENT Suite, Norfolk, VA 23507, USA.
| |
|
50
|
Calvino M, Gavilán J, Sánchez-Cuadrado I, Pérez-Mora RM, Muñoz E, Díez-Sebastián J, Lassaletta L. Using the HISQUI29 to assess the sound quality levels of Spanish adults with unilateral cochlear implants and no contralateral hearing. Eur Arch Otorhinolaryngol 2015; 273:2343-53. [PMID: 26440105 DOI: 10.1007/s00405-015-3789-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2015] [Accepted: 09/23/2015] [Indexed: 10/23/2022]
Abstract
To evaluate cochlear implant (CI) users' self-reported level of sound quality and quality of life (QoL). Sound quality was self-evaluated using the Hearing Implant Sound Quality Index (HISQUI29). HISQUI29 scores were further examined in three subsets. QoL was self-evaluated using the Glasgow Benefit Inventory (GBI). GBI scores were further examined in three subsets. Possible correlations between the HISQUI29 and GBI were explored. Additional possible correlations between these scores and subjects' pure tone averages, speech perception scores, age at implantation, duration of hearing loss, duration of CI use, gender, and implant type were explored. Subjects derived a "moderate" sound quality level from their CI. Television, radio, and telephone tasks were easier in quiet than in background noise. 89% of subjects reported their QoL benefited from having a CI. Mean total HISQUI29 score significantly correlated with all subcategories of the GBI. Age at implantation inversely correlated with the total HISQUI29 score and with television and radio understanding. Sentence-in-noise scores significantly correlated with all sound perception scores. Women had a better mean score in music perception and in telephone use than did men. CI users' self-reported levels of sound quality significantly correlated with their QoL. Cochlear implantation had a beneficial impact on subjects' QoL. Understanding speech is easier in quiet than in noise. Music perception remains a challenge for many CI users. The HISQUI29 and the GBI can provide useful information about the everyday effects of future treatment modalities, rehabilitation strategies, and technical developments.
Affiliation(s)
- Miryam Calvino
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Javier Gavilán
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Isabel Sánchez-Cuadrado
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Rosa M Pérez-Mora
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Elena Muñoz
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Jesús Díez-Sebastián
- Clinical Epidemiology Unit, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain
| | - Luis Lassaletta
- Department of Otolaryngology, IdiPAZ Research Institute, La Paz University Hospital, Paseo de La Castellana 261, 28046, Madrid, Spain.
| |
|