1
Pei Y, Xu Z, He Y, Liu X, Bai Y, Kwok SC, Li X, Wang Z. Effects of musical expertise on line section and line extension. Front Psychol 2024; 14:1190098. PMID: 38655497; PMCID: PMC11036337; DOI: 10.3389/fpsyg.2023.1190098. Received 03/24/2023; accepted 10/17/2023.
Abstract
Background: This study investigated whether music training leads to better length estimation and/or a rightward bias by comparing the performance of musicians (pianists) and non-musicians on line section and line extension tasks. Methods: One hundred and sixteen participants (62 musicians and 54 non-musicians) completed line section and line extension tasks under three conditions: 1/2, 1/3, and 2/3. Results: A mixed repeated-measures ANOVA revealed a significant group × condition interaction: musicians were more accurate than non-musicians in all line section tasks and showed no obvious pseudoneglect, whereas their overall performance on the line extension tasks was comparable to that of non-musicians, with greater accuracy only in the 1/2 line extension condition. Conclusion: These findings indicate a dissociation between the effects of music training on line section and line extension. This dissociation does not support the view that music training has a general beneficial effect on line estimation, and it points to a potentially important limit on the effects of music training on spatial cognition.
Affiliation(s)
- Yilai Pei
- Key Laboratory of Brain Functional Genomics, Ministry of Education and Shanghai, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Zhiyuan Xu
- Key Laboratory of Brain Functional Genomics, Ministry of Education and Shanghai, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yibo He
- Key Laboratory of Brain Functional Genomics, Ministry of Education and Shanghai, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Xinxin Liu
- Key Laboratory of Brain Functional Genomics, Ministry of Education and Shanghai, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yuxuan Bai
- Key Laboratory of Brain Functional Genomics, Ministry of Education and Shanghai, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Sze Chai Kwok
- Key Laboratory of Brain Functional Genomics, Ministry of Education and Shanghai, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Shanghai Changning Mental Health Center, Shanghai, China
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, China
- Phylo-Cognition Laboratory, Division of Natural and Applied Sciences, Data Science Research Center, Duke Institute for Brain Sciences, Duke Kunshan University, Kunshan, Jiangsu, China
- Xiaonuo Li
- Institute of Research of Musical Arts, Shanghai Conservatory of Music, Shanghai, China
- Zhaoxin Wang
- Key Laboratory of Brain Functional Genomics, Ministry of Education and Shanghai, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Shanghai Changning Mental Health Center, Shanghai, China
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, China
3
Skerritt-Davis B, Elhilali M. Neural Encoding of Auditory Statistics. J Neurosci 2021; 41:6726-6739. PMID: 34193552; PMCID: PMC8336711; DOI: 10.1523/jneurosci.1887-20.2021. Received 07/20/2020; revised 05/19/2021; accepted 05/26/2021.
Abstract
The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks, together with global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near-identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences.
SIGNIFICANCE STATEMENT: The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show that the brain multiplexes two representations, in which local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences.
4
Lacey S, Nguyen J, Schneider P, Sathian K. Crossmodal Visuospatial Effects on Auditory Perception of Musical Contour. Multisens Res 2020; 34:113-127. PMID: 33706275; DOI: 10.1163/22134808-bja10034. Received 04/05/2019; accepted 07/08/2020.
Abstract
The crossmodal correspondence between auditory pitch and visuospatial elevation (in which high- and low-pitched tones are associated with high and low spatial elevation respectively) has been proposed as the basis for Western musical notation. One implication of this is that music perception engages visuospatial processes and may not be exclusively auditory. Here, we investigated how music perception is influenced by concurrent visual stimuli. Participants listened to unfamiliar five-note musical phrases with four kinds of pitch contour (rising, falling, rising-falling, or falling-rising), accompanied by incidental visual contours that were either congruent (e.g., auditory rising/visual rising) or incongruent (e.g., auditory rising/visual falling) and judged whether the final note of the musical phrase was higher or lower in pitch than the first. Response times for the auditory judgment were significantly slower for incongruent compared to congruent trials, i.e., there was a congruency effect, even though the visual contours were incidental to the auditory task. These results suggest that music perception, although generally regarded as an auditory experience, may actually be multisensory in nature.
Collapse
Affiliation(s)
- Simon Lacey
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA; Department of Neural and Behavioral Sciences, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- James Nguyen
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Peter Schneider
- Department of Neuroradiology, Heidelberg Medical School, Heidelberg, Germany; Department of Neurology, Heidelberg Medical School, Heidelberg, Germany
- K Sathian
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA; Department of Neural and Behavioral Sciences, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA; Department of Psychology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
6
The Effect of Memory in Inducing Pleasant Emotions with Musical and Pictorial Stimuli. Sci Rep 2018; 8:17638. PMID: 30518885; PMCID: PMC6281742; DOI: 10.1038/s41598-018-35899-y. Received 10/31/2017; accepted 11/07/2018.
Abstract
Music is known to evoke emotions through a range of mechanisms, but empirical investigation into the mechanisms underlying different emotions is sparse. This study investigated how affective experiences of music and pictures vary when induced by personal memories versus mere stimulus features. Prior to the experiment, participants were asked to select eight types of stimuli according to distinct criteria concerning the emotion-induction mechanism and valence. In the experiment, participants (N = 30) evaluated their affective experiences with the self-chosen material, and EEG was recorded throughout the session. The results showed interaction effects of mechanism (memory vs. stimulus features), emotional valence of the stimulus (pleasant vs. unpleasant), and stimulus modality (music vs. pictures). While effects were mainly similar for music and pictures, the findings suggest that when personal memories were involved, stronger positive emotions were experienced with music, even when the music itself was experienced as unpleasant. Memory generally enhanced social emotions, specifically in pleasant conditions. As for sadness and melancholia, stimulus features alone did not evoke negative experiences; however, these emotions increased strongly with the involvement of memory, particularly in the unpleasant-music condition. Analysis of the EEG data corroborated these findings by relating frontomedial theta activity to memory-evoking material.