1. Li X, Cai S, Chen Y, Tian X, Wang A. Enhancement of visual dominance effects at the response level in children with attention-deficit/hyperactivity disorder. J Exp Child Psychol 2024;242:105897. PMID: 38461557. DOI: 10.1016/j.jecp.2024.105897.
Abstract
Previous studies have widely demonstrated that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in conflict control tasks. However, there is limited evidence regarding the performance of children with ADHD in cross-modal conflict processing tasks. The current study investigated whether poor conflict control in children with ADHD affects sensory dominance effects at different levels of information processing under the influence of visual similarity. A total of 82 children aged 7 to 14 years were recruited: 41 children with ADHD and 41 age- and sex-matched typically developing (TD) children. We used the 2:1 mapping paradigm to separate levels of conflict, dividing the congruency of the audiovisual stimuli into three conditions. In congruent (C) trials, the target and distractor stimuli were identical, and the bimodal stimuli corresponded to the same response key. In PRIC trials, the distractor differed from the target and did not correspond to any response key. In RIC trials, the distractor differed from the target, and the bimodal stimuli corresponded to different response keys. This design explicitly separates cross-modal conflict into a preresponse level (PRIC > C), corresponding to the encoding process, and a response level (RIC > PRIC), corresponding to the response selection process. Our results suggest that, regardless of group, auditory distractors caused more interference during visual processing than visual distractors caused during auditory processing (i.e., typical auditory dominance) at the preresponse level. At the response level, however, visual dominance effects were observed in the ADHD group but not in the TD group. A possible explanation is that increased interference due to visual similarity made it more difficult for children with ADHD to control conflict when simultaneously confronted with incongruent visual and auditory inputs. The current study highlights how children with ADHD process cross-modal conflicts at multiple levels of information processing, thereby shedding light on the mechanisms underlying ADHD.
Affiliation(s)
- Xin Li: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Shizhong Cai: Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Yan Chen: Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Xiaoming Tian: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Suzhou University of Science and Technology, Suzhou 215011, China
- Aijun Wang: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
2. Hashim S, Küssner MB, Weinreich A, Omigie D. The neuro-oscillatory profiles of static and dynamic music-induced visual imagery. Int J Psychophysiol 2024;199:112309. PMID: 38242363. DOI: 10.1016/j.ijpsycho.2024.112309.
Abstract
Visual imagery, i.e., seeing in the absence of the corresponding retinal input, has been linked to visual and motor processing areas of the brain. Music listening provides an ideal vehicle for exploring the neural correlates of visual imagery because it reliably induces a broad variety of content, ranging from abstract shapes to dynamic scenes. Forty-two participants listened with closed eyes to twenty-four excerpts of music while a 15-channel EEG was recorded and, after each excerpt, rated the extent to which they experienced static and dynamic visual imagery. Our results show that both static and dynamic imagery were associated with posterior alpha suppression (especially in lower alpha) early in music listening, while static imagery was associated with an additional alpha enhancement later in the listening experience. In the beta band, static imagery was associated with beta enhancement, whereas dynamic imagery elicited beta suppression followed by enhancement. We also observed a positive association, early in the listening experience, between gamma power and dynamic imagery ratings that was not present for static imagery ratings. Finally, we offer evidence that musical training may selectively drive the effects found for static and dynamic imagery in the alpha, beta, and gamma bands. Taken together, our results show the promise of music listening as an effective stimulus for examining the neural correlates of visual imagery and its contents. Our study also highlights the relevance of future work studying the temporal dynamics of music-induced visual imagery.
Affiliation(s)
- Sarah Hashim: Department of Psychology, Goldsmiths, University of London, United Kingdom
- Mats B Küssner: Department of Psychology, Goldsmiths, University of London, United Kingdom; Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Germany
- André Weinreich: Department of Psychology, BSP Business & Law School Berlin, Germany
- Diana Omigie: Department of Psychology, Goldsmiths, University of London, United Kingdom
3. Weng Y, Rong Y, Peng G. The development of audiovisual speech perception in Mandarin-speaking children: Evidence from the McGurk paradigm. Child Dev 2024;95:750-765. PMID: 37843038. DOI: 10.1111/cdev.14022.
Abstract
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study of Mandarin-speaking 3- to 4-year-olds, 5- to 6-year-olds, 7- to 8-year-olds, and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. In identifying congruent stimuli, 3- to 4-year-olds underperformed the older groups, whose performances were comparable to one another. For incongruent stimuli, a developmental shift was observed: 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses than the older groups. With increasing auditory noise, the difference between children and adults widened in identifying congruent stimuli but narrowed in perceiving incongruent ones. The findings regarding noise effects agree with the statistically optimal hypothesis.
Affiliation(s)
- Yi Weng: Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yicheng Rong: Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Gang Peng: Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
4. Li L, Ishida K, Mizuhara K, Barry RJ, Nittono H. Effects of the cardiac cycle on auditory processing: A preregistered study on mismatch negativity. Psychophysiology 2024;61:e14506. PMID: 38149745. DOI: 10.1111/psyp.14506.
Abstract
The systolic and diastolic phases of the cardiac cycle are known to affect perception and cognition differently: higher-order processing tends to be facilitated at systole, whereas sensory processing of external stimuli tends to be impaired at systole compared to diastole. The current study examined whether the cardiac cycle affects auditory deviance detection, as reflected in the mismatch negativity (MMN) of the event-related brain potential (ERP). We recorded the intensity deviance response to deviant tones (70 dB) presented among standard tones (60 or 80 dB, depending on the block) and calculated the MMN by subtracting standard ERP waveforms from deviant ERP waveforms. We also assessed intensity-dependent N1 and P2 amplitude changes by subtracting ERPs elicited by soft standard tones (60 dB) from ERPs elicited by loud standard tones (80 dB). These subtraction methods eliminate phase-locked cardiac-related electric artifacts that overlap auditory ERPs. The endogenous MMN was expected to be larger at systole, reflecting facilitated memory-based auditory deviance detection, whereas the exogenous N1 and P2 were expected to be smaller at systole, reflecting impaired exteroceptive sensory processing. However, after the elimination of cardiac-related artifacts, there were no significant differences between systole and diastole in any ERP component. The intensity-dependent N1 and P2 amplitude changes were not obvious in either cardiac phase, probably because of the short interstimulus intervals. The lack of a cardiac phase effect on MMN amplitude suggests that preattentive auditory processing may not be affected by bodily signals from the heart.
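The subtraction logic this abstract describes can be sketched numerically. The snippet below is a hypothetical illustration with synthetic data (array sizes and amplitudes are invented, not the study's analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_time = 300  # samples in one post-stimulus epoch (illustrative)

# Synthetic single-trial ERPs (trials x timepoints); deviants carry an
# extra negativity that the subtraction should isolate.
erp_standard = rng.normal(0.0, 1.0, size=(200, n_time))
erp_deviant = rng.normal(-0.5, 1.0, size=(40, n_time))

# MMN: average deviant waveform minus average standard waveform.
# Any artifact that is phase-locked identically in both conditions
# (e.g., cardiac-related potentials) cancels in this difference.
mmn = erp_deviant.mean(axis=0) - erp_standard.mean(axis=0)

print(mmn.shape)  # one difference waveform, (300,)
```

The intensity-dependent N1/P2 measure in the study works the same way, subtracting the soft-standard average from the loud-standard average instead.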
Affiliation(s)
- Lingjun Li: Graduate School of Human Sciences, Osaka University, Osaka, Japan
- Kai Ishida: Graduate School of Human Sciences, Osaka University, Osaka, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Keita Mizuhara: Graduate School of Human Sciences, Osaka University, Osaka, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Robert J Barry: School of Psychology, Brain & Behaviour Research Institute, University of Wollongong, Wollongong, New South Wales, Australia
- Hiroshi Nittono: Graduate School of Human Sciences, Osaka University, Osaka, Japan
5. Bernal-Berdun E, Vallejo M, Sun Q, Serrano A, Gutierrez D. Modeling the Impact of Head-Body Rotations on Audio-Visual Spatial Perception for Virtual Reality Applications. IEEE Trans Vis Comput Graph 2024;30:2624-2632. PMID: 38446650. DOI: 10.1109/tvcg.2024.3372112.
Abstract
Humans perceive the world by integrating multimodal sensory feedback, including visual and auditory stimuli, which holds true in virtual reality (VR) environments. Proper synchronization of these stimuli is crucial for perceiving a coherent and immersive VR experience. In this work, we focus on the interplay between audio and vision during localization tasks involving natural head-body rotations. We explore the impact of audio-visual offsets and rotation velocities on users' directional localization acuity for various viewing modes. Using psychometric functions, we model perceptual disparities between visual and auditory cues and determine offset detection thresholds. Our findings reveal that target localization accuracy is affected by perceptual audio-visual disparities during head-body rotations, but remains consistent in the absence of stimuli-head relative motion. We then showcase the effectiveness of our approach in predicting and enhancing users' localization accuracy within realistic VR gaming applications. To provide additional support for our findings, we implement a natural VR game wherein we apply a compensatory audio-visual offset derived from our measured psychometric functions. As a result, we demonstrate a substantial improvement of up to 40% in participants' target localization accuracy. We additionally provide guidelines for content creation to ensure coherent and seamless VR experiences.
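A rough sketch of the psychometric-function approach described above, using a generic logistic form with invented parameter values (the paper's fitted functions and thresholds are not reproduced here):

```python
import numpy as np

def p_detect(offset_ms, threshold_ms, slope):
    """Logistic psychometric function: probability of detecting an
    audio-visual offset of the given size (ms). Parameters are
    illustrative, not fitted values from the study."""
    return 1.0 / (1.0 + np.exp(-slope * (np.asarray(offset_ms) - threshold_ms)))

offsets = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 300.0])
probs = p_detect(offsets, threshold_ms=150.0, slope=0.03)

# The detection threshold is the offset at which detection reaches 50%;
# a compensatory offset applied below this threshold should go
# unnoticed while improving localization.
print(probs.round(2))
```

Fitting such a curve to per-offset detection rates (e.g., by maximum likelihood) yields the offset detection thresholds the authors use to derive their compensatory offset.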
6. Chee ZJ, Chang CYM, Cheong JY, Malek FHBA, Hussain S, de Vries M, Bellato A. The effects of music and auditory stimulation on autonomic arousal, cognition and attention: A systematic review. Int J Psychophysiol 2024;199:112328. PMID: 38458383. DOI: 10.1016/j.ijpsycho.2024.112328.
Abstract
According to the arousal-mood hypothesis, changes in arousal and mood during exposure to auditory stimulation underlie its detrimental or beneficial effects on cognitive performance. Findings supporting or contradicting this hypothesis are, however, often based on subjective ratings of arousal rather than autonomic/physiological indices. To assess the arousal-mood hypothesis, we systematically reviewed 31 studies investigating cardiac, electrodermal, and pupillometry measures during exposure to different types of auditory stimulation (music, ambient noise, white noise, and binaural beats) in relation to cognitive performance. Our review suggests that the evidence on how music, noise, or binaural beats affect these measures in relation to cognitive performance is either mixed or insufficient to draw conclusions. Importantly, the evidence for or against the arousal-mood hypothesis is at best indirect, because autonomic arousal and cognitive performance are often considered separately. Future research is needed to directly evaluate the effects of auditory stimulation on autonomic arousal and cognitive performance holistically.
Affiliation(s)
- Zhong Jian Chee: School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Chern Yi Marybeth Chang: School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Mind and Neurodevelopment (MiND) Interdisciplinary Cluster, University of Nottingham Malaysia, Semenyih 43500, Malaysia
- Jean Yi Cheong: School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia
- Shahad Hussain: School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia
- Marieke de Vries: School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Mind and Neurodevelopment (MiND) Interdisciplinary Cluster, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Development and Education of Youth in Diverse Societies (DEEDS), Faculty of Social Sciences, Utrecht University, the Netherlands
- Alessio Bellato: School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Mind and Neurodevelopment (MiND) Interdisciplinary Cluster, University of Nottingham Malaysia, Semenyih 43500, Malaysia; School of Psychology, University of Southampton, Southampton SO17 1BJ, United Kingdom; Centre for Innovation in Mental Health, University of Southampton, Southampton SO17 1BJ, United Kingdom; Institute for Life Sciences, University of Southampton, United Kingdom
7. Tseng HC, Hsieh IH. Effects of absolute pitch on brain activation and functional connectivity during hearing-in-noise perception. Cortex 2024;174:1-18. PMID: 38484435. DOI: 10.1016/j.cortex.2024.02.011.
Abstract
Hearing-in-noise (HIN) ability is crucial in speech and music communication. Recent evidence suggests that absolute pitch (AP), the ability to identify isolated musical notes, is associated with HIN benefits. A theoretical account postulates a link between AP ability and neural network indices of segregation. However, how AP ability modulates the brain activation and functional connectivity underlying HIN perception remains unclear. Here we used functional magnetic resonance imaging to contrast brain responses in a sample (n = 45) comprising 15 AP musicians, 15 non-AP musicians, and 15 non-musicians perceiving Mandarin speech and melody targets under varying signal-to-noise ratios (SNRs: No-Noise, 0, -9 dB). Results reveal that AP musicians exhibited increased activation in auditory and superior frontal regions across both HIN domains (music and speech), irrespective of noise level. Notably, substantially higher sensorimotor activation was found in AP musicians when the target was music rather than speech. Furthermore, we examined AP effects on neural connectivity using psychophysiological interaction analysis with the auditory cortex as the seed region. AP musicians showed decreased functional connectivity with the sensorimotor cortex and middle frontal gyrus compared to non-AP musicians. Crucially, AP differentially affected connectivity with parietal and frontal brain regions depending on whether the HIN domain was music or speech. These findings suggest that AP plays a critical role in HIN perception, manifested by increased activation and functional independence between auditory and sensorimotor regions for perceiving music and speech streams.
Affiliation(s)
- Hung-Chen Tseng: Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- I-Hui Hsieh: Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan; Cognitive Intelligence and Precision Healthcare Center, National Central University, Taoyuan City, Taiwan
8. Benson P, Kathios N, Loui P. Predictive coding in musical anhedonia: A study of groove. PLoS One 2024;19:e0301478. PMID: 38652721. PMCID: PMC11037533. DOI: 10.1371/journal.pone.0301478.
Abstract
Groove, or the pleasurable urge to move to music, offers unique insight into the relationship between emotion and action. The predictive coding of music model posits that groove is linked to predictions of music formed over time, with stimuli of moderate complexity rated as most pleasurable and most likely to engender movement. At the same time, listeners vary in the pleasure they derive from music listening: individuals with musical anhedonia report reduced pleasure during music listening despite no impairments in music perception and no general anhedonia. Little is known about musical anhedonics' subjective experience of groove. Here we examined the relationship between groove and music reward sensitivity. Participants (n = 287) heard drum breaks that varied in perceived complexity and rated each for pleasure and wanting to move. Musical anhedonics (n = 13) gave significantly lower ratings than controls (n = 13) matched on music perception abilities and general anhedonia. However, both groups demonstrated the classic inverted-U relationship between stimulus complexity and ratings of pleasure and wanting to move, with ratings peaking for intermediately complex stimuli. Across our entire sample, pleasure ratings were most strongly related to music reward sensitivity for highly complex stimuli (i.e., there was an interaction between music reward sensitivity and stimulus complexity). Finally, the sensorimotor subscale of music reward was uniquely associated with wanting-to-move, but not pleasure, ratings above and beyond the five other dimensions of musical reward. Results highlight the multidimensional nature of reward sensitivity and suggest that pleasure and wanting to move are driven by overlapping but separable mechanisms.
Affiliation(s)
- Peter Benson: Dept. of Music, College of Arts, Media, and Design, Northeastern University, Boston, Massachusetts, United States of America; Dept. of Computer Science, Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, United States of America
- Nicholas Kathios: Dept. of Psychology, College of Science, Northeastern University, Boston, Massachusetts, United States of America
- Psyche Loui: Dept. of Music, College of Arts, Media, and Design, Northeastern University, Boston, Massachusetts, United States of America; Dept. of Psychology, College of Science, Northeastern University, Boston, Massachusetts, United States of America
9. Tolnai S, Weiß M, Beutelmann R, Bankstahl JP, Bovee S, Ross TL, Berding G, Klump GM. Age-Related Deficits in Binaural Hearing: Contribution of Peripheral and Central Effects. J Neurosci 2024;44:e0963222024. PMID: 38395618. PMCID: PMC11026345. DOI: 10.1523/jneurosci.0963-22.2024.
Abstract
Pure-tone audiograms often poorly predict elderly humans' ability to communicate in everyday complex acoustic scenes. Binaural processing is crucial for discriminating sound sources in such complex acoustic scenes. The compromised perception of communication signals presented above hearing threshold has been linked to both peripheral and central age-related changes in the auditory system. Investigating young and old Mongolian gerbils of both sexes, an established model for human hearing, we demonstrate age-related supra-threshold deficits in binaural hearing using behavioral, electrophysiological, anatomical, and imaging methods. Binaural processing ability was measured as the binaural masking level difference (BMLD), an established measure in human psychophysics. We tested gerbils behaviorally with "virtual headphones," recorded single-unit responses in the auditory midbrain, and evaluated gross midbrain and cortical responses using positron emission tomography (PET) imaging. Furthermore, we obtained additional measures of auditory function based on auditory brainstem responses, auditory-nerve synapse counts, and evidence for central inhibitory processing revealed by PET. The BMLD deteriorates already in middle-aged animals with normal audiometric thresholds and is even worse in old animals with hearing loss. The magnitudes of auditory brainstem response measures related to auditory-nerve function and binaural processing in the auditory brainstem also deteriorate. Furthermore, central GABAergic inhibition is affected by age. Because the number of synapses in the apical turn of the inner ear was not reduced in middle-aged animals, we conclude that peripheral synaptopathy contributes little to binaural processing deficits. Exploratory analyses suggest increased hearing thresholds, altered binaural processing in the brainstem, and changed central GABAergic inhibition as potential contributors.
Affiliation(s)
- Sandra Tolnai: Animal Physiology and Behavior Group, Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Oldenburg 26111, Germany; Cluster of Excellence "Hearing4all", Oldenburg 26111, Germany
- Mariella Weiß: Cluster of Excellence "Hearing4all", Hannover 30625, Germany; Department of Nuclear Medicine, Hannover Medical School, Hannover 30625, Germany; The Calcium Signalling Group, Department of Biochemistry and Molecular Cell Biology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Rainer Beutelmann: Animal Physiology and Behavior Group, Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Oldenburg 26111, Germany; Cluster of Excellence "Hearing4all", Oldenburg 26111, Germany
- Jens P Bankstahl: Department of Nuclear Medicine, Hannover Medical School, Hannover 30625, Germany
- Sonny Bovee: Animal Physiology and Behavior Group, Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Oldenburg 26111, Germany; Cluster of Excellence "Hearing4all", Oldenburg 26111, Germany
- Tobias L Ross: Department of Nuclear Medicine, Hannover Medical School, Hannover 30625, Germany
- Georg Berding: Cluster of Excellence "Hearing4all", Hannover 30625, Germany; Department of Nuclear Medicine, Hannover Medical School, Hannover 30625, Germany
- Georg M Klump: Animal Physiology and Behavior Group, Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Oldenburg 26111, Germany; Cluster of Excellence "Hearing4all", Oldenburg 26111, Germany
10. Becker J, Korn CW, Blank H. Pupil diameter as an indicator of sound pair familiarity after statistically structured auditory sequence. Sci Rep 2024;14:8739. PMID: 38627572. PMCID: PMC11021535. DOI: 10.1038/s41598-024-59302-1.
Abstract
Inspired by recent findings in the visual domain, we investigated whether stimulus-evoked pupil dilation reflects temporal statistical regularities in sequences of auditory stimuli. We conducted two preregistered pupillometry experiments (experiment 1, n = 30, 21 females; experiment 2, n = 31, 22 females). In both experiments, human participants listened to sequences of spoken vowels in two conditions. In the first condition, the stimuli were presented in random order; in the second, the same stimuli were presented in a sequence structured in pairs. The second experiment replicated the first with modified stimulus timing and number, and without informing participants of any sequence structure. The sound-evoked pupil dilation during a subsequent familiarity task indicated that participants learned the auditory vowel pairs of the structured condition. However, pupil diameter during the structured sequence did not differ according to the statistical regularity of the pair structure. This contrasts with similar visual studies, emphasizing the susceptibility of pupil effects during statistically structured sequences to experimental design settings in the auditory domain. In sum, our findings suggest that pupil diameter may serve as an indicator of sound pair familiarity but does not invariably respond to task-irrelevant transition probabilities of auditory sequences.
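The pair structure described above can be illustrated with a small sketch. The vowels and pairings below are hypothetical (the authors' actual stimuli are not specified here); the point is that within a pair the transition probability is 1.0, while the random condition has uniform transitions:

```python
import random

vowels = ["a", "e", "i", "o", "u", "y"]
# Hypothetical fixed pairings: the second vowel always follows the first.
pairs = [("a", "e"), ("i", "o"), ("u", "y")]
rng = random.Random(0)

def structured_stream(n_pairs):
    # Pair order is random; within-pair order is fixed, so the
    # stream carries learnable transition probabilities.
    seq = []
    for _ in range(n_pairs):
        seq.extend(rng.choice(pairs))
    return seq

def random_stream(n_items):
    # No pair structure: every vowel is drawn independently.
    return [rng.choice(vowels) for _ in range(n_items)]

s = structured_stream(10)
# In the structured stream, every "a" is immediately followed by "e".
assert all(s[i + 1] == "e" for i in range(len(s) - 1) if s[i] == "a")
```

Familiarity can then be probed by contrasting responses to learned pairs (e.g., "a"-"e") against recombined foil pairs drawn from the same vowels.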
Affiliation(s)
- Janika Becker: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Christoph W Korn: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany; Section Social Neuroscience, Department of General Psychiatry, University of Heidelberg, 69115, Heidelberg, Germany
- Helen Blank: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
11. Kania D, Romaniszyn-Kania P, Tuszy A, Bugdol M, Ledwoń D, Czak M, Turner B, Bibrowicz K, Szurmik T, Pollak A, Mitas AW. Evaluation of physiological response and synchronisation errors during synchronous and pseudosynchronous stimulation trials. Sci Rep 2024;14:8814. PMID: 38627479. PMCID: PMC11021516. DOI: 10.1038/s41598-024-59477-7.
Abstract
Rhythm perception and synchronisation is a musical ability with a neural basis, defined as the ability to perceive rhythm in music and synchronise body movements with it. The study aimed to assess synchronisation errors and physiological responses of subjects to metrorhythmic stimuli under synchronous and pseudosynchronous stimulation (synchronisation with a rhythm that appears externally controlled but is in fact produced by the subject's own tapping). Nineteen subjects without diagnosed motor disorders participated in the study. Two tests were performed, in which the electromyography signal and reaction time were recorded using the NORAXON system. In addition, physiological signals such as electrodermal activity and blood volume pulse were measured using the Empatica E4. Study 1 adapted the finger-tapping test in pseudosynchrony with a given metrorhythmic stimulus at preferred, decreasing, and increasing tempi. Study 2 involved metrorhythmic synchronisation during a heel-stomping test. Numerous statistically significant associations were found between subjects' responses and their musical education and their musical and sports activities. Most of the differentiating characteristics showed evidence of group differences in the undertaking of musical activities. Detailed analyses of synchronisation errors can contribute to developing methods that improve the rehabilitation of subjects with motor dysfunction and, in turn, to an expert system that considers personalised musical preferences.
Collapse
Affiliation(s)
- Damian Kania
- Institute of Physiotherapy and Health Sciences, The Jerzy Kukuczka Academy of Physical Education in Katowice, Mikołowska 72A, 40-065, Katowice, Poland
- Patrycja Romaniszyn-Kania
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Aleksandra Tuszy
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Monika Bugdol
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Daniel Ledwoń
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Miroslaw Czak
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Bruce Turner
- dBs Music, HE Music Faculty, 17 St Thomas St, Redcliffe, Bristol, BS1 6JS, UK
- Karol Bibrowicz
- Science and Research Center of Body Posture, College of Education and Therapy in Poznań, 61-473, Poznań, Poland
- Tomasz Szurmik
- Faculty of Arts and Educational Science, University of Silesia, ul. Bielska 62, 43-400, Cieszyn, Poland
- Anita Pollak
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Institute of Psychology, University of Silesia, ul. Grazynskiego 53, 40-126, Katowice, Poland
- Andrzej W Mitas
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
12
Tsunada J, Wang X, Eliades SJ. Multiple processes of vocal sensory-motor interaction in primate auditory cortex. Nat Commun 2024; 15:3093. [PMID: 38600118 PMCID: PMC11006904 DOI: 10.1038/s41467-024-47510-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Accepted: 04/02/2024] [Indexed: 04/12/2024] Open
Abstract
Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These result from top-down corollary discharges, relaying predictions about vocal timing and acoustics. Recent evidence suggests that such signals may comprise two distinct processes, one suppressing neural activity during vocalization and another enhancing sensitivity to sensory feedback, rather than a single mechanism. Single-neuron recordings have been unable to disambiguate these processes because motor signals overlap with sensory inputs. Here, we sought to disentangle them in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression appeared broadly regardless of frequency tuning (gating), while tonic suppression was selective for vocal frequencies and feedback (prediction). This suggests that auditory cortex is modulated by concurrent corollary discharges during vocalization, with different computational mechanisms.
Affiliation(s)
- Joji Tsunada
- Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Chinese Institute for Brain Research, Beijing, China
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Steven J Eliades
- Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC, USA
13
Puschmann S, Regev M, Fakhar K, Zatorre RJ, Thiel CM. Attention-Driven Modulation of Auditory Cortex Activity during Selective Listening in a Multispeaker Setting. J Neurosci 2024; 44:e1157232023. [PMID: 38388426 PMCID: PMC11007309 DOI: 10.1523/jneurosci.1157-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 10/30/2023] [Accepted: 11/05/2023] [Indexed: 02/24/2024] Open
Abstract
Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale is hardly affected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.
Affiliation(s)
- Sebastian Puschmann
- Biological Psychology Lab, Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Mor Regev
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg 20246, Germany
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H2V 2S9, Canada
- Christiane M Thiel
- Biological Psychology Lab, Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
14
Benjamin L, Sablé-Meyer M, Fló A, Dehaene-Lambertz G, Al Roumi F. Long-Horizon Associative Learning Explains Human Sensitivity to Statistical and Network Structures in Auditory Sequences. J Neurosci 2024; 44:e1369232024. [PMID: 38408873 PMCID: PMC10993028 DOI: 10.1523/jneurosci.1369-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Revised: 01/16/2024] [Accepted: 02/07/2024] [Indexed: 02/28/2024] Open
Abstract
Networks are a useful mathematical tool for capturing the complexity of the world. In a previous behavioral study, we showed that human adults were sensitive to the high-level network structure underlying auditory sequences, even when presented with incomplete information. Their performance was best explained by a mathematical model compatible with associative learning principles, based on the integration of the transition probabilities between adjacent and nonadjacent elements with a memory decay. In the present study, we explored the neural correlates of this hypothesis via magnetoencephalography (MEG). Participants (N = 23, 16 females) passively listened to sequences of tones organized in a sparse community network structure comprising two communities. An early difference (∼150 ms) was observed in the brain responses to tone transitions with similar transition probability but occurring either within or between communities. This result implies a rapid and automatic encoding of the sequence structure. Using time-resolved decoding, we estimated the duration and overlap of the representation of each tone. The decoding performance exhibited exponential decay, resulting in a significant overlap between the representations of successive tones. Based on this extended decay profile, we estimated a long-horizon associative learning novelty index for each transition and found a correlation of this measure with the MEG signal. Overall, our study sheds light on the neural mechanisms underlying human sensitivity to network structures and highlights the potential role of Hebbian-like mechanisms in supporting learning at various temporal scales.
Affiliation(s)
- Lucas Benjamin
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91190 Gif/Yvette, France
- Mathias Sablé-Meyer
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91190 Gif/Yvette, France
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London W1T 4JG, United Kingdom
- Ana Fló
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91190 Gif/Yvette, France
- Department of Developmental Psychology and Socialization, University of Padova, Padova 35131, Italy
- Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91190 Gif/Yvette, France
- Fosca Al Roumi
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91190 Gif/Yvette, France
15
Hu M, Bianco R, Hidalgo AR, Chait M. Concurrent Encoding of Sequence Predictability and Event-Evoked Prediction Error in Unfolding Auditory Patterns. J Neurosci 2024; 44:e1894232024. [PMID: 38350998 PMCID: PMC10993036 DOI: 10.1523/jneurosci.1894-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 02/02/2024] [Accepted: 02/06/2024] [Indexed: 03/26/2024] Open
Abstract
Human listeners possess an innate capacity to discern patterns within rapidly unfolding sensory input. Core questions guiding ongoing research focus on the mechanisms through which these representations are acquired and whether the brain prioritizes or suppresses predictable sensory signals. Previous work, using fast auditory sequences (tone-pips presented at a rate of 20 Hz), revealed sustained response effects that appear to track the dynamic predictability of the sequence. Here, we extend the investigation to slower sequences (4 Hz), permitting the isolation of responses to individual tones. Stimuli were 50 ms tone-pips, ordered into random (RND) and regular (REG; a repeating pattern of 10 frequencies) sequences. Two timing profiles were created: in "fast" sequences, tone-pips were presented in direct succession (20 Hz); in "slow" sequences, tone-pips were separated by a 200 ms silent gap (4 Hz). Naive participants (N = 22; both sexes) passively listened to these sequences, while brain responses were recorded using magnetoencephalography (MEG). Results unveiled a heightened magnitude of sustained brain responses in REG when compared to RND patterns. This effect emerged three tones after the onset of the pattern repetition, even in the context of slower sequences characterized by extended pattern durations (2,500 ms). This observation underscores the remarkable implicit sensitivity of the auditory brain to acoustic regularities. Importantly, brain responses evoked by single tones exhibited the opposite pattern: stronger responses to tones in RND than REG sequences. The demonstration of simultaneous but opposing sustained and evoked response effects reveals concurrent processes that shape the representation of unfolding auditory patterns.
Affiliation(s)
- Mingyue Hu
- Ear Institute, University College London, London WC1X 8EE, United Kingdom
- Roberta Bianco
- Ear Institute, University College London, London WC1X 8EE, United Kingdom
- Neuroscience of Perception & Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
- Maria Chait
- Ear Institute, University College London, London WC1X 8EE, United Kingdom
16
Siedenburg K, Bürgel M, Özgür E, Scheicht C, Töpken S. Vibrotactile enhancement of musical engagement. Sci Rep 2024; 14:7764. [PMID: 38565622 PMCID: PMC10987628 DOI: 10.1038/s41598-024-57961-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 03/23/2024] [Indexed: 04/04/2024] Open
Abstract
Sound is sensed by the ear but can also be felt on the skin, by means of vibrotactile stimulation. Little research has addressed the perceptual implications of vibrotactile stimulation in the realm of music. Here, we studied which perceptual dimensions of music listening are affected by vibrotactile stimulation and whether the spatial segregation of vibrations improves vibrotactile stimulation. Forty-one listeners were presented with vibrotactile stimuli via a chair's surfaces (left and right arm rests, back rest, seat) in addition to music presented over headphones. Vibrations for each surface were derived from individual tracks of the music (multi condition) or conjointly by a mono rendering, in addition to incongruent and headphones-only conditions. Listeners evaluated unknown music from popular genres according to valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi- and mono-vibration conditions robustly enhanced the nature of the musical experience compared to listening via headphones alone. Vibrotactile enhancement was strong in the latent dimension of 'musical engagement', encompassing the sense of being a part of the music, arousal, and groove. These findings highlight the potential of vibrotactile cues for creating intensive musical experiences.
Affiliation(s)
- Kai Siedenburg
- Graz University of Technology, Signal Processing and Speech Communication Laboratory, 8010, Graz, Austria
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Michel Bürgel
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Elif Özgür
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Christoph Scheicht
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Stephan Töpken
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
17
Zhou L, Xing L, Zheng C, Li S. Moving stimuli enhance beat timing and sensorimotor coupling in vision. J Exp Psychol Hum Percept Perform 2024; 50:416-429. [PMID: 38421792 DOI: 10.1037/xhp0001193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/02/2024]
Abstract
Vision has long been known for its inefficiency in beat perception and synchronization. However, this has been challenged by the finding that moving stimuli (a bouncing ball or moving bar) can significantly improve visual beat synchronization. The present study examined two possible mechanisms for this phenomenon: whether visual motion facilitates temporal processing or promotes sensorimotor coupling. Instead of a single visual object (such as a ball or bar), random-dot kinematograms (RDKs) were used to construct visual motion sequences to avoid confounding factors, such as changes in trajectory and velocity. Experiment 1 showed that RDKs improved beat-timing discrimination compared with visual flashes, but auditory tones were still superior to RDKs. In Experiment 2, synchronized movements improved auditory-tone beat timing but impaired visual-flash beat timing, with no effect on RDK beat timing. Experiment 3 indicated that the regression slope of the phase correction response in RDKs was higher than that in visual flashes but still lower than that in auditory tones. The results showed that moving stimuli enhance both temporal processing (Experiment 1) and sensorimotor coupling (Experiments 2 and 3) in vision, but to a lesser degree, with audition retaining an advantage. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliation(s)
- Liang Zhou
- Department of Psychology, Shandong Normal University
- Lianzi Xing
- Department of Psychology, Shandong Normal University
- Chenhao Zheng
- Department of Psychology, Shandong Normal University
- Shouxin Li
- Department of Psychology, Shandong Normal University
18
Wang L, Xin H, Buren Q, Zhang Y, Han Y, Ouyang B, Sun Z, Bao Y, Dong C. Specific rules for time and space of multisensory plasticity in the superior colliculus. Brain Res 2024; 1828:148774. [PMID: 38244758 DOI: 10.1016/j.brainres.2024.148774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 12/28/2023] [Accepted: 01/15/2024] [Indexed: 01/22/2024]
Abstract
Cat superior colliculus (SC) neurons commonly combine information from different senses, which facilitates event detection and localization. Integration in SC multisensory neurons depends on the spatial and temporal relationships between cross-modal cues. Here, we revealed a parallel process of short-term plasticity in temporal/spatial integration during adulthood that adapts multisensory integration to reliable changes in environmental conditions. Short-term experience alters the temporal preferences of SC multisensory neurons, and this short-term plasticity is limited to changes in cross-modal timing (a factor commonly induced by events at different distances from the receiver). However, this plasticity was not evident in response to changes in the cross-modal spatial configuration.
Affiliation(s)
- Linghong Wang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Hongmei Xin
- School of Humanities Education, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Qiqige Buren
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yan Zhang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yaxin Han
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Biao Ouyang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Zhe Sun
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yulong Bao
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Chao Dong
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
19
Arya P, Kolodny NH, Gobes SMH. Tracing the development of learned song preferences in the female zebra finch brain with functional magnetic resonance imaging. Dev Neurobiol 2024; 84:47-58. [PMID: 38466218 PMCID: PMC11009042 DOI: 10.1002/dneu.22934] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 01/31/2024] [Accepted: 02/07/2024] [Indexed: 03/12/2024]
Abstract
In sexually dimorphic zebra finches (Taeniopygia guttata), only males learn to sing their father's song, whereas females learn to recognize the songs of their father or mate but cannot sing themselves. Memory of learned songs is behaviorally expressed in females by preferring familiar songs over unfamiliar ones. Auditory association regions such as the caudomedial mesopallium (CMM; or caudal mesopallium) have been shown to be key nodes in a network that supports preferences for learned songs in adult females. However, much less is known about how song preferences develop during the sensitive period of learning in juvenile female zebra finches. In this study, we used blood-oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) to trace the development of a memory-based preference for the father's song in female zebra finches. Using BOLD fMRI, we found that only in adult female zebra finches with a preference for learned song over novel conspecific song, neural selectivity for the father's song was localized in the thalamus (dorsolateral nucleus of the medial thalamus; part of the anterior forebrain pathway, AFP) and in CMM. These brain regions also showed a selective response in juvenile female zebra finches, although activation was less prominent. These data reveal that neural responses in CMM, and perhaps also in the AFP, are shaped during development to support behavioral preferences for learned songs.
Affiliation(s)
- Payal Arya
- Neuroscience Department, Wellesley College, Wellesley, Massachusetts 02481, USA
- Nancy H. Kolodny
- Chemistry Department, Wellesley College, Wellesley, Massachusetts 02481, USA
- Sharon M. H. Gobes
- Neuroscience Department, Wellesley College, Wellesley, Massachusetts 02481, USA
20
Jones SD, Stewart HJ, Westermann G. A maturational frequency discrimination deficit may explain developmental language disorder. Psychol Rev 2024; 131:695-715. [PMID: 37498700 DOI: 10.1037/rev0000436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
Auditory perceptual deficits are widely observed among children with developmental language disorder (DLD). Yet, the nature of these deficits and the extent to which they explain speech and language problems remain controversial. In this study, we hypothesize that disruption to the maturation of the basilar membrane may impede the optimization of the auditory pathway from brainstem to cortex, curtailing high-resolution frequency sensitivity and the efficient spectral decomposition and encoding of natural speech. A series of computational simulations involving deep convolutional neural networks that were trained to encode, recognize, and retrieve naturalistic speech are presented to demonstrate the strength of this account. These neural networks were built on top of biologically truthful inner ear models developed to model human cochlea function, which-in the key innovation of the present study-were scheduled to mature at different rates over time. Delaying cochlea maturation qualitatively replicated the linguistic behavior and neurophysiology of individuals with language learning difficulties in a number of ways, resulting in (a) delayed language acquisition profiles, (b) lower spoken word recognition accuracy, (c) word finding and retrieval difficulties, (d) "fuzzy" and intersecting speech encodings and signatures of immature neural optimization, and (e) emergent working memory and attentional deficits. These simulations illustrate many negative cascading effects that a primary maturational frequency discrimination deficit may have on early language development and generate precise and testable hypotheses for future research into the nature and cost of auditory processing deficits in children with language learning difficulties. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
21
Simon A, Bech S, Loquet G, Østergaard J. Cortical linear encoding and decoding of sounds: Similarities and differences between naturalistic speech and music listening. Eur J Neurosci 2024; 59:2059-2074. [PMID: 38303522 DOI: 10.1111/ejn.16265] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 11/02/2023] [Accepted: 01/12/2024] [Indexed: 02/03/2024]
Abstract
Linear models are becoming increasingly popular to investigate brain activity in response to continuous and naturalistic stimuli. In the context of auditory perception, these predictive models can be 'encoding', when stimulus features are used to reconstruct brain activity, or 'decoding' when neural features are used to reconstruct the audio stimuli. These linear models are a central component of some brain-computer interfaces that can be integrated into hearing assistive devices (e.g., hearing aids). Such advanced neurotechnologies have been widely investigated when listening to speech stimuli but rarely when listening to music. Recent attempts at neural tracking of music show that the reconstruction performances are reduced compared with speech decoding. The present study investigates the performance of stimuli reconstruction and electroencephalogram prediction (decoding and encoding models) based on the cortical entrainment of temporal variations of the audio stimuli for both music and speech listening. Three hypotheses that may explain differences between speech and music stimuli reconstruction were tested to assess the importance of the speech-specific acoustic and linguistic factors. While the results obtained with encoding models suggest different underlying cortical processing between speech and music listening, no differences were found in terms of reconstruction of the stimuli or the cortical data. The results suggest that envelope-based linear modelling can be used to study both speech and music listening, despite the differences in the underlying cortical mechanisms.
Affiliation(s)
- Adèle Simon
- Artificial Intelligence and Sound, Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- Research Department, Bang & Olufsen A/S, Struer, Denmark
- Søren Bech
- Artificial Intelligence and Sound, Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- Research Department, Bang & Olufsen A/S, Struer, Denmark
- Gérard Loquet
- Department of Audiology and Speech Pathology, University of Melbourne, Melbourne, Victoria, Australia
- Jan Østergaard
- Artificial Intelligence and Sound, Department of Electronic Systems, Aalborg University, Aalborg, Denmark
22
Ooi K, Goh J, Lin HW, Ong ZT, Wong T, Watcharasupat KN, Lam B, Gan WS. Lion city soundscapes: Modified partitioning around medoids for a perceptually diverse dataset of Singaporean soundscapes. JASA Express Lett 2024; 4:047402. [PMID: 38662119 DOI: 10.1121/10.0025830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2024] [Accepted: 04/04/2024] [Indexed: 04/26/2024]
Abstract
This study presents a dataset of audio-visual soundscape recordings at 62 different locations in Singapore, initially made as full-length recordings over spans of 9-38 min. For consistency and reduction in listener fatigue in future subjective studies, one-minute excerpts were cropped from the full-length recordings. An automated method using pre-trained models for Pleasantness and Eventfulness (according to ISO 12913) in a modified partitioning around medoids algorithm was employed to generate the set of excerpts by balancing the need to encompass the perceptual space with uniformity in distribution. A validation study on the method confirmed its adherence to the intended design.
Affiliation(s)
- Kenneth Ooi
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Jessie Goh
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Hao-Weng Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Zhen-Ting Ong
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Trevor Wong
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Karn N Watcharasupat
- Music Informatics Group, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
- Bhan Lam
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Woon-Seng Gan
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
23
Venskus A. Perceptual Training as Means to Assess the Effect of Alpha Frequency on Temporal Binding Window. J Cogn Neurosci 2024; 36:706-711. [PMID: 36877055 DOI: 10.1162/jocn_a_01982] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/07/2023]
Abstract
For decades, it has been shown that alpha frequency is related to temporal binding window, and currently, such is the mainstream viewpoint [Noguchi, Y. Individual differences in beta frequency correlate with the audio-visual fusion illusion. Psychophysiology, 59, e14041, 2022; Gray, M. J., & Emmanouil, T. A. Individual alpha frequency increases during a task but is unchanged by alpha-band flicker. Psychophysiology, 57, e13480, 2020; Hirst, R. J., McGovern, D. P., Setti, A., Shams, L., & Newell, F. N. What you see is what you hear: Twenty years of research using the sound-induced flash illusion. Neuroscience & Biobehavioral Reviews, 118, 759-774, 2020; Keil, J. Double flash illusions: Current findings and future directions. Frontiers in Neuroscience, 14, 298, 2020; Migliorati, D., Zappasodi, F., Perrucci, M. G., Donno, B., Northoff, G., Romei, V., & Costantini, M. Individual alpha frequency predicts perceived visuotactile simultaneity. Journal of Cognitive Neuroscience, 32, 1-11, 2020; Keil, J., & Senkowski, D. Individual alpha frequency relates to the sound-induced flash illusion. Multisensory Research, 30, 565-578, 2017; Minami, S., & Amano, K. Illusory jitter perceived at the frequency of alpha oscillations. Current Biology, 27, 2344-2351, 2017; Cecere, R., Rees, G., & Romei, V. Individual differences in alpha frequency drive crossmodal illusory perception. Current Biology, 25, 231-235, 2015]. However, recently, this stance has been challenged [Buergers, S., & Noppeney, U. The role of alpha oscillations in temporal binding within and across the senses. Nature Human Behaviour, 6, 732-742, 2022]. Moreover, both stances appear to have their limitations regarding the reliability of results. Therefore, it is of paramount importance to develop new methodology to gain more reliable results. Perceptual training seems to be such a method that also offers significant practical implications.
24
Koehler F, Schäfer SK, Lieb K, Wessa M. The interplay between music engagement and affect: A random-intercept cross-lagged panel analysis. Emotion 2024; 24:562-573. [PMID: 37676160 DOI: 10.1037/emo0001279] [Indexed: 09/08/2023]
Abstract
Engagement with music has the capacity to influence and be influenced by affective experiences. Although cross-sectional and experimental research provides evidence that music engagement is related to higher positive and lower negative affect, few studies have investigated the bidirectional nature of this relationship over time. The present longitudinal study therefore examined the interplay between passive and active music engagement and affect using random-intercept cross-lagged panel analysis. Over 8 weeks in 2022, 428 participants regularly engaging with music completed weekly online surveys on quantitative music engagement (i.e., time spent with music listening/music making), qualitative music engagement (i.e., use of music listening/music making for mood regulation), as well as positive and negative affect. Results revealed cross-lagged associations between music engagement and negative affect, but not positive affect: regarding quantitative music engagement, more time spent with music listening (but not music making) was related to less negative affect than usual at the following measurement. Results on qualitative music engagement showed that weeks with more negative affect than usual were followed by an increased use of music listening and music making for mood regulation. Our findings emphasize the bidirectional nature of the relationship between music engagement and affect, corroborating the significant role of music engagement in affect regulation. Future research should replicate these findings with a more diverse sample regarding age, sex, ethnicity, education, and socioeconomic status. Additionally, further studies could examine individual and contextual factors and adequate measurement time points for further investigation of bidirectional affective processes involved in music engagement. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliation(s)
- Klaus Lieb
- Leibniz Institute for Resilience Research
25
Hirano Y, Nakamura I, Tamura S. Abnormal connectivity and activation during audiovisual speech perception in schizophrenia. Eur J Neurosci 2024; 59:1918-1932. [PMID: 37990611 DOI: 10.1111/ejn.16183] [Received: 06/17/2023] [Revised: 10/14/2023] [Accepted: 10/20/2023] [Indexed: 11/23/2023]
Abstract
The unconscious integration of vocal and facial cues during speech perception facilitates face-to-face communication. Recent studies have provided substantial behavioural evidence concerning impairments in audiovisual (AV) speech perception in schizophrenia. However, the specific neurophysiological mechanism underlying these deficits remains unknown. Here, we investigated activities and connectivities centered on the auditory cortex during AV speech perception in schizophrenia. Using magnetoencephalography, we recorded and analysed event-related fields in response to auditory (A: voice), visual (V: face) and AV (voice-face) stimuli in 23 schizophrenia patients (13 males) and 22 healthy controls (13 males). The functional connectivity associated with the subadditive response to AV stimulus (i.e., [AV] < [A] + [V]) was also compared between the two groups. Within the healthy control group, [AV] activity was smaller than the sum of [A] and [V] at latencies of approximately 100 ms in the posterior ramus of the lateral sulcus in only the left hemisphere, demonstrating a subadditive N1m effect. Conversely, the schizophrenia group did not show such a subadditive response. Furthermore, weaker functional connectivity from the posterior ramus of the lateral sulcus of the left hemisphere to the fusiform gyrus of the right hemisphere was observed in schizophrenia. Notably, this weakened connectivity was associated with the severity of negative symptoms. These results demonstrate abnormalities in connectivity between speech- and face-related cortical areas in schizophrenia. This aberrant subadditive response and connectivity deficits for integrating speech and facial information may be the neural basis of social communication dysfunctions in schizophrenia.
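The subadditivity criterion used in this entry ([AV] < [A] + [V]) can be made concrete with a toy calculation. The sketch below uses hypothetical amplitude values, not data from the study, and the function name is illustrative only:

```python
# Toy illustration of the subadditivity criterion [AV] < [A] + [V]:
# the response to the combined audiovisual stimulus is compared against
# the sum of the two unimodal responses.

def subadditivity_index(av, a, v):
    """Difference between the AV response and the sum of the unimodal
    responses. Negative values indicate subadditive integration."""
    return av - (a + v)

# Hypothetical N1m-like amplitudes for one sensor (arbitrary units)
a_resp, v_resp, av_resp = 40.0, 15.0, 48.0

idx = subadditivity_index(av_resp, a_resp, v_resp)
print(idx)       # negative -> subadditive response
print(idx < 0)
```

In the study itself, [AV], [A], and [V] are event-related field amplitudes around 100 ms, and group-level subadditivity would be assessed statistically rather than from a single measurement as here.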
Affiliation(s)
- Yoji Hirano
- Department of Psychiatry, Division of Clinical Neuroscience, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
- Itta Nakamura
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Shunsuke Tamura
- Department of Psychiatry, Division of Clinical Neuroscience, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
26
Cary E, Pacheco D, Kaplan-Kahn E, McKernan E, Matsuba E, Prieve B, Russo N. Brain Signatures of Early and Late Neural Measures of Auditory Habituation and Discrimination in Autism and Their Relationship to Autistic Traits and Sensory Overresponsivity. J Autism Dev Disord 2024; 54:1344-1360. [PMID: 36626009 DOI: 10.1007/s10803-022-05866-8] [Accepted: 12/08/2022] [Indexed: 01/11/2023]
Abstract
Sensory differences were included in the DSM-5 criteria for autism for the first time, yet it is unclear how they relate to neural indicators of perception. We studied early brain signatures of perception and examined their relationship to sensory behaviors and autistic traits. Thirteen autistic children and 13 typically developing (TD) children matched on age and nonverbal IQ participated in a passive oddball task, during which P1 habituation and P1 and MMN discrimination responses were evoked by pure tones. Autistic children showed less neural habituation than the TD comparison group, and the MMN, but not the P1, mapped onto sensory overresponsivity. These findings highlight the significance of temporal and contextual factors in neural information processing as it relates to autistic traits and sensory behaviors.
Affiliation(s)
- Emily Cary
- Department of Psychology, Syracuse University, 430 Huntington Hall, 13244 2340, Syracuse, NY, USA
- Devon Pacheco
- Department of Communication Sciences and Disorders, Syracuse University, 621 Skytop Rd. Suite 1200, 13244, Syracuse, NY, USA
- Elizabeth Kaplan-Kahn
- Department of Psychology, Syracuse University, 430 Huntington Hall, 13244 2340, Syracuse, NY, USA
- Elizabeth McKernan
- Department of Psychology, Syracuse University, 430 Huntington Hall, 13244 2340, Syracuse, NY, USA
- Erin Matsuba
- Department of Psychology, Syracuse University, 430 Huntington Hall, 13244 2340, Syracuse, NY, USA
- Beth Prieve
- Department of Communication Sciences and Disorders, Syracuse University, 621 Skytop Rd. Suite 1200, 13244, Syracuse, NY, USA
- Natalie Russo
- Department of Psychology, Syracuse University, 430 Huntington Hall, 13244 2340, Syracuse, NY, USA
27
Oude Lohuis MN, Marchesi P, Olcese U, Pennartz CMA. Triple dissociation of visual, auditory and motor processing in mouse primary visual cortex. Nat Neurosci 2024; 27:758-771. [PMID: 38307971 DOI: 10.1038/s41593-023-01564-5] [Received: 07/15/2022] [Accepted: 12/19/2023] [Indexed: 02/04/2024]
Abstract
Primary sensory cortices respond to crossmodal stimuli; for example, auditory responses are found in primary visual cortex (V1). However, it remains unclear whether these responses reflect sensory inputs or behavioral modulation through sound-evoked body movement. We address this controversy by showing that sound-evoked activity in V1 of awake mice can be dissociated into auditory and behavioral components with distinct spatiotemporal profiles. The auditory component began at approximately 27 ms, was found in superficial and deep layers, and originated from auditory cortex. Sound-evoked orofacial movements correlated with V1 neural activity starting at approximately 80-100 ms and explained auditory frequency tuning. Visual, auditory and motor activity were expressed by different laminar profiles and largely segregated subsets of neuronal populations. During simultaneous audiovisual stimulation, visual representations remained dissociable from auditory-related and motor-related activity. This threefold dissociability of auditory, motor and visual processing is central to understanding how distinct inputs to visual cortex interact to support vision.
Affiliation(s)
- Matthijs N Oude Lohuis
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Pietro Marchesi
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Umberto Olcese
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Cyriel M A Pennartz
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
28
De Doncker W, Kuppuswamy A. Influence of Perceptual Load on Attentional Orienting in Post-Stroke Fatigue: A Study of Auditory Evoked Potentials. Neurorehabil Neural Repair 2024; 38:257-267. [PMID: 38339993 PMCID: PMC10976458 DOI: 10.1177/15459683241230030] [Indexed: 02/12/2024]
Abstract
OBJECTIVE Increasing perceptual load alters behavioral outcomes in post-stroke fatigue (PSF). While the effect of perceptual load on top-down attentional processing is known, here we investigated whether increasing perceptual load modulates bottom-up attentional processing in a fatigue-dependent manner. METHODS In this cross-sectional observational study of 29 first-time stroke survivors with no clinical depression, an auditory oddball task consisting of target, standard, and novel tones was performed under conditions of low and high perceptual load. Electroencephalography was used to measure auditory evoked potentials. Perceived effort was rated using the visual analog scale at regular intervals during the experiment. Fatigue was measured using the fatigue severity scale. The effects of fatigue and perceptual load on behavior (response time, accuracy, and effort rating) and auditory evoked potentials (amplitude and latency) were examined using mixed-model analyses of variance (ANOVAs). RESULTS Response time was prolonged with greater perceptual load and fatigue. There was no effect of load or fatigue on accuracy. Greater effort was reported with higher perceptual load in both high and low fatigue. The p300a amplitude of auditory evoked potentials (AEPs) for novel stimuli was attenuated in high fatigue with increasing load when compared to low fatigue. The latency of the p300a was longer in low fatigue with increasing load when compared to high fatigue. There were no effects on the p300b component, although the N100 was smaller in high-load conditions. INTERPRETATION High-fatigue-specific modulation of the p300a component of the AEP with increasing load is indicative of a distractor-driven alteration in the orienting response, suggestive of a compromise in bottom-up selective attention in PSF.
Affiliation(s)
- William De Doncker
- Department of Clinical and Movement Neuroscience, Institute of Neurology, UCL, London, UK
- Annapoorna Kuppuswamy
- Department of Clinical and Movement Neuroscience, Institute of Neurology, UCL, London, UK
- Department of Biomedical Sciences, University of Leeds, Leeds, UK
29
Asakura T. Subjective effects of broadband water sounds with inaudible high-frequency components. Sci Rep 2024; 14:7627. [PMID: 38561365 PMCID: PMC10984986 DOI: 10.1038/s41598-024-57749-w] [Received: 02/06/2024] [Accepted: 03/21/2024] [Indexed: 04/04/2024]
Abstract
This study aimed to investigate the effects of reproducing an ultrasonic component above 20 kHz on the subjective impressions of water sounds, using psychological and physiological measures obtained by the semantic differential method and electroencephalography (EEG), respectively. The results indicated that the ultrasonic component affected the subjective impression of the water sounds. In addition, regarding the relationship between the psychological and physiological aspects, a moderate correlation was confirmed between the EEG change rate and subjective impressions. However, no differences in this relationship were found between the conditions with and without the ultrasonic component, suggesting that ultrasound does not directly affect the relationship between subjective impressions and EEG energy at the current stage. Furthermore, the correlations calculated for the left and right channels in the occipital region differed significantly, which suggests functional asymmetry for sound perception between the right and left hemispheres.
Affiliation(s)
- Takumi Asakura
- Department of Mechanical and Aerospace Engineering, Faculty of Science and Engineering, Tokyo University of Science, Chiba, Japan
30
Mittelstadt JK, Shilling-Scrivo KV, Kanold PO. Long-term training alters response dynamics in the aging auditory cortex. Hear Res 2024; 444:108965. [PMID: 38364511 DOI: 10.1016/j.heares.2024.108965] [Received: 11/27/2023] [Revised: 01/16/2024] [Accepted: 01/20/2024] [Indexed: 02/18/2024]
Abstract
Age-related auditory dysfunction, presbycusis, is caused in part by functional changes in the auditory cortex (ACtx), such as altered response dynamics and increased population correlations. Given that cortical function can be altered by training, we tested whether performing auditory tasks might benefit auditory function in old age. We examined this by training adult mice on a low-effort tone-detection task for at least six months and then investigating functional responses in ACtx at an older age (∼18 months). Task performance remained stable well into old age. Comparing the sound-evoked responses of thousands of ACtx neurons using in vivo 2-photon Ca2+ imaging, we found that many aspects of youthful neuronal activity, including low activity correlations, lower neural excitability, and a greater proportion of suppressed responses, were preserved in trained old animals compared with passively exposed old animals. Thus, consistent training on a low-effort task can mitigate age-related functional changes in ACtx and may preserve many aspects of auditory function.
Affiliation(s)
- Jonah K Mittelstadt
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA; Department of Biology, University of Maryland, College Park, MD 20742, USA
- Kelson V Shilling-Scrivo
- Department of Biology, University of Maryland, College Park, MD 20742, USA; Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, MD 21230, USA
- Patrick O Kanold
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA; Department of Biology, University of Maryland, College Park, MD 20742, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21205, USA
31
Stodt B, Neudek D, Getzmann S, Wascher E, Martin R. Comparing auditory distance perception in real and virtual environments and the role of the loudness cue: A study based on event-related potentials. Hear Res 2024; 444:108968. [PMID: 38350176 DOI: 10.1016/j.heares.2024.108968] [Received: 06/30/2023] [Revised: 01/12/2024] [Accepted: 02/02/2024] [Indexed: 02/15/2024]
Abstract
The perception of the distance to a sound source is relevant in many everyday situations, not only in real spaces, but also in virtual reality (VR) environments. Where real rooms often reach their limits, VR offers far-reaching possibilities to simulate a wide range of acoustic scenarios. However, in virtual room acoustics a plausible reproduction of distance-related cues can be challenging. In the present study, we compared the detection of changes of the distance to a sound source and its neurocognitive correlates in a real and a virtual reverberant environment, using an active auditory oddball paradigm and EEG measures. The main goal was to test whether the experiments in the virtual and real environments produced equivalent behavioral and EEG results. Three loudspeakers were placed at egocentric distances of 2 m (near), 4 m (center), and 8 m (far) in front of the participants (N = 20), each 66 cm below their ear level. Sequences of 500 ms noise stimuli were presented either from the center position (standards, 80 % of trials) or from the near or far position (targets, 10 % each). The participants had to indicate a target position via a joystick response ("near" or "far"). Sounds were emitted either by real loudspeakers in the real environment or rendered and played back for the corresponding positions via headphones in the virtual environment. In addition, within both environments, loudness of the auditory stimuli was either unaltered (natural loudness) or the loudness cue was manipulated, so that all three loudspeakers were perceived as equally loud at the listener's position (matched loudness). The EEG analysis focused on the mismatch negativity (MMN), P3a, and P3b as correlates of deviance detection, attentional orientation, and context-updating/stimulus evaluation, respectively. Overall, behavioral data showed that detection of the target positions was reduced within the virtual environment, especially when loudness was matched. Except for slight latency shifts in the virtual environment, EEG analysis indicated comparable patterns within both environments and independent of loudness settings. Thus, while the neurocognitive processing of changes in distance appears to be similar in virtual and real spaces, a proper representation of loudness appears to be crucial to achieve good task performance in virtual acoustic environments.
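The "matched loudness" manipulation in this entry removes the level cue that normally covaries with source distance. As a minimal sketch, assuming an idealized free-field point source (an assumption not stated in the abstract; the study used reverberant environments), sound level falls by 20·log10(d/d_ref) dB, i.e., about 6 dB per doubling of distance:

```python
import math

def level_drop_db(distance_m, ref_distance_m=2.0):
    """Free-field (inverse-square) level attenuation relative to a reference
    distance: 20*log10(d/d_ref). A point-source idealization; real rooms add
    reverberation on top of this."""
    return 20.0 * math.log10(distance_m / ref_distance_m)

# Loudspeaker distances used in the study: near 2 m, center 4 m, far 8 m
for d in (2.0, 4.0, 8.0):
    print(d, "m ->", round(level_drop_db(d), 1), "dB")
```

Matching loudness across the 2 m, 4 m, and 8 m positions thus compensates for roughly 6 and 12 dB of natural attenuation, leaving listeners to rely on the remaining distance cues (e.g., the direct-to-reverberant energy ratio).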
Affiliation(s)
- Benjamin Stodt
- Leibniz Research Centre for Working Environment and Human Factors at the TU Dortmund (IfADo), Ardeystraße 67, Dortmund 44139, Germany
- Daniel Neudek
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Universitätsstraße 150, Bochum 44780, Germany
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors at the TU Dortmund (IfADo), Ardeystraße 67, Dortmund 44139, Germany
- Edmund Wascher
- Leibniz Research Centre for Working Environment and Human Factors at the TU Dortmund (IfADo), Ardeystraße 67, Dortmund 44139, Germany
- Rainer Martin
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Universitätsstraße 150, Bochum 44780, Germany
32
Morandell K, Yin A, Triana Del Rio R, Schneider DM. Movement-Related Modulation in Mouse Auditory Cortex Is Widespread Yet Locally Diverse. J Neurosci 2024; 44:e1227232024. [PMID: 38286628 PMCID: PMC10941236 DOI: 10.1523/jneurosci.1227-23.2024] [Received: 06/01/2023] [Revised: 12/12/2023] [Accepted: 01/15/2024] [Indexed: 01/31/2024]
Abstract
Neurons in the mouse auditory cortex are strongly influenced by behavior, including both suppression and enhancement of sound-evoked responses during movement. The mouse auditory cortex comprises multiple fields with different roles in sound processing and distinct connectivity to movement-related centers of the brain. Here, we asked whether movement-related modulation in male mice might differ across auditory cortical fields, thereby contributing to the heterogeneity of movement-related modulation at the single-cell level. We used wide-field calcium imaging to identify distinct cortical fields and cellular-resolution two-photon calcium imaging to visualize the activity of layer 2/3 excitatory neurons within each field. We measured each neuron's responses to three sound categories (pure tones, chirps, and amplitude-modulated white noise) as mice rested and ran on a non-motorized treadmill. We found that individual neurons in each cortical field typically respond to just one sound category. Some neurons are only active during rest and others during locomotion, and those that are responsive across conditions retain their sound-category tuning. The effects of locomotion on sound-evoked responses vary at the single-cell level, with both suppression and enhancement of neural responses, and the net modulatory effect of locomotion is largely conserved across cortical fields. Movement-related modulation in auditory cortex also reflects more complex behavioral patterns, including instantaneous running speed and nonlocomotor movements such as grooming and postural adjustments, with similar patterns seen across all auditory cortical fields. Our findings underscore the complexity of movement-related modulation throughout the mouse auditory cortex and indicate that movement-related modulation is a widespread phenomenon.
Affiliation(s)
- Karin Morandell
- Center for Neural Science, New York University, New York, New York 10012
- Audrey Yin
- Center for Neural Science, New York University, New York, New York 10012
- David M Schneider
- Center for Neural Science, New York University, New York, New York 10012
33
Zhao S, Contadini-Wright C, Chait M. Cross-Modal Interactions Between Auditory Attention and Oculomotor Control. J Neurosci 2024; 44:e1286232024. [PMID: 38331581 PMCID: PMC10941240 DOI: 10.1523/jneurosci.1286-23.2024] [Received: 07/11/2023] [Revised: 01/28/2024] [Accepted: 01/31/2024] [Indexed: 02/10/2024]
Abstract
Microsaccades are small, involuntary eye movements that occur during fixation. Their role is debated, with recent hypotheses proposing a contribution to automatic scene sampling. Microsaccadic inhibition (MSI) refers to the abrupt suppression of microsaccades, typically evoked within 0.1 s after new stimulus onset. The functional significance and neural underpinnings of MSI are subjects of ongoing research. It has been suggested that MSI is a component of the brain's attentional re-orienting network, which facilitates the allocation of attention to new environmental occurrences by reducing disruptions or shifts in gaze that could interfere with processing. The extent to which MSI is reflexive or influenced by top-down mechanisms remains debated. We developed a task that examines the impact of auditory top-down attention on MSI, allowing us to disentangle ocular dynamics from visual sensory processing. Participants (N = 24 and 27; both sexes) listened to two simultaneous streams of tones and were instructed to attend to one stream while detecting specific task "targets." We quantified MSI in response to occasional task-irrelevant events presented in both the attended and unattended streams (frequency steps in Experiment 1, omissions in Experiment 2). The results show that the initial stages of MSI are not affected by auditory attention. However, later stages (∼0.25 s post-event onset), affecting the extent and duration of the inhibition, are enhanced for sounds in the attended stream compared with the unattended stream. These findings provide converging evidence for the reflexive nature of early MSI stages and robustly demonstrate the involvement of auditory attention in modulating the later stages.
Affiliation(s)
- Sijia Zhao
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Maria Chait
- Ear Institute, University College London, London WC1X 8EE, United Kingdom
34
Dureux A, Zanini A, Everling S. Mapping of facial and vocal processing in common marmosets with ultra-high field fMRI. Commun Biol 2024; 7:317. [PMID: 38480875 PMCID: PMC10937914 DOI: 10.1038/s42003-024-06002-1] [Received: 09/11/2023] [Accepted: 03/01/2024] [Indexed: 03/17/2024]
Abstract
Primate communication relies on multimodal cues, such as vision and audition, to facilitate the exchange of intentions, enable social interactions, avoid predators, and foster group cohesion during daily activities. Understanding the integration of facial and vocal signals is pivotal to comprehend social interaction. In this study, we acquire whole-brain ultra-high field (9.4 T) fMRI data from awake marmosets (Callithrix jacchus) to explore brain responses to unimodal and combined facial and vocal stimuli. Our findings reveal that the multisensory condition not only intensifies activations in the occipito-temporal face patches and auditory voice patches but also engages a more extensive network that includes additional parietal, prefrontal and cingulate areas, compared to the summed responses of the unimodal conditions. By uncovering the neural network underlying multisensory audiovisual integration in marmosets, this study highlights the efficiency and adaptability of the marmoset brain in processing facial and vocal social signals, providing significant insights into primate social communication.
Affiliation(s)
- Audrey Dureux
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, N6A 5K8, Canada
- Alessandro Zanini
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, N6A 5K8, Canada
- Stefan Everling
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, N6A 5K8, Canada
- Department of Physiology and Pharmacology, University of Western Ontario, London, ON, N6A 5K8, Canada
35
Cody P, Kumar M, Tzounopoulos T. Cortical Zinc Signaling Is Necessary for Changes in Mouse Pupil Diameter That Are Evoked by Background Sounds with Different Contrasts. J Neurosci 2024; 44:e0939232024. [PMID: 38242698 PMCID: PMC10941062 DOI: 10.1523/jneurosci.0939-23.2024] [Received: 05/22/2023] [Revised: 12/29/2023] [Accepted: 01/14/2024] [Indexed: 01/21/2024]
Abstract
Luminance-independent changes in pupil diameter (PD) during wakefulness influence and are influenced by neuromodulatory, neuronal, and behavioral responses. However, it is unclear whether changes in neuromodulatory activity in a specific brain area are necessary for the associated changes in PD or whether some different mechanisms cause parallel fluctuations in both PD and neuromodulation. To answer this question, we simultaneously recorded PD and cortical neuronal activity in male and female mice. Namely, we measured PD and neuronal activity during adaptation to sound contrast, which is a well-described adaptation conserved in many species and brain areas. In the primary auditory cortex (A1), increases in the variability of sound level (contrast) induce a decrease in the slope of the neuronal input-output relationship, neuronal gain, which depends on cortical neuromodulatory zinc signaling. We found a previously unknown modulation of PD by changes in background sensory context: high stimulus contrast sounds evoke larger increases in evoked PD compared with low-contrast sounds. To explore whether these changes in evoked PD are controlled by cortical neuromodulatory zinc signaling, we imaged single-cell neural activity in A1, manipulated zinc signaling in the cortex, and assessed PD in the same awake mouse. We found that cortical synaptic zinc signaling is necessary for increases in PD during high-contrast background sounds compared with low-contrast sounds. This finding advances our knowledge about how cortical neuromodulatory activity affects PD changes and thus advances our understanding of the brain states, circuits, and neuromodulatory mechanisms that can be inferred from pupil size fluctuations.
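"Gain" in this entry is the slope of the neuronal input-output (e.g., firing rate versus sound level) function, which contrast adaptation reduces under high-contrast backgrounds. A minimal sketch of estimating gain as a least-squares slope, using hypothetical rate-level data rather than values from the study:

```python
def gain(levels_db, rates_hz):
    """Least-squares slope of a rate-level function: an operational 'gain'
    in Hz per dB. Inputs here are hypothetical sound levels and firing rates."""
    n = len(levels_db)
    mx = sum(levels_db) / n
    my = sum(rates_hz) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(levels_db, rates_hz))
    var = sum((x - mx) ** 2 for x in levels_db)
    return cov / var

levels = [40, 50, 60, 70, 80]                 # sound levels (dB SPL)
low_contrast_rates = [5, 15, 25, 35, 45]      # steeper slope: higher gain
high_contrast_rates = [10, 15, 20, 25, 30]    # shallower slope: gain reduced

print(gain(levels, low_contrast_rates))   # 1.0 Hz/dB
print(gain(levels, high_contrast_rates))  # 0.5 Hz/dB
```

The reduction from 1.0 to 0.5 Hz/dB in this toy example mimics the direction of the contrast-gain adaptation described in the abstract; the study's claim is that this adaptation in A1 depends on cortical zinc signaling.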
Collapse
Affiliation(s)
- Patrick Cody
- Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Manoj Kumar
- Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Thanos Tzounopoulos
- Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
36
Xu R, Walsh EG, Watanabe T, Sasaki Y. Shift in excitation-inhibition balance underlies perceptual learning of temporal discrimination. Neuropsychologia 2024; 195:108814. [PMID: 38316210] [PMCID: PMC10923091] [DOI: 10.1016/j.neuropsychologia.2024.108814]
Abstract
Temporal perceptual learning (TPL) constitutes a unique and profound demonstration of neural plasticity within the brain. Our understanding of the neurometabolic changes associated with TPL, on the other hand, has been limited in part by the use of traditional fMRI approaches. Since plasticity in the visual cortex has been shown to underlie perceptual learning of visual information, we tested the hypothesis that TPL of an auditory interval involves a similar change in plasticity of the auditory pathway and, if so, whether these changes take place in a lower-order, sensory-specific brain area such as the primary auditory cortex (A1) or a higher-order, modality-independent brain area such as the inferior parietal cortex (IPC). This distinction informs both the mechanisms underlying perceptual learning and the locus of change as it relates to TPL. In the present study, we combined proton magnetic resonance spectroscopy (MRS) with psychophysical measures to provide the first evidence of changes in neurometabolic processing following 5 days of temporal discrimination training. We measured the excitation-to-inhibition (E/I) ratio as an index of learning in the right IPC and left A1 while participants learned an auditory two-tone discrimination task. During the first day of training, we found a significant task-related increase in the functional E/I ratio within the IPC. Although A1 exhibited the opposite pattern of neurochemical activity, this relationship did not reach statistical significance. After timing performance had reached a plateau, there were no further changes in the functional E/I ratio. These findings support the hypothesis that improvement in temporal discrimination relies on neuroplastic changes in the IPC, although it is possible that both areas work synergistically to acquire a temporal interval.
Affiliation(s)
- Rannie Xu
- Department of Cognitive, Linguistic & Psychological Sciences, United States.
- Edward G Walsh
- Department of Neuroscience, Brown University, Providence, 02912, United States
- Takeo Watanabe
- Department of Cognitive, Linguistic & Psychological Sciences, United States
- Yuka Sasaki
- Department of Cognitive, Linguistic & Psychological Sciences, United States
37
Lee IS, Kang JH, Kim J. Auditory influence on stickiness perception: an fMRI study of multisensory integration. Neuroreport 2024; 35:269-276. [PMID: 38305131] [PMCID: PMC10852036] [DOI: 10.1097/wnr.0000000000002003]
Abstract
This study explored how the human brain perceives stickiness through tactile and auditory channels, especially when presented with congruent or incongruent intensity cues. In our behavioral and functional MRI (fMRI) experiments, we presented participants with adhesive tape stimuli at two different intensities. The congruent condition provided stickiness stimuli with matching intensity cues in the auditory and tactile channels, whereas the incongruent condition involved cues of different intensities. Behavioral results showed that participants were able to distinguish between the congruent and incongruent conditions with high accuracy. Through fMRI searchlight analysis, we tested which brain regions could distinguish between the congruent and incongruent conditions and identified the superior temporal gyrus, a region known primarily for auditory processing. Interestingly, we did not observe any significant activation in regions associated with somatosensory or motor functions. This indicates that the brain dedicates more attention to auditory cues than to tactile cues, possibly because conveying the sensation of stickiness through sound is unfamiliar. Our results offer new perspectives on the complexities of multisensory integration, highlighting the subtle yet significant role of auditory processing in understanding tactile properties such as stickiness.
Affiliation(s)
- In-Seon Lee
- College of Korean Medicine, Kyung Hee University, Seoul
- Jae-Hwan Kang
- Digital Health Research Division, Korea Institute of Oriental Medicine
- Aging Convergence Research Center, Korea Research Institute of Bioscience and Biotechnology, Daejeon
- Junsuk Kim
- School of Information Convergence, Kwangwoon University, Seoul, South Korea
38
Peng F, Harper NS, Mishra AP, Auksztulewicz R, Schnupp JWH. Dissociable Roles of the Auditory Midbrain and Cortex in Processing the Statistical Features of Natural Sound Textures. J Neurosci 2024; 44:e1115232023. [PMID: 38267259] [PMCID: PMC10919253] [DOI: 10.1523/jneurosci.1115-23.2023]
Abstract
Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grain spectral and temporal statistical features.
Affiliation(s)
- Fei Peng
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 2JD, United Kingdom
- Ambika P Mishra
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Ryszard Auksztulewicz
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin 14195, Germany
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
39
Trost W, Trevor C, Fernandez N, Steiner F, Frühholz S. Live music stimulates the affective brain and emotionally entrains listeners in real time. Proc Natl Acad Sci U S A 2024; 121:e2316306121. [PMID: 38408255] [DOI: 10.1073/pnas.2316306121]
Abstract
Music is powerful in conveying emotions and triggering affective brain mechanisms. Affective brain responses in previous studies were, however, rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, in contrast, can be dynamic and adaptive and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a setup for studying emotional responses to live music in a closed-loop neurofeedback design. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time for the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared with recorded music. This included a predominance of aversive coding in the ventral striatum while listening to unpleasant music and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also engaged a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time, dynamic entrainment processes.
Affiliation(s)
- Wiebke Trost
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Caitlyn Trevor
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Natalia Fernandez
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Florence Steiner
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich 8057, Switzerland
- Department of Psychology, University of Oslo, Oslo 0373, Norway
40
Wang C, Zhao X, Tao B, Peng J, Wang H, Yu J, Jin L. Do domestic budgerigars perceive predation risk? Anim Cogn 2024; 27:8. [PMID: 38429588] [PMCID: PMC10907484] [DOI: 10.1007/s10071-024-01847-9]
Abstract
Predation risk may affect the foraging behavior of birds. However, there has been little research on the ability of domestic birds to perceive predation risk and adjust their feeding behavior accordingly. In this study, we tested whether domestic budgerigars (Melopsittacus undulatus) perceived predation risk after the presentation of specimens and sounds of sparrowhawks (Accipiter nisus), domestic cats (Felis catus), and humans, and whether this in turn influenced their feeding behavior. When exposed to visual or acoustic stimuli, budgerigars showed significantly longer latency to feed under the sparrowhawk, domestic cat, and human treatments than under control conditions. Budgerigars responded more strongly to acoustic stimuli than to visual stimuli, showing the longest latency to feed and the fewest feeding events in response to sparrowhawk calls. Moreover, budgerigars showed shorter latency to feed and more feeding events in response to human voices than to sparrowhawk or domestic cat calls. Our results suggest that domestic budgerigars may identify predation risk through visual or acoustic signals and adjust their feeding behavior accordingly.
Affiliation(s)
- Chang Wang
- Jilin Engineering Laboratory for Avian Ecology and Conservation Genetics, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jilin Provincial Key Laboratory of Animal Resource Conservation and Utilization, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Xueqi Zhao
- Jilin Engineering Laboratory for Avian Ecology and Conservation Genetics, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jilin Provincial Key Laboratory of Animal Resource Conservation and Utilization, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Baodan Tao
- Jilin Engineering Laboratory for Avian Ecology and Conservation Genetics, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jilin Provincial Key Laboratory of Animal Resource Conservation and Utilization, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jiaqi Peng
- Jilin Engineering Laboratory for Avian Ecology and Conservation Genetics, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jilin Provincial Key Laboratory of Animal Resource Conservation and Utilization, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Haitao Wang
- Jilin Engineering Laboratory for Avian Ecology and Conservation Genetics, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jilin Provincial Key Laboratory of Animal Resource Conservation and Utilization, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jiangping Yu
- Jilin Engineering Laboratory for Avian Ecology and Conservation Genetics, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jilin Provincial Key Laboratory of Animal Resource Conservation and Utilization, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Longru Jin
- Jilin Engineering Laboratory for Avian Ecology and Conservation Genetics, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
- Jilin Provincial Key Laboratory of Animal Resource Conservation and Utilization, School of Life Sciences, Northeast Normal University, Changchun, 130024, China
41
Momtaz S, Bidelman GM. Effects of Stimulus Rate and Periodicity on Auditory Cortical Entrainment to Continuous Sounds. eNeuro 2024; 11:ENEURO.0027-23.2024. [PMID: 38253583] [PMCID: PMC10913036] [DOI: 10.1523/eneuro.0027-23.2024]
Abstract
The neural mechanisms underlying the exogenous coding of, and neural entrainment to, repetitive auditory stimuli have seen a recent surge of interest. However, few studies have characterized how parametric changes in stimulus presentation alter entrained responses. We examined the degree to which the brain entrains to repeated speech (i.e., /ba/) and nonspeech (i.e., click) sounds using phase-locking value (PLV) analysis applied to multichannel human electroencephalogram (EEG) data. Passive cortico-acoustic tracking was investigated in N = 24 normal young adults using EEG source analyses that isolated neural activity stemming from both auditory temporal cortices. We parametrically manipulated the rate and periodicity of repetitive, continuous speech and click stimuli to investigate how speed and jitter in ongoing sound streams affect oscillatory entrainment. Neuronal synchronization to speech was enhanced at 4.5 Hz (the putative universal rate of speech) and showed a pattern different from that of clicks, particularly at higher rates. PLV to speech decreased with increasing jitter but remained superior to clicks. Surprisingly, PLV entrainment to clicks was invariant to periodicity manipulations. Our findings provide evidence that the brain's neural entrainment to complex sounds is enhanced and more sensitive when processing speech-like stimuli, even at the syllable level, relative to nonspeech sounds. The fact that this specialization is apparent even under passive listening suggests a priority of the auditory system for synchronizing to behaviorally relevant signals.
Affiliation(s)
- Sara Momtaz
- School of Communication Sciences & Disorders, University of Memphis, Memphis, Tennessee 38152
- Boys Town National Research Hospital, Boys Town, Nebraska 68131
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408
- Program in Neuroscience, Indiana University, Bloomington, Indiana 47405
42
Etani T, Miura A, Kawase S, Fujii S, Keller PE, Vuust P, Kudo K. A review of psychological and neuroscientific research on musical groove. Neurosci Biobehav Rev 2024; 158:105522. [PMID: 38141692] [DOI: 10.1016/j.neubiorev.2023.105522]
Abstract
When listening to music, we naturally move our bodies rhythmically to the beat, which can be pleasurable and difficult to resist. This pleasurable sensation of wanting to move the body to music has been called "groove." Following pioneering humanities research, psychological and neuroscientific studies have provided insights on associated musical features, behavioral responses, phenomenological aspects, and brain structural and functional correlates of the groove experience. Groove research has advanced the field of music science and more generally informed our understanding of bidirectional links between perception and action, and the role of the motor system in prediction. Activity in motor and reward-related brain networks during music listening is associated with the groove experience, and this neural activity is linked to temporal prediction and learning. This article reviews research on groove as a psychological phenomenon with neurophysiological correlates that link musical rhythm perception, sensorimotor prediction, and reward processing. Promising future research directions range from elucidating specific neural mechanisms to exploring clinical applications and socio-cultural implications of groove.
Affiliation(s)
- Takahide Etani
- School of Medicine, College of Medical, Pharmaceutical, and Health, Kanazawa University, Kanazawa, Japan; Graduate School of Media and Governance, Keio University, Fujisawa, Japan; Advanced Research Center for Human Sciences, Waseda University, Tokorozawa, Japan.
- Akito Miura
- Faculty of Human Sciences, Waseda University, Tokorozawa, Japan
- Satoshi Kawase
- The Faculty of Psychology, Kobe Gakuin University, Kobe, Japan
- Shinya Fujii
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Peter E Keller
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark/The Royal Academy of Music Aarhus/Aalborg, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
- Peter Vuust
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark/The Royal Academy of Music Aarhus/Aalborg, Denmark
- Kazutoshi Kudo
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
43
Dong C, Noppeney U, Wang S. Perceptual uncertainty explains activation differences between audiovisual congruent speech and McGurk stimuli. Hum Brain Mapp 2024; 45:e26653. [PMID: 38488460] [DOI: 10.1002/hbm.26653]
Abstract
Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with a facial articulation of a /ga/ (i.e., a viseme) is typically fused into an illusory 'da' percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk percept and the veridical audiovisual congruent speech percept result from probabilistic inference based on noisy sensory signals. According to these models, the inter-sensory conflict in McGurk stimuli may merely increase observers' perceptual uncertainty. This functional magnetic resonance imaging (fMRI) study presented participants (20 male and 24 female) with audiovisual congruent, McGurk (i.e., auditory /ba/ + visual /ga/), and incongruent (i.e., auditory /ga/ + visual /ba/) stimuli along with their unisensory counterparts in a syllable categorization task. Behaviorally, observers' response entropy was greater for McGurk than for congruent audiovisual stimuli. At the neural level, McGurk stimuli increased activations in a widespread neural system, extending from the inferior frontal sulci (IFS) to the pre-supplementary motor area (pre-SMA) and insulae, regions typically involved in cognitive control processes. Crucially, in line with Bayesian theories, these activation increases were fully accounted for by observers' perceptual uncertainty as measured by their response entropy. Our findings suggest that McGurk and congruent speech processing rely on shared neural mechanisms, thereby supporting the McGurk illusion as a valid measure of natural audiovisual speech perception.
Affiliation(s)
- Chenjie Dong
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Uta Noppeney
- Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Suiping Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
44
Fernández-Vargas M, Macedo-Lima M, Remage-Healey L. Acute Aromatase Inhibition Impairs Neural and Behavioral Auditory Scene Analysis in Zebra Finches. eNeuro 2024; 11:ENEURO.0423-23.2024. [PMID: 38467426] [PMCID: PMC10960633] [DOI: 10.1523/eneuro.0423-23.2024]
Abstract
Auditory perception can be significantly disrupted by noise. To discriminate sounds from noise, auditory scene analysis (ASA) extracts the functionally relevant sounds from the acoustic input. The zebra finch communicates in noisy environments. Neurons in its secondary auditory pallial cortex (caudomedial nidopallium, NCM) can encode song against a background chorus, or scene, and this capacity may aid behavioral ASA. Furthermore, song processing is modulated by the rapid synthesis of neuroestrogens when hearing conspecific song. To examine whether neuroestrogens support neural and behavioral ASA in both sexes, we retrodialyzed fadrozole (an aromatase inhibitor; FAD) and recorded in vivo awake extracellular NCM responses to songs and scenes. We found that FAD affected the neural encoding of songs by decreasing responsiveness and timing reliability in inhibitory (narrow-spiking), but not in excitatory (broad-spiking), neurons. Congruently, FAD decreased the neural encoding of songs in scenes for both cell types, particularly in females. Behaviorally, we trained birds using operant conditioning and tested their ability to detect songs in scenes after administering FAD orally or injecting it bilaterally into the NCM. Oral FAD increased response bias and decreased correct rejections in females, but not in males. FAD in the NCM did not affect performance. Thus, FAD in the NCM impaired neuronal ASA, but this did not lead to behavioral disruption, suggesting the existence of resilience or compensatory responses. Moreover, impaired performance after systemic FAD suggests the involvement of other aromatase-rich networks outside the auditory pathway in ASA. This work highlights how transient disruption of estrogen synthesis can modulate higher-order processing in an animal model of vocal communication.
Affiliation(s)
- Marcela Fernández-Vargas
- Neuroscience and Behavior Program, Center for Neuroendocrine Studies, University of Massachusetts Amherst, Amherst, Massachusetts 01003
- Matheus Macedo-Lima
- Neuroscience and Behavior Program, Center for Neuroendocrine Studies, University of Massachusetts Amherst, Amherst, Massachusetts 01003
- Luke Remage-Healey
- Neuroscience and Behavior Program, Center for Neuroendocrine Studies, University of Massachusetts Amherst, Amherst, Massachusetts 01003
45
Kawase S. Is happier music groovier? The influence of emotional characteristics of musical chord progressions on groove. Psychol Res 2024; 88:438-448. [PMID: 37615754] [PMCID: PMC10858120] [DOI: 10.1007/s00426-023-01869-x]
Abstract
Specific rhythmic patterns in music have been reported to induce an urge to move accompanied by feelings of pleasure or enjoyment, called "groove." However, it is unclear how the emotional characteristics of music (e.g., happiness or sadness) affect groove. To address this issue, I investigated the effects of the emotional characteristics of music on groove by altering the chord progressions accompanying drum breaks while independently manipulating tempo and rhythmic patterns. An online listening experiment was conducted using pieces composed by a professional composer that differed in the chord progressions, which led to happiness or sadness. Participants rated nine items on a 7-point scale, including the urge to move (i.e., groove), felt emotions, nori, and liking. The experiment found that (1) chord progressions that evoke happiness were more likely to induce groove, (2) emotional characteristics did not interact with tempo or syncopation in groove ratings, and (3) the accompaniment of drum breaks enhanced groove for both happy and sad chord progressions. Thus, musical pieces with chord progressions that induce happiness were more likely to evoke groove, namely the urge to move. This implies that the emotional characteristics of musical pieces and rhythms are crucial considerations when creating music for movement during rehabilitation, therapy, or dance.
Affiliation(s)
- Satoshi Kawase
- The Faculty of Psychology, Kobe Gakuin University, 518 Arise, Ikawadani-cho, Nishi-ku, Kobe, Hyogo, 651-2180, Japan.
46
Ristic J, Capozzi F. The role of visual and auditory information in social event segmentation. Q J Exp Psychol (Hove) 2024; 77:626-638. [PMID: 37154602] [PMCID: PMC10880416] [DOI: 10.1177/17470218231176471]
Abstract
Humans organise their social worlds into social and nonsocial events. Social event segmentation refers to the ability to parse the environmental content into social and nonsocial events or units. Here, we investigated the role that perceptual information from the visual and auditory modalities, in isolation and in conjunction, played in social event segmentation. Participants viewed a video clip depicting an interaction between two actors and marked the boundaries of social and nonsocial events. Depending on the condition, the clip at first contained only auditory or only visual information. Then, the clip was shown containing both auditory and visual information. Higher overall group consensus and response consistency in parsing the clip were found for social segmentation and when both auditory and visual information was available. Presenting the clip in the visual domain alone benefitted group agreement in social segmentation, while the inclusion of auditory information (in the audiovisual condition) also improved response consistency in nonsocial segmentation. Thus, social segmentation relies on information from the visual modality, with auditory cues contributing under ambiguous or uncertain conditions and during segmentation of nonsocial content.
Affiliation(s)
- Jelena Ristic
- Department of Psychology, McGill University, Montreal, Québec, Canada
- Francesca Capozzi
- Department of Psychology, Université du Québec à Montréal, Montreal, Québec, Canada
47
Mok BA, Viswanathan V, Borjigin A, Singh R, Kafi H, Bharadwaj HM. Web-based psychoacoustics: Hearing screening, infrastructure, and validation. Behav Res Methods 2024; 56:1433-1448. [PMID: 37326771] [PMCID: PMC10704001] [DOI: 10.3758/s13428-023-02101-9]
Abstract
Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of limited control over the acoustics and the inability to perform audiometry to confirm the normal-hearing status of participants. Here, we outline our approach to mitigating these challenges and validate our procedures by comparing web-based measurements with lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from the prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and the co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.
Affiliation(s)
- Brittany A Mok
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Vibha Viswanathan
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Agudemu Borjigin
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Ravinderjit Singh
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Homeira Kafi
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Hari M Bharadwaj
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
48
Hockley A, Malmierca MS. Auditory processing control by the medial prefrontal cortex: A review of the rodent functional organisation. Hear Res 2024; 443:108954. [PMID: 38271895] [DOI: 10.1016/j.heares.2024.108954] [Received: 12/04/2023] [Revised: 01/04/2024] [Accepted: 01/11/2024] [Indexed: 01/27/2024]
Abstract
Afferent inputs from the cochlea transmit auditory information to the central nervous system, where it is processed and passed up the hierarchy, ending in the auditory cortex. Along these brain pathways, spectral and temporal features of sounds are processed and sent to the cortex for perception. Many mechanisms also modulate these inputs, a major source of modulation being the medial prefrontal cortex (mPFC). Neurons of the rodent mPFC receive input from the auditory cortex and from other regions such as the thalamus, hippocampus, and basal forebrain, allowing them to encode higher-order information about sounds such as context, predictability, and valence. The mPFC in turn exerts control over auditory perception via top-down modulation of the central auditory pathway, altering the perception of, and responses to, sounds. The result is a higher-order control of auditory processing that supports such characteristics as deviance detection, attention, avoidance, and fear conditioning. This review summarises the connections between the mPFC and the primary auditory pathway, the responses of mPFC neurons to auditory stimuli, how mPFC outputs shape the perception of sounds, and how changes to these systems during hearing loss and tinnitus may contribute to those conditions.
Affiliation(s)
- A Hockley
- Cognitive and Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León, University of Salamanca, Salamanca, Spain; Department of Cell Biology and Pathology, University of Salamanca, Salamanca, Spain; Institute for Biomedical Research of Salamanca, Salamanca, Spain
- M S Malmierca
- Cognitive and Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León, University of Salamanca, Salamanca, Spain; Department of Cell Biology and Pathology, University of Salamanca, Salamanca, Spain; Institute for Biomedical Research of Salamanca, Salamanca, Spain
49
Wetekam J, Hechavarría J, López-Jury L, González-Palomares E, Kössl M. Deviance Detection to Natural Stimuli in Population Responses of the Brainstem of Bats. J Neurosci 2024; 44:e1588232023. [PMID: 38262723] [PMCID: PMC10904087] [DOI: 10.1523/jneurosci.1588-23.2023] [Received: 08/22/2023] [Revised: 11/09/2023] [Accepted: 11/29/2023] [Indexed: 01/25/2024] Open
Abstract
Deviance detection describes an increase in neural response strength caused by a stimulus with a low probability of occurrence. This ubiquitous phenomenon has been reported in humans and multiple other species, from subthalamic areas to the auditory cortex. Cortical deviance detection has been well characterized by a range of studies using a variety of different stimuli, from artificial to natural, with and without behavioral relevance. This allowed the identification of a broad variety of regularity deviations that are detected by the cortex. In contrast, subcortical deviance detection has been studied only with simple stimuli that are not meaningful to the subject. Here, we aim to bridge this gap by using noninvasively recorded auditory brainstem responses (ABRs) to investigate deviance detection at the population level in the lower stations of the auditory system of a highly vocal species: the bat Carollia perspicillata (of either sex). Our present approach uses behaviorally relevant vocalization stimuli that are similar to the animals' natural soundscape. We show that deviance detection in ABRs is significantly stronger for echolocation pulses than for social communication calls or artificial sounds, indicating that subthalamic deviance detection depends on the behavioral meaning of a stimulus. Additionally, complex physical sound features like frequency and amplitude modulation affected the strength of deviance detection in the ABR. In summary, our results suggest that the brain can detect different types of deviants as early as the brainstem, showing that subthalamic brain structures exhibit more advanced forms of deviance detection than previously known.
Affiliation(s)
- Johannes Wetekam
- Department of Neurobiology and Biological Sensors, Institute of Cell Biology and Neuroscience, Goethe University, 60439 Frankfurt am Main, Germany
- Julio Hechavarría
- Department of Neurobiology and Biological Sensors, Institute of Cell Biology and Neuroscience, Goethe University, 60439 Frankfurt am Main, Germany
- Luciana López-Jury
- Department of Neurobiology and Biological Sensors, Institute of Cell Biology and Neuroscience, Goethe University, 60439 Frankfurt am Main, Germany
- Eugenia González-Palomares
- Department of Neurobiology and Biological Sensors, Institute of Cell Biology and Neuroscience, Goethe University, 60439 Frankfurt am Main, Germany
- Manfred Kössl
- Department of Neurobiology and Biological Sensors, Institute of Cell Biology and Neuroscience, Goethe University, 60439 Frankfurt am Main, Germany
50
Ono K, Mizuochi R, Yamamoto K, Sasaoka T, Yamawaki S. Exploring the neural underpinnings of chord prediction uncertainty: an electroencephalography (EEG) study. Sci Rep 2024; 14:4586. [PMID: 38403782] [PMCID: PMC10894873] [DOI: 10.1038/s41598-024-55366-1] [Received: 05/09/2023] [Accepted: 02/22/2024] [Indexed: 02/27/2024] Open
Abstract
Predictive processing in the brain, involving interaction between interoceptive (bodily signal) and exteroceptive (sensory) processing, is essential for understanding music, as it encompasses the dynamics of musical temporality and affective responses. This study explores the relationship between neural correlates and subjective certainty of chord prediction, focusing on the alignment between predicted and actual chord progressions in both musically appropriate chord sequences and random chord sequences. Participants were asked to predict the final chord in sequences while their brain activity was measured using electroencephalography (EEG). We found that the stimulus-preceding negativity (SPN), an EEG component associated with predictive processing of sensory stimuli, was larger for non-harmonic chord sequences than for harmonic chord progressions. Additionally, the heartbeat-evoked potential (HEP), an EEG component related to interoceptive processing, was larger for random chord sequences and correlated with prediction certainty ratings. The HEP also correlated with the N5 component, observed while listening to the final chord. Our findings suggest that the HEP reflects subjective prediction certainty more directly than the SPN does. These findings offer new insights into the neural mechanisms underlying music perception and prediction, emphasizing the importance of considering auditory prediction certainty when examining the neural basis of music cognition.
Affiliation(s)
- Kentaro Ono
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Ryohei Mizuochi
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Kazuki Yamamoto
- Graduate School of Humanities and Social Sciences, Hiroshima University, Higashihiroshima, Japan
- Takafumi Sasaoka
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Shigeto Yamawaki
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan