1
van Ackooij M, Paul JM, van der Zwaag W, van der Stoep N, Harvey BM. Auditory timing-tuned neural responses in the human auditory cortices. Neuroimage 2022;258:119366. PMID: 35690255. DOI: 10.1016/j.neuroimage.2022.119366.
Abstract
Perception of sub-second auditory event timing supports multisensory integration and the perception and production of speech and music. Neural populations tuned for the timing (duration and rate) of visual events were recently described in several human extrastriate visual areas. Here we ask whether the brain also contains neural populations tuned for auditory event timing, and whether these are shared with visual timing. Using 7T fMRI, we measured responses to white noise bursts of changing duration and rate. We analyzed these responses using neural response models describing different parametric relationships between event timing and neural response amplitude. This revealed auditory timing-tuned responses in the primary auditory cortex and in auditory association areas of the belt, parabelt and premotor cortex. While these areas also showed tonotopic tuning for auditory pitch, pitch and timing preferences were not consistently correlated. Auditory timing-tuned response functions differed between these areas, though without clear hierarchical integration of responses. The similarity of auditory and visual timing-tuned responses, together with the lack of overlap between the areas showing these responses for each modality, suggests that modality-specific responses to event timing are computed similarly but from different sensory inputs, and are then transformed differently to suit the needs of each modality.
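The abstract describes fitting parametric neural response models that relate event timing to response amplitude. As a minimal sketch of this kind of model fitting (not the paper's actual model or data; the Gaussian tuning form, parameter values, and noise level are illustrative assumptions), one could fit a duration-tuned response function with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(duration, pref, width, amp):
    """Response amplitude as a Gaussian function of event duration."""
    return amp * np.exp(-0.5 * ((duration - pref) / width) ** 2)

rng = np.random.default_rng(0)
durations = np.linspace(0.05, 1.0, 20)             # sub-second event durations (s)
true_pref, true_width, true_amp = 0.4, 0.15, 1.0   # illustrative ground truth
responses = gaussian_tuning(durations, true_pref, true_width, true_amp)
responses = responses + rng.normal(0, 0.02, durations.size)  # measurement noise

# Estimate the preferred duration, tuning width, and amplitude from the data
params, _ = curve_fit(gaussian_tuning, durations, responses, p0=[0.5, 0.2, 1.0])
pref_hat, width_hat, amp_hat = params
```

In a pRF-style analysis the candidate model whose predicted time course best correlates with the measured fMRI response would be selected per voxel; the least-squares fit above stands in for that comparison.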
Affiliation(s)
- Martijn van Ackooij
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
- Jacob M Paul
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands; Melbourne School of Psychological Sciences, University of Melbourne, Redmond Barry Building, Parkville 3010, Victoria, Australia
- Nathan van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
- Ben M Harvey
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands.
2
Mosabbir AA, Braun Janzen T, Al Shirawi M, Rotzinger S, Kennedy SH, Farzan F, Meltzer J, Bartel L. Investigating the Effects of Auditory and Vibrotactile Rhythmic Sensory Stimulation on Depression: An EEG Pilot Study. Cureus 2022;14:e22557. PMID: 35371676. PMCID: PMC8958118. DOI: 10.7759/cureus.22557.
Abstract
Background. Major depressive disorder (MDD) is a persistent psychiatric condition and one of the leading causes of global disease burden. In a previous study, we investigated the effects of a five-week intervention consisting of rhythmic gamma-frequency (30-70 Hz) vibroacoustic stimulation in 20 patients formally diagnosed with MDD. The findings suggested a significant clinical improvement in depression symptoms as measured by the Montgomery-Asberg Depression Rating Scale (MADRS), with 37% of participants meeting the criteria for clinical response. The goal of the present research was to examine possible changes from baseline to posttreatment in resting-state electroencephalography (EEG) recordings using the same treatment protocol, and to characterize basic changes in EEG related to treatment response.
Materials and methods. The study sample consisted of 19 individuals aged 18-70 years with a clinical diagnosis of MDD. Participants were assessed before and after a five-week treatment period, which consisted of listening to an instrumental musical track on a vibroacoustic device delivering auditory and vibrotactile stimulation in the gamma-band range (30-70 Hz, with particular emphasis on 40 Hz). The primary outcome measures were the change in MADRS score from baseline to posttreatment and resting-state EEG.
Results. Analysis comparing MADRS scores at baseline and post-intervention indicated a significant change in the severity of depression symptoms after five weeks (t = 3.9923, df = 18, p = 0.0009). The clinical response rate was 36.85%. Resting-state EEG power analysis revealed a significant increase in occipital alpha power (t = -2.149, df = 18, p = 0.04548), as well as an increase in prefrontal gamma power among responders (t = 2.8079, df = 13.431, p = 0.01442).
Conclusions. The results indicate that improvements in MADRS scores after rhythmic sensory stimulation (RSS) were accompanied by an increase in alpha power in the occipital region and an increase in gamma power in the prefrontal region, suggesting treatment effects on cortical activity in depression. The results of this pilot study will help inform subsequent controlled studies that evaluate whether treatment response to vibroacoustic stimulation constitutes a real and replicable reduction of depressive symptoms and that characterize the underlying mechanisms.
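The primary analysis above is a paired comparison of MADRS scores before and after treatment. A minimal scipy sketch of that style of analysis (the scores below are synthetic and the 50% response criterion is a common convention, assumed here rather than taken from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 19                                    # sample size reported in the study
baseline = rng.normal(30, 5, n)           # illustrative MADRS scores at baseline
post = baseline - rng.normal(6, 4, n)     # illustrative scores after five weeks

# Paired t-test on baseline vs. posttreatment scores
t_stat, p_value = stats.ttest_rel(baseline, post)

# Clinical response is often defined as >= 50% reduction from baseline
response_rate = np.mean((baseline - post) / baseline >= 0.5)
```

With df = n - 1 = 18, this mirrors the reported comparison; the EEG band-power contrasts in the study follow the same paired design.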
Affiliation(s)
- Susan Rotzinger
- Department of Psychiatry, University Health Network, Toronto, CAN
- Sidney H Kennedy
- Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, CAN
- Faranak Farzan
- School of Mechatronic Systems Engineering, Simon Fraser University, Surrey, CAN
- Jed Meltzer
- Rotman Research Institute, Baycrest Health Sciences, Toronto, CAN
- Lee Bartel
- Faculty of Music, University of Toronto, Toronto, CAN
3
Yurgil KA, Velasquez MA, Winston JL, Reichman NB, Colombo PJ. Music Training, Working Memory, and Neural Oscillations: A Review. Front Psychol 2020;11:266. PMID: 32153474. PMCID: PMC7047970. DOI: 10.3389/fpsyg.2020.00266.
Abstract
This review focuses on reports that link music training to working memory and neural oscillations. Music training is increasingly associated with improvement in working memory, which is strongly related to both localized and distributed patterns of neural oscillations. Importantly, there is a small but growing number of reports of relationships between music training, working memory, and neural oscillations in adults. Taken together, these studies make important contributions to our understanding of the neural mechanisms that support effects of music training on behavioral measures of executive functions. In addition, they reveal gaps in our knowledge that hold promise for further investigation. The current review is divided into the main sections that follow: (1) discussion of behavioral measures of working memory, and effects of music training on working memory in adults; (2) relationships between music training and neural oscillations during temporal stages of working memory; (3) relationships between music training and working memory in children; (4) relationships between music training and working memory in older adults; and (5) effects of entrainment of neural oscillations on cognitive processing. We conclude that the study of neural oscillations is proving useful in elucidating the neural mechanisms of relationships between music training and the temporal stages of working memory. Moreover, a lifespan approach to these studies will likely reveal strategies to improve and maintain executive function during development and aging.
Affiliation(s)
- Kate A. Yurgil
- Department of Psychological Sciences, Loyola University, New Orleans, LA, United States
- Jenna L. Winston
- Department of Psychology, Tulane University, New Orleans, LA, United States
- Noah B. Reichman
- Brain Institute, Tulane University, New Orleans, LA, United States
- Paul J. Colombo
- Department of Psychology, Tulane University, New Orleans, LA, United States
- Brain Institute, Tulane University, New Orleans, LA, United States
4
Hou Y, Chen S. Distinguishing Different Emotions Evoked by Music via Electroencephalographic Signals. Comput Intell Neurosci 2019;2019:3191903. PMID: 30956655. PMCID: PMC6431402. DOI: 10.1155/2019/3191903.
Abstract
Music can evoke a variety of emotions, which may be manifested by distinct signals on the electroencephalogram (EEG). Many previous studies have examined the associations between specific aspects of music, including the subjective emotions aroused, and EEG signal features. However, no study has comprehensively examined music-related EEG features and selected those with the strongest potential for discriminating emotions. This paper therefore reports a series of experiments to identify the most influential EEG features induced by music evoking different emotions (calm, joy, sadness, and anger). We extracted 27-dimensional features from each of 12 electrode positions, then used a correlation-based feature selection method to identify the feature set most strongly related to the emotional states but with the lowest redundancy. Several classifiers, including Support Vector Machine (SVM), C4.5, LDA, and BPNN, were then used to test the recognition accuracy of the original and selected feature sets. Finally, the results are analyzed in detail and the relationships between the selected feature set and human emotions are clearly shown. The classification results of 10 random examinations indicate that the feature sets selected from electrode Pz are more effective than other features when used as the key feature set to classify human emotional states.
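Correlation-based feature selection of the kind described here is commonly implemented as Hall's CFS heuristic, which favors features that correlate with the class but not with each other. A minimal numpy sketch under that assumption (the 27-dimensional features, class labels, and planted structure below are synthetic stand-ins, not the paper's EEG data):

```python
import numpy as np

def cfs_merit(X, y, subset):
    """Hall's CFS merit: high feature-class correlation, low feature-feature
    redundancy. merit = k*r_cf / sqrt(k + k*(k-1)*r_ff)."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                        for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

# Synthetic stand-in for 27-dimensional features over 200 trials, 4 emotions
rng = np.random.default_rng(2)
y = rng.integers(0, 4, 200).astype(float)
X = rng.normal(size=(200, 27))
X[:, 0] += y                                  # feature 0 carries class information
X[:, 1] = X[:, 0] + rng.normal(0, 0.1, 200)   # feature 1 is redundant with 0

# Greedy forward search maximising the merit; stops when merit no longer improves
selected, remaining = [], list(range(27))
while remaining:
    best = max(remaining, key=lambda j: cfs_merit(X, y, selected + [j]))
    if selected and cfs_merit(X, y, selected + [best]) <= cfs_merit(X, y, selected):
        break
    selected.append(best)
    remaining.remove(best)
```

The informative feature is picked first, and its redundant copy adds almost no merit, illustrating why CFS yields compact feature sets for the downstream classifiers.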
Collapse
Affiliation(s)
- Yimin Hou
- School of Automation Engineering, Northeast Electric Power University, Jilin, China
5
Bridwell DA, Cavanagh JF, Collins AGE, Nunez MD, Srinivasan R, Stober S, Calhoun VD. Moving Beyond ERP Components: A Selective Review of Approaches to Integrate EEG and Behavior. Front Hum Neurosci 2018;12:106. PMID: 29632480. PMCID: PMC5879117. DOI: 10.3389/fnhum.2018.00106.
Abstract
Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or “components” derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to assess associations among these measures, including statistical models, data-driven joint models and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements, and will help inform our understanding of brain dynamics that contribute to moment-to-moment cognitive function.
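Among the cognitive models reviewed, the drift diffusion model has a particularly compact simulation: evidence accumulates noisily toward one of two decision boundaries, and the crossing time gives the response time. A minimal numpy sketch (the drift rate, boundary separation, and starting point below are illustrative assumptions, not estimates from any dataset in the review):

```python
import numpy as np

def simulate_ddm(drift, boundary, start, noise_sd=1.0, dt=0.001, max_t=3.0,
                 rng=None):
    """Simulate one DDM trial: evidence starts at `start` and accumulates until
    it crosses 0 (error) or `boundary` (correct). Returns (correct, rt)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = start, 0.0
    while 0.0 < x < boundary and t < max_t:
        # Euler step of the diffusion: deterministic drift plus Gaussian noise
        x += drift * dt + rng.normal(0.0, noise_sd * np.sqrt(dt))
        t += dt
    return (x >= boundary), t

rng = np.random.default_rng(3)
trials = [simulate_ddm(drift=1.5, boundary=2.0, start=1.0, rng=rng)
          for _ in range(300)]
accuracy = np.mean([correct for correct, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
```

Fitting such a model to behavioral data yields latent parameters (drift rate, boundary, non-decision time) that can then be related trial-by-trial to single-trial EEG features, which is the joint-modeling strategy the review discusses.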
Affiliation(s)
- James F Cavanagh
- Department of Psychology, University of New Mexico, Albuquerque, NM, United States
- Anne G E Collins
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Michael D Nunez
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States; Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Ramesh Srinivasan
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States; Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Sebastian Stober
- Research Focus Cognitive Sciences, University of Potsdam, Potsdam, Germany
- Vince D Calhoun
- The Mind Research Network, Albuquerque, NM, United States; Department of ECE, University of New Mexico, Albuquerque, NM, United States