1
Cook AJ, Im HY, Giaschi DE. Large-scale functional networks underlying visual attention. Neurosci Biobehav Rev 2025; 173:106165. PMID: 40245970. DOI: 10.1016/j.neubiorev.2025.106165.
Abstract
Attention networks are loosely defined as the regions of the brain that interact to control behaviour during attentional tasks, but the specific definition of attention networks varies between research programs based on task demands and modalities. The Attention Network Task was designed to exemplify three aspects of attention: alerting, orienting, and executive control, using a visual cueing paradigm. Its proponents propose a system of networks which underlies these aspects. It is debated whether there exists a unified system of networks which underlies attention independently of other cognitive and sensory processing systems. We review the evidence for an attention system within the domain of visual attention, comparing neuroimaging research using fMRI, EEG, MEG, and other methods across a variety of tasks attributed to attention: visual cueing, visual search, and divided attention. The review concludes with a discussion of the limitations of an independent "attention system" for describing how the brain flexibly controls the many abilities attributed to visual attention.
Affiliation(s)
- Alexander J Cook
- Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver, British Columbia V6T 1Z4, Canada; BC Children's Hospital, 4480 Oak St., Vancouver, British Columbia, V6H 3V4, Canada.
- Hee Yeon Im
- Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver, British Columbia V6T 1Z4, Canada; BC Children's Hospital, 4480 Oak St., Vancouver, British Columbia, V6H 3V4, Canada
- Deborah E Giaschi
- BC Children's Hospital, 4480 Oak St., Vancouver, British Columbia, V6H 3V4, Canada; Department of Ophthalmology & Visual Sciences, The University of British Columbia, 2550 Willow St, Vancouver V5Z 3N9, Canada
2
Celebi B, Cavdan M, Drewing K. Spatial attention modulates time perception on the human torso. Atten Percept Psychophys 2025; 87:779-793. PMID: 39971887. PMCID: PMC11965207. DOI: 10.3758/s13414-025-03025-6.
Abstract
Time perception is a fundamental aspect of human life, and is influenced and regulated by cognitive and sensory processes. For instance, spatial attention is found to modulate temporal judgments when resources are allocated to a specific stimulus location in vision and audition. However, it is unclear to what extent the attentional effects observed in vision and audition can be generalized to the tactile modality. Here, we study the effects of attentional cues on the time perception of tactile stimuli presented on the human torso. Across four experiments, we examined (1) the impact of visual versus tactile spatial cues, (2) the modulation of time perception by dynamic versus static tactile cues, (3) the role of spatial congruency between cue and target locations (front vs. back of the torso), and (4) the influence of cue-target intervals. Participants performed temporal bisection tasks, judging whether the vibrations following the cues were closer to short or long anchor durations. Tactile cues expanded the perceived duration of subsequent stimuli, with dynamic cues having a greater effect than static ones. While no congruency effects were observed for left and right torso locations, front-back congruency enhanced time expansion. The attentional effect peaked at a 100-ms cue-target interval. We conclude that the time-expanding effects of spatial attention extend to tactile stimuli on the human torso given that time expansion follows principles known from spatial attention.
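The temporal bisection judgment used in these experiments can be sketched in a toy simulation. This is purely illustrative: the 400/1600-ms anchors, the 15% cue-driven expansion of perceived duration, and the noise level are assumptions chosen for the sketch, not parameters from the study.

```python
import random

def bisection_response(duration_ms, gain, anchors=(400, 1600), noise_sd=0.1):
    """Return True ('long') if the noisy perceived duration exceeds the
    geometric mean of the short and long anchor durations."""
    bisection_point = (anchors[0] * anchors[1]) ** 0.5  # 800 ms
    perceived = duration_ms * gain * (1 + random.gauss(0, noise_sd))
    return perceived > bisection_point

def prop_long(gain, probes, reps=400):
    """Proportion of 'long' responses across probe durations."""
    hits = sum(bisection_response(d, gain) for d in probes for _ in range(reps))
    return hits / (len(probes) * reps)

random.seed(3)
probes = [400, 600, 800, 1000, 1200, 1400, 1600]
baseline = prop_long(1.0, probes)   # no attentional modulation
expanded = prop_long(1.15, probes)  # cue expands perceived duration (assumed 15%)
```

With an expansion gain above 1, the simulated psychometric function shifts so that more probes are judged "long," which mirrors the reported time-expanding effect of tactile cues.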
Affiliation(s)
- Bora Celebi
- Experimental Psychology - HapLab, Justus Liebig University Gießen, Otto-Behaghel-Str. 10F, 35394, Gießen, Germany.
- Müge Cavdan
- Experimental Psychology - HapLab, Justus Liebig University Gießen, Otto-Behaghel-Str. 10F, 35394, Gießen, Germany
- Knut Drewing
- Experimental Psychology - HapLab, Justus Liebig University Gießen, Otto-Behaghel-Str. 10F, 35394, Gießen, Germany
3
Temudo S, Pinheiro AP. What Is Faster than Where in Vocal Emotional Perception. J Cogn Neurosci 2025; 37:239-265. PMID: 39348115. DOI: 10.1162/jocn_a_02251.
Abstract
Voices carry a vast amount of information about speakers (e.g., emotional state, spatial location). Neuroimaging studies postulate that spatial ("where") and emotional ("what") cues are processed by partially independent processing streams. Although behavioral evidence reveals interactions between emotion and space, the temporal dynamics of these processes in the brain and their modulation by attention remain unknown. We investigated whether and how spatial and emotional features interact during voice processing as a function of attention focus. Spatialized nonverbal vocalizations differing in valence (neutral, amusement, anger) were presented at different locations around the head, whereas listeners discriminated either the spatial location or the emotional quality of the voice. Neural activity was measured with EEG, and ERPs were analyzed. Affective ratings were collected at the end of the EEG session. Emotional vocalizations elicited decreased N1 but increased P2 and late positive potential amplitudes. Interactions of space and emotion occurred at the salience detection stage: neutral vocalizations presented at right (vs. left) locations elicited increased P2 amplitudes, but no such differences were observed for emotional vocalizations. When task instructions involved emotion categorization, the P2 was increased for vocalizations presented at front (vs. back) locations. Behaviorally, only valence and arousal ratings showed emotion-space interactions. These findings suggest that emotional representations are activated earlier than spatial representations in voice processing. The perceptual prioritization of emotional cues occurred irrespective of task instructions but was not paralleled by an augmented stimulus representation in space. These findings support the differential responding to emotional information by auditory processing pathways.
4
DeYoe EA, Huddleston W, Greenberg AS. Are neuronal mechanisms of attention universal across human sensory and motor brain maps? Psychon Bull Rev 2024; 31:2371-2389. PMID: 38587756. PMCID: PMC11680640. DOI: 10.3758/s13423-024-02495-3.
Abstract
One's experience of shifting attention from the color to the smell to the act of picking a flower seems like a unitary process applied, at will, to one modality after another. Yet, the unique and separable experiences of sight versus smell versus movement might suggest that the neural mechanisms of attention have been separately optimized to employ each modality to its greatest advantage. Moreover, addressing the issue of universality can be particularly difficult due to a paucity of existing cross-modal comparisons and a dearth of neurophysiological methods that can be applied equally well across disparate modalities. Here we outline some of the conceptual and methodological issues related to this problem and present an instructive example of an experimental approach that can be applied widely throughout the human brain to permit detailed, quantitative comparison of attentional mechanisms across modalities. The ultimate goal is to spur efforts across disciplines to provide a large and varied database of empirical observations that will either support the notion of a universal neural substrate for attention or more clearly identify the degree to which attentional mechanisms are specialized for each modality.
Affiliation(s)
- Edgar A DeYoe
- Department of Radiology, Medical College of Wisconsin, 8701 Watertown Plank Rd, Milwaukee, WI, 53226, USA.
- , Signal Mountain, USA.
- Wendy Huddleston
- School of Rehabilitation Sciences and Technology, College of Health Professions and Sciences, University of Wisconsin - Milwaukee, 3409 N. Downer Ave, Milwaukee, WI, 53211, USA
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI, 53226, USA
5
Niu Y, Chen N, Zhu H, Li G, Chen Y. Brain connectivity and time-frequency fusion-based auditory spatial attention detection. Neuroscience 2024; 560:397-405. PMID: 39265802. DOI: 10.1016/j.neuroscience.2024.09.017.
Abstract
Auditory spatial attention detection (ASAD) aims to decipher the spatial locus of a listener's selective auditory attention from electroencephalogram (EEG) signals. However, current models may exhibit deficiencies in EEG feature extraction, leading to overfitting on small datasets or a decline in EEG discriminability. Furthermore, they often neglect topological relationships between EEG channels and, consequently, brain connectivities. Although graph-based EEG modeling has been employed in ASAD, effectively incorporating both local and global connectivities remains a great challenge. To address these limitations, we propose a new ASAD model. First, time-frequency feature fusion provides a more precise and discriminative EEG representation. Second, EEG segments are treated as graphs, and the graph convolution and global attention mechanism are leveraged to capture local and global brain connections, respectively. A series of experiments are conducted in a leave-trials-out cross-validation manner. On the MAD-EEG and KUL datasets, the accuracies of the proposed model are more than 9% and 3% higher than those of the corresponding state-of-the-art models, respectively, while the accuracy of the proposed model on the SNHL dataset is roughly comparable to that of the state-of-the-art model. EEG time-frequency feature fusion proves to be indispensable in the proposed model. EEG electrodes over the frontal cortex are most important for ASAD tasks, followed by those over the temporal lobe. Additionally, the proposed model performs well even on small datasets. This study contributes to a deeper understanding of the neural encoding related to human hearing and attention, with potential applications in neuro-steered hearing devices.
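The time-frequency feature fusion step can be loosely illustrated in pure Python. This is a sketch, not the authors' implementation: the naive DFT, the variance time-domain feature, and the alpha/beta band edges are all assumptions chosen for clarity.

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of spectral power in [f_lo, f_hi] Hz, via a naive DFT."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            total += abs(coeff) ** 2 / n
    return total

def fused_features(signal, fs):
    """Concatenate a time-domain feature (variance) with frequency-band powers."""
    mean = sum(signal) / len(signal)
    variance = sum((x - mean) ** 2 for x in signal) / len(signal)
    return [variance,
            band_power(signal, fs, 8.0, 13.0),    # alpha band
            band_power(signal, fs, 13.0, 30.0)]   # beta band

# Synthetic one-second, one-channel "EEG" segment dominated by 10-Hz alpha.
fs = 128
segment = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
feats = fused_features(segment, fs)
```

For this alpha-dominated segment the alpha-band power term dwarfs the beta-band term, so concatenating both domains keeps the discriminative spectral information alongside the time-domain summary.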
Affiliation(s)
- Yixiang Niu
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Ning Chen
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China.
- Hongqing Zhu
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Guangqiang Li
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Yibo Chen
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
6
Niu Y, Chen N, Zhu H, Li G, Chen Y. Subject-independent auditory spatial attention detection based on brain topology modeling and feature distribution alignment. Hear Res 2024; 453:109104. PMID: 39255528. DOI: 10.1016/j.heares.2024.109104.
Abstract
Auditory spatial attention detection (ASAD) seeks to determine which speaker in a surround sound field a listener is focusing on based on the listener's brain biosignals. Although existing studies have achieved ASAD from a single-trial electroencephalogram (EEG), the huge inter-subject variability makes them generally perform poorly in cross-subject scenarios. Moreover, most ASAD methods do not take full advantage of topological relationships between EEG channels, which are crucial for high-quality ASAD. Recently, some advanced studies have introduced graph-based brain topology modeling into ASAD, but how to calculate edge weights in a graph to better capture actual brain connectivity is worthy of further investigation. To address these issues, we propose a new ASAD method in this paper. First, we model a multi-channel EEG segment as a graph, where differential entropy serves as the node feature, and a static adjacency matrix is generated based on inter-channel mutual information to quantify brain functional connectivity. Then, different subjects' EEG graphs are encoded into a shared embedding space through a total variation graph neural network. Meanwhile, feature distribution alignment based on multi-kernel maximum mean discrepancy is adopted to learn subject-invariant patterns. Note that we align EEG embeddings of different subjects to reference distributions rather than to each other for the purpose of privacy preservation. A series of experiments on open datasets demonstrate that the proposed model outperforms state-of-the-art ASAD models in cross-subject scenarios with relatively low computational complexity, and that feature distribution alignment improves the generalizability of the proposed model to a new subject.
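The two graph ingredients named in this abstract, differential entropy node features and a mutual-information adjacency, can be sketched as follows. This is a schematic, not the paper's code: the Gaussian entropy formula, the plug-in histogram MI estimator, the bin count, and the synthetic channels are assumptions for illustration.

```python
import math
import random

def differential_entropy(x):
    """Differential entropy under a Gaussian assumption: 0.5*ln(2*pi*e*var)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def mutual_information(x, y, bins=8):
    """Plug-in histogram estimate of I(X;Y) in nats (a candidate edge weight)."""
    n = len(x)
    def digitize(v):
        lo, hi = min(v), max(v)
        width = (hi - lo) / bins or 1.0
        return [min(int((u - lo) / width), bins - 1) for u in v]
    bx, by = digitize(x), digitize(y)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

random.seed(0)
ch1 = [random.gauss(0, 1) for _ in range(2000)]   # channel 1
ch2 = [v + random.gauss(0, 0.1) for v in ch1]     # strongly coupled channel
ch3 = [random.gauss(0, 1) for _ in range(2000)]   # independent channel
edge_12 = mutual_information(ch1, ch2)  # large -> strong functional connectivity
edge_13 = mutual_information(ch1, ch3)  # near zero -> weak edge
```

Coupled channels receive a large edge weight while independent channels receive a near-zero one, which is the property an MI-based adjacency matrix relies on.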
Affiliation(s)
- Yixiang Niu
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Ning Chen
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China.
- Hongqing Zhu
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Guangqiang Li
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Yibo Chen
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
7
Choudhari V, Han C, Bickel S, Mehta AD, Schevon C, McKhann GM, Mesgarani N. Brain-Controlled Augmented Hearing for Spatially Moving Conversations in Multi-Talker Environments. Adv Sci (Weinh) 2024; 11:e2401379. PMID: 39248654. PMCID: PMC11538705. DOI: 10.1002/advs.202401379.
Abstract
Focusing on a specific conversation amidst multiple interfering talkers is challenging, especially for those with hearing loss. Brain-controlled assistive hearing devices aim to alleviate this problem by enhancing the attended speech based on the listener's neural signals using auditory attention decoding (AAD). Departing from conventional AAD studies that relied on oversimplified scenarios with stationary talkers, a realistic AAD task that involves multiple talkers taking turns as they continuously move in space in background noise is presented. Invasive electroencephalography (iEEG) data are collected from three neurosurgical patients as they focused on one of the two moving conversations. An enhanced brain-controlled assistive hearing system that combines AAD and a binaural speaker-independent speech separation model is presented. The separation model unmixes talkers while preserving their spatial location and provides talker trajectories to the neural decoder to improve AAD accuracy. Subjective and objective evaluations show that the proposed system enhances speech intelligibility and facilitates conversation tracking while maintaining spatial cues and voice quality in challenging acoustic environments. This research demonstrates the potential of this approach in real-world scenarios and marks a significant step toward developing assistive hearing technologies that adapt to the intricate dynamics of everyday auditory experiences.
Affiliation(s)
- Vishal Choudhari
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY 10027, USA
- Cong Han
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY 10027, USA
- Stephan Bickel
- Hofstra Northwell School of Medicine, Uniondale, NY 11549, USA
- The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
- Ashesh D. Mehta
- Hofstra Northwell School of Medicine, Uniondale, NY 11549, USA
- The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
- Guy M. McKhann
- Department of Neurological Surgery, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY 10027, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY 10027, USA
8
Mahjoory K, Bahmer A, Henry MJ. Convolutional neural networks can identify brain interactions involved in decoding spatial auditory attention. PLoS Comput Biol 2024; 20:e1012376. PMID: 39116183. PMCID: PMC11335149. DOI: 10.1371/journal.pcbi.1012376.
Abstract
Human listeners have the ability to direct their attention to a single speaker in a multi-talker environment. The neural correlates of selective attention can be decoded from a single trial of electroencephalography (EEG) data. In this study, leveraging source-reconstructed and anatomically resolved EEG data as inputs, we sought to employ a CNN as an interpretable model to uncover task-specific interactions between brain regions, rather than simply to utilize it as a black-box decoder. To this end, our CNN model was specifically designed to learn pairwise interaction representations for 10 cortical regions from five-second inputs. By exclusively utilizing these features for decoding, our model was able to attain a median accuracy of 77.56% for within-participant and 65.14% for cross-participant classification. Through ablation analysis, together with dissecting the features of the models and applying cluster analysis, we were able to discern the presence of alpha-band-dominated inter-hemisphere interactions, as well as alpha- and beta-band-dominant interactions that were either hemisphere-specific or characterized by a contrasting pattern between the right and left hemispheres. These interactions were more pronounced in parietal and central regions for within-participant decoding, but in parietal, central, and partly frontal regions for cross-participant decoding. These findings demonstrate that our CNN model can effectively utilize features known to be important in auditory attention tasks and suggest that the application of domain-knowledge-inspired CNNs on source-reconstructed EEG data can offer a novel computational framework for studying task-relevant brain interactions.
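A drastically simplified stand-in for the pairwise region-interaction features is sketched below, using plain Pearson correlation between synthetic region time courses in place of the learned CNN representations. The region names, the shared 10-Hz drive, and the noise levels are invented for this sketch.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

def pairwise_interactions(regions):
    """One interaction feature per unordered pair of region time series."""
    names = sorted(regions)
    return {(a, b): pearson(regions[a], regions[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

random.seed(1)
fs, secs = 100, 5
alpha = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs * secs)]
regions = {
    "parietal_L": [v + random.gauss(0, 0.1) for v in alpha],      # shares alpha drive
    "parietal_R": [v + random.gauss(0, 0.1) for v in alpha],      # shares alpha drive
    "frontal_L": [random.gauss(0, 1) for _ in range(fs * secs)],  # independent
}
interactions = pairwise_interactions(regions)
```

With 10 cortical regions, as in the study, this scheme would yield 45 pairwise features; here the two parietal regions that share the alpha drive show a strong interaction while the independent frontal region does not.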
Affiliation(s)
- Keyvan Mahjoory
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Andreas Bahmer
- RheinMain University of Applied Sciences Campus Ruesselsheim, Wiesbaden, Germany
- Molly J. Henry
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario, Canada
9
Razzaghipour A, Ashrafi M, Mohammadzadeh A. A Review of Auditory Attention: Neural Mechanisms, Theories, and Affective Disorders. Indian J Otolaryngol Head Neck Surg 2024; 76:2250-2256. PMID: 38883545. PMCID: PMC11169100. DOI: 10.1007/s12070-023-04373-1.
Abstract
Attention is a fundamental aspect of human cognitive function and is crucial for essential activities such as learning, social interaction, and routine tasks. Notably, auditory attention involves complex interactions and collaboration among multiple brain networks. Recognizing impairments of auditory attention, comprehending their underlying mechanisms, and identifying the brain regions activated are essential for developing treatments and interventions for individuals facing auditory attention deficits, which underscores the significance of investigating these matters. In the current study, we conducted a review of the full text of 53 articles on auditory attention, its mechanisms, and its networks, published between 2000 and 2023 and retrieved from databases such as Science Direct, Google Scholar, ProQuest, and PubMed using the keywords "attention," "auditory attention," "auditory attention impairment," and "theories of attention"; we focused on articles that provided discussions within this research domain. The studies reviewed demonstrate that auditory attention is more than an acoustic attribute and assumes a fundamental role in complex acoustic environments, information processing, and even speech comprehension. In the context of this study, we have reviewed and summarized the proposed theories of attention and the brain networks involved in different forms of auditory attention. In conclusion, the integration of auditory attention assessments, behavioral observations, and an understanding of the neural mechanisms and brain regions implicated in auditory attention proves to be an effective approach for the diagnosis and treatment of attention-related disorders.
Affiliation(s)
- Amirreza Razzaghipour
- Student Research Committee, Department of Audiology, Faculty of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Majid Ashrafi
- Department of Audiology, Faculty of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ali Mohammadzadeh
- Department of Audiology, Faculty of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
10
Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024; 36:303-326. PMID: 38010315. DOI: 10.1162/jocn_a_02090.
Abstract
Auditory commands are often executed more efficiently than visual commands. However, empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect and found consistently enhanced RT benefits for the matched auditory cues compared with the matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: The auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang
- South China Normal University, Guangzhou, China
- Ying Fang
- South China Normal University, Guangzhou, China
- Qiang Guo
- Guangdong Sanjiu Brain Hospital, Guangzhou, China
- Lu Shen
- South China Normal University, Guangzhou, China
- Qi Chen
- South China Normal University, Guangzhou, China
11
Lee J, Park S. Multi-modal Representation of the Size of Space in the Human Brain. J Cogn Neurosci 2024; 36:340-361. PMID: 38010320. DOI: 10.1162/jocn_a_02092.
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multivoxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that angular gyrus and the right medial frontal gyrus had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
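The multivoxel pattern classification step can be caricatured with a nearest-centroid decoder on synthetic voxel patterns. Everything here (voxel count, noise level, class prototypes, and the classifier itself) is an assumption made for illustration; the study's actual MVPA pipeline differs.

```python
import random

def nearest_centroid_predict(train, labels, probe):
    """Assign `probe` to the class whose mean training pattern is closest."""
    centroids = {}
    for lab in set(labels):
        rows = [p for p, l in zip(train, labels) if l == lab]
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, probe))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))

random.seed(2)
n_voxels = 50
# One fixed "activation prototype" per spatial-size condition.
proto = {size: [random.gauss(0, 1) for _ in range(n_voxels)]
         for size in ("small", "large")}
patterns, labels = [], []
for _ in range(20):
    for size in ("small", "large"):
        patterns.append([m + random.gauss(0, 0.5) for m in proto[size]])
        labels.append(size)

# Leave-one-out cross-validation over the 40 synthetic trials.
correct = sum(
    nearest_centroid_predict(patterns[:i] + patterns[i + 1:],
                             labels[:i] + labels[i + 1:],
                             patterns[i]) == labels[i]
    for i in range(len(patterns)))
accuracy = correct / len(patterns)
```

Above-chance cross-validated accuracy on held-out trials is the criterion such classifiers use to conclude that a region carries information about the size of space.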
12
Lee J, Park S. Multi-modal representation of the size of space in the human brain. bioRxiv [Preprint] 2023:2023.07.24.550343. PMID: 37546991. PMCID: PMC10402083. DOI: 10.1101/2023.07.24.550343.
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multi-voxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that the AG and the right IFG pars opercularis had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
Affiliation(s)
- Jaeeun Lee
- Department of Psychology, University of Minnesota, Minneapolis, MN
- Soojin Park
- Department of Psychology, Yonsei University, Seoul, South Korea
13
Shim H, Gibbs L, Rush K, Ham J, Kim S, Kim S, Choi I. Neural Mechanisms Related to the Enhanced Auditory Selective Attention Following Neurofeedback Training: Focusing on Cortical Oscillations. Appl Sci (Basel) 2023; 13:8499. PMID: 39449731. PMCID: PMC11500732. DOI: 10.3390/app13148499.
Abstract
Selective attention can be a useful tactic for speech-in-noise (SiN) interpretation as it strengthens cortical responses to attended sensory inputs while suppressing others. This cortical process is referred to as attentional modulation. Our earlier study showed that a neurofeedback training paradigm was effective for improving the attentional modulation of cortical auditory evoked responses. However, it was unclear how such neurofeedback training improved attentional modulation. This paper attempts to unveil what neural mechanisms underlie strengthened auditory selective attention during the neurofeedback training paradigm. Our EEG time-frequency analysis found that, when spatial auditory attention was focused, a fronto-parietal brain network was activated. Additionally, the neurofeedback training increased beta oscillation, which may imply top-down processing was used to anticipate the sound to be attended selectively with prior information. When the subjects were attending to the sound from the right, they exhibited more alpha oscillation in the right parietal cortex during the final session compared to the first, indicating improved spatial inhibitory processing to suppress sounds from the left. After the four-week training period, the temporal cortex exhibited improved attentional modulation of beta oscillation. This suggests strengthened neural activity to predict the target. Moreover, there was an improvement in the strength of attentional modulation on cortical evoked responses to sounds. The Placebo Group, who experienced similar attention training with the exception that feedback was based simply on behavioral accuracy, did not experience these training effects. These findings demonstrate how neurofeedback training effectively improves the neural mechanisms underlying auditory selective attention.
Affiliation(s)
- Hwan Shim: Department of Electrical and Computer Engineering Technology, Rochester Institute of Technology, Rochester, NY 14623, USA
- Leah Gibbs: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA
- Karsyn Rush: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA
- Jusung Ham: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA
- Subong Kim: Department of Communication Sciences and Disorders, Montclair State University, Montclair, NJ 07043, USA
- Sungyoung Kim: Department of Electrical and Computer Engineering Technology, Rochester Institute of Technology, Rochester, NY 14623, USA; Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Inyong Choi: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA; Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea

14
Maezawa T, Kawahara JI. Processing symmetry between visual and auditory spatial representations in updating working memory. Q J Exp Psychol (Hove) 2023; 76:672-704. [PMID: 35570663 DOI: 10.1177/17470218221103253]
Abstract
Updating spatial representations in visual and auditory working memory relies on common processes, and the modalities should compete for attentional resources. If competition occurs, one type of spatial information is presumably weighted over the other, irrespective of sensory modality. This study used incompatible spatial information conveyed from two different cue modalities to examine relative dominance in memory updating. Participants mentally manoeuvred a designated target in a matrix according to visual or auditory stimuli that were presented simultaneously, to identify a terminal location. Prior to the navigation task, the relative perceptual saliences of the visual cues were manipulated to be equal, superior, or inferior to the auditory cues. The results demonstrate that visual and auditory information competed for attentional resources, such that visual/auditory guidance was impaired by incongruent cues delivered from the other modality. Although visual bias was generally observed in working-memory navigation, stimuli of relatively high salience interfered with or facilitated other stimuli regardless of modality, demonstrating the processing symmetry of spatial updating in visual and auditory spatial working memory. Furthermore, this processing symmetry can be identified during the encoding of sensory inputs into working-memory representations. The results imply that auditory spatial updating is comparable to visual spatial updating in that salient stimuli receive a high priority when selecting inputs and are used when tracking spatial representations.
Affiliation(s)
- Tomoki Maezawa: Department of Psychology, Hokkaido University, Sapporo, Japan
- Jun I Kawahara: Department of Psychology, Hokkaido University, Sapporo, Japan

15
Curtis MT, Sklar AL, Coffman BA, Salisbury DF. Functional connectivity and gray matter deficits within the auditory attention circuit in first-episode psychosis. Front Psychiatry 2023; 14:1114703. [PMID: 36860499 PMCID: PMC9968732 DOI: 10.3389/fpsyt.2023.1114703]
Abstract
Background: Selective attention deficits in first-episode psychosis (FEP) can be indexed by impaired attentional modulation of the auditory M100. It is unknown whether the pathophysiology underlying this deficit is restricted to auditory cortex or involves a distributed attention network. We examined the auditory attention network in FEP.
Methods: MEG was recorded from 27 FEP and 31 matched healthy controls (HC) while they alternately ignored or attended to tones. A whole-brain analysis of MEG source activity during the auditory M100 identified non-auditory areas with increased activity. Time-frequency activity and phase-amplitude coupling were examined in auditory cortex to identify the carrier frequency of attentional modulation. Attention networks were defined by phase-locking at the carrier frequency. Spectral and gray matter deficits in the identified circuits were examined in FEP.
Results: Attention-related activity was identified in prefrontal and parietal regions, most markedly in precuneus. Theta power and its phase coupling to gamma amplitude increased with attention in left primary auditory cortex. Two unilateral attention networks with precuneus seeds were identified in HC. Network synchrony was impaired in FEP. Gray matter thickness was reduced within the left-hemisphere network in FEP but did not correlate with synchrony.
Conclusion: Several extra-auditory areas with attention-related activity were identified. Theta was the carrier frequency for attentional modulation in auditory cortex. Left- and right-hemisphere attention networks were identified, with bilateral functional deficits and left-hemisphere structural deficits, although FEP showed intact theta phase-gamma amplitude coupling in auditory cortex. These novel findings indicate attention-related circuitopathy early in psychosis that is potentially amenable to future non-invasive interventions.
Affiliation(s)
- Dean F. Salisbury: Clinical Neurophysiology Research Laboratory, Department of Psychiatry, Western Psychiatric Hospital, University of Pittsburgh School of Medicine, Pittsburgh, PA, United States

16
de la Piedra Walter M, Notbohm A, Eling P, Hildebrandt H. Audiospatial evoked potentials for the assessment of spatial attention deficits in patients with severe cerebrovascular accidents. J Clin Exp Neuropsychol 2021; 43:623-636. [PMID: 34592915 DOI: 10.1080/13803395.2021.1984397]
Abstract
Introduction: Neuropsychological assessment of spatial orientation in post-acute patients with large brain lesions is often limited by additional cognitive disorders such as aphasia, apraxia, or reduced responsiveness.
Methods: To cope with these limitations, we developed a paradigm using passive audiospatial event-related potentials (pAERPs): participants merely listened over headphones to horizontally moving cue tones followed by a short target tone, presented either on the side to which the cue moved or on the opposite side. Two runs of 120 trials were presented, and AERPs were registered with two electrodes mounted at C3 and C4. Nine sub-acute patients with large left-hemisphere (LH) or right-hemisphere (RH) lesions and nine controls participated.
Results: Patients had no problems completing the assessment. RH patients showed a reduced N100 for left-sided targets in all conditions. LH patients showed a diminished N100 for invalid trials and contralesional targets.
Conclusion: Measuring AERPs to moving auditory cues with only two electrodes makes it possible to investigate spatial attentional deficits in patients with large RH and LH lesions, who are often unable to perform clinical tests. The procedure can be implemented easily in acute and rehabilitation settings and might enable the investigation of spatial attentional processes even in patients with minimal conscious awareness.
Affiliation(s)
- Annika Notbohm: Department of Neurology, Klinikum Bremen-Ost, Bremen, Germany
- Paul Eling: Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- Helmut Hildebrandt: Department of Neurology, Klinikum Bremen-Ost, Bremen, Germany; Institute of Psychology, University of Oldenburg, Oldenburg, Germany

17
Ahmad H, Tonelli A, Campus C, Capris E, Facchini V, Sandini G, Gori M. An audio-visual motor training improves audio spatial localization skills in individuals with scotomas due to retinal degenerative diseases. Acta Psychol (Amst) 2021; 219:103384. [PMID: 34365274 DOI: 10.1016/j.actpsy.2021.103384]
Abstract
Several studies have shown that impairments in one sensory modality can induce perceptual deficits in tasks involving the remaining senses. For example, people with retinal degenerative diseases such as macular degeneration (MD) and a central scotoma show auditory localization abilities biased toward the scotoma area of the visual field. This result indicates a cross-modal reorganization of auditory spatial processing when visual information is impaired. Recent work has shown that multisensory training can improve spatial perception. In line with this idea, we hypothesized that audio-visual and motor training could improve the spatial skills of people with retinal degenerative diseases. In the present study, we tested this hypothesis in two groups of scotoma patients using an auditory and a visual localization task administered before and after either training or rest. The training group was tested before and after multisensory training, while the control group performed the two tasks twice, separated by a 10-min break. The training was delivered with a portable device positioned on the finger, providing spatially and temporally congruent audio and visual feedback during arm movement. Our findings show improved auditory and visual localization for the training group but not for the control group. These results suggest that integrating multiple spatial sensory cues can improve the spatial perception of scotoma patients. This finding motivates further research and applications for people with a central scotoma, for whom rehabilitation has classically focused on training the visual modality only.
Affiliation(s)
- Hafsah Ahmad: Robotics, Brain and Cognitive Sciences (RBCS), Genova, Italy; Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy; University of Genova, Genova, Italy; Sino-Pakistan Centre for Artificial Intelligence (SPCAI), Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology (PAF-IAST), Haripur, Pakistan
- Alessia Tonelli: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
- Claudio Campus: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
- Giulio Sandini: Robotics, Brain and Cognitive Sciences (RBCS), Genova, Italy
- Monica Gori: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy

18
Somers DC, Michalka SW, Tobyne SM, Noyce AL. Individual Subject Approaches to Mapping Sensory-Biased and Multiple-Demand Regions in Human Frontal Cortex. Curr Opin Behav Sci 2021; 40:169-177. [PMID: 34307791 PMCID: PMC8294130 DOI: 10.1016/j.cobeha.2021.05.002]
Abstract
Sensory modality, widely accepted as a key factor in the functional organization of posterior cortical areas, also shapes the organization of human frontal lobes. 'Deep imaging,' or the practice of collecting a sizable amount of data on individual subjects, offers significant advantages in revealing fine-scale aspects of functional organization of the human brain. Here, we review deep imaging approaches to mapping multiple sensory-biased and multiple-demand regions within human lateral frontal cortex. In addition, we discuss how deep imaging methods can be transferred to large public data sets to further extend functional mapping at the group level. We also review how 'connectome fingerprinting' approaches, combined with deep imaging, can be used to localize fine-grained functional organization in individual subjects using resting-state data. Finally, we summarize current 'best practices' for deep imaging.
Affiliation(s)
- David C. Somers: Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA
- Samantha W. Michalka: Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA; Olin College of Engineering, Needham, MA, USA
- Sean M. Tobyne: Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA; Physiological Systems – Sensing, Perception and Applied Robotics Division, Charles River Analytics, Inc., Cambridge, MA, USA
- Abigail L. Noyce: Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA

19
Hanenberg C, Schlüter MC, Getzmann S, Lewald J. Short-Term Audiovisual Spatial Training Enhances Electrophysiological Correlates of Auditory Selective Spatial Attention. Front Neurosci 2021; 15:645702. [PMID: 34276281 PMCID: PMC8280319 DOI: 10.3389/fnins.2021.645702]
Abstract
Audiovisual cross-modal training has been proposed as a tool to improve human spatial hearing. Here, we investigated training-induced modulations of event-related potential (ERP) components that have been associated with processes of auditory selective spatial attention when a speaker of interest has to be localized in a multiple-speaker ("cocktail-party") scenario. Forty-five healthy participants were tested, including younger (19-29 years; n = 21) and older (66-76 years; n = 24) age groups. Three conditions of short-term training (duration 15 min) were compared, requiring localization of non-speech targets under "cocktail-party" conditions with either (1) synchronous presentation of co-localized auditory-target and visual stimuli (audiovisual-congruency training), (2) immediate visual feedback on correct or incorrect localization responses (visual-feedback training), or (3) presentation of spatially incongruent auditory-target and visual stimuli presented at random positions with synchronous onset (control condition). Prior to and after training, participants were tested in an auditory spatial attention task (15 min), requiring localization of a predefined spoken word out of three distractor words, which were presented with synchronous stimulus onset from different positions. Peaks of ERP components were analyzed with a specific focus on the N2, which is known to be a correlate of auditory selective spatial attention. N2 amplitudes were significantly larger after audiovisual-congruency training compared with the remaining training conditions for younger, but not older, participants. Also, at the time of the N2, distributed source analysis revealed an enhancement of neural activity induced by audiovisual-congruency training in dorsolateral prefrontal cortex (Brodmann area 9) for the younger group. These findings suggest that cross-modal processes induced by audiovisual-congruency training under "cocktail-party" conditions at a short time scale resulted in an enhancement of correlates of auditory selective spatial attention.
Affiliation(s)
- Stephan Getzmann: Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald: Faculty of Psychology, Ruhr University Bochum, Bochum, Germany

20
Heine L, Corneyllie A, Gobert F, Luauté J, Lavandier M, Perrin F. Virtually spatialized sounds enhance auditory processing in healthy participants and patients with a disorder of consciousness. Sci Rep 2021; 11:13702. [PMID: 34211035 PMCID: PMC8249625 DOI: 10.1038/s41598-021-93151-6]
Abstract
Neuroscientific and clinical studies of auditory perception often use headphones to limit sound interference. Under these conditions, sounds are perceived as internalized because they lack the attributes that normally accompany a sound produced from a point in space around the listener. Without the spatial attention mechanisms engaged by localized sounds, auditory functional assessments could thus be underestimated. We hypothesized that adding virtual externalization and localization cues to sounds presented through headphones would enhance sound discrimination in both healthy participants and patients with a disorder of consciousness (DOC). Hd-EEG was analyzed in 14 healthy participants and 18 patients while they listened to self-relevant and irrelevant stimuli in two forms: diotic (classic sound presentation with an "internalized" feeling) and convolved with a binaural room impulse response (to create an "externalized" feeling). Convolution enhanced the brain's discriminative response, as well as the processing of the irrelevant sounds themselves, in both healthy participants and DOC patients. In the healthy participants, these effects were associated with enhanced activation of both the dorsal (where/how) and ventral (what) auditory streams, suggesting that spatial attributes support speech discrimination. Thus, virtually spatialized sounds might "call attention to the outside world" and improve the sensitivity of assessments of brain function in DOC patients.
Affiliation(s)
- Lizette Heine: Audition Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, UCBL, INSERM U1028, CNRS UMR5292, Centre Hospitalier Le Vinatier, Bâtiment 462, Neurocampus Michel Jouvet, 95 Boulevard Pinel, Bron Cedex, 69675, Lyon, France; Laboratoire de Tribologie et Dynamique des Systèmes UMR 5513, ENTPE, University of Lyon, Rue Maurice Audin, 69518, Vaulx-en-Velin Cedex, France
- Alexandra Corneyllie: Audition Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, UCBL, INSERM U1028, CNRS UMR5292, Centre Hospitalier Le Vinatier, Bâtiment 462, Neurocampus Michel Jouvet, 95 Boulevard Pinel, Bron Cedex, 69675, Lyon, France
- Florent Gobert: Audition Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, UCBL, INSERM U1028, CNRS UMR5292, Centre Hospitalier Le Vinatier, Bâtiment 462, Neurocampus Michel Jouvet, 95 Boulevard Pinel, Bron Cedex, 69675, Lyon, France; Trajectoires Team, Lyon Neuroscience Research Center, UCBL, INSERM U1028, CNRS UMR5292, Centre Hospitalier Le Vinatier, Lyon, France
- Jacques Luauté: Service de Médecine Physique et de Réadaptation, Rééducation Neurologique, Hôpital Henry-Gabrielle, CHU de Lyon, 69230, Saint-Genis-Laval, France; Trajectoires Team, Lyon Neuroscience Research Center, UCBL, INSERM U1028, CNRS UMR5292, Centre Hospitalier Le Vinatier, Lyon, France
- Mathieu Lavandier: Laboratoire de Tribologie et Dynamique des Systèmes UMR 5513, ENTPE, University of Lyon, Rue Maurice Audin, 69518, Vaulx-en-Velin Cedex, France
- Fabien Perrin: Audition Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, UCBL, INSERM U1028, CNRS UMR5292, Centre Hospitalier Le Vinatier, Bâtiment 462, Neurocampus Michel Jouvet, 95 Boulevard Pinel, Bron Cedex, 69675, Lyon, France

21
LaCroix AN, Baxter LC, Rogalsky C. Auditory attention following a left hemisphere stroke: comparisons of alerting, orienting, and executive control performance using an auditory Attention Network Test. Audit Percept Cogn 2021; 3:238-251. [PMID: 34671722 PMCID: PMC8525781 DOI: 10.1080/25742442.2021.1922988]
Abstract
Introduction: Auditory attention is a critical foundation for successful language comprehension, yet it is rarely studied in individuals with acquired language disorders.
Methods: We used an auditory version of the well-studied Attention Network Test to study alerting, orienting, and executive control in 28 persons with chronic stroke (PWS). We further sought to characterize the neurobiology of each auditory attention measure in our sample using exploratory lesion-symptom mapping analyses.
Results: PWS exhibited the expected executive control effect (i.e., decreased accuracy for incongruent compared with congruent trials), but their alerting and orienting attention were disrupted: PWS showed no alerting effect and were actually distracted by the auditory spatial orienting cue relative to the control cue. Lesion-symptom mapping indicated that poorer alerting and orienting were associated with damage to the left retrolenticular part of the internal capsule (adjacent to the thalamus) and the left posterior middle frontal gyrus (overlapping the frontal eye fields), respectively.
Discussion: The behavioral findings correspond to our previous work on alerting and spatial orienting attention in persons with aphasia in the visual modality and suggest that auditory alerting and spatial orienting attention may be impaired in PWS because stroke lesions damage multi-modal attention resources.
Affiliation(s)
- Corianne Rogalsky: College of Health Solutions, Arizona State University, Tempe, AZ, USA

22
Commonalities of visual and auditory working memory in a spatial-updating task. Mem Cognit 2021; 49:1172-1187. [PMID: 33616864 DOI: 10.3758/s13421-021-01151-8]
Abstract
Although visual and auditory inputs are initially processed in separate perception systems, studies have built on the idea that to maintain spatial information these modalities share a component of working memory. The present study used working memory navigation tasks to examine functional similarities and dissimilarities in the performance of updating tasks. Participants mentally updated the spatial location of a target in a virtual array in response to sequential pictorial and sonant directional cues before identifying the target's final location. We predicted that if working memory representations are modality-specific, mixed-modality cues would demonstrate a cost of modality switching relative to unimodal cues. The results indicate that updating performance using visual unimodal cues positively correlated with that using auditory unimodal cues. Task performance using unimodal cues was comparable to that using mixed modality cues. The results of a subsequent experiment involving updating of target traces were consistent with those of the preceding experiments and support the view of modality-nonspecific memory.
23
Simões EN, Carvalho ALN, Schmidt SL. The Role of Visual and Auditory Stimuli in Continuous Performance Tests: Differential Effects on Children With ADHD. J Atten Disord 2021; 25:53-62. [PMID: 29671360 DOI: 10.1177/1087054718769149]
Abstract
Objective: Continuous performance tests (CPTs) usually utilize visual stimuli. A previous investigation showed that inattention is partially independent of modality, but response inhibition is modality-specific. Here we aimed to compare performance on visual and auditory CPTs in ADHD and in healthy controls.
Method: The sample consisted of 160 elementary and high school students (43 ADHD, 117 controls). For each sensory modality, five variables were extracted: commission errors (CEs), omission errors (OEs), reaction time (RT), variability of reaction time (VRT), and coefficient of variability (CofV = VRT / RT).
Results: The ADHD group exhibited higher rates for all test variables. The discriminant analysis indicated that auditory OE was the most reliable variable for discriminating between groups, followed by visual CE, auditory CE, and auditory CofV. The discriminant equation classified ADHD with 76.3% accuracy.
Conclusion: Auditory parameters in the inattention domain (OE and VRT) can discriminate ADHD from controls. For the hyperactive/impulsive domain (CE), the two modalities are equally important.
24
Orienting Attention to Short-Term Memory Representations via Sensory Modality and Semantic Category Retro-Cues. eNeuro 2020; 7:ENEURO.0018-20.2020. [PMID: 33139321 PMCID: PMC7716432 DOI: 10.1523/eneuro.0018-20.2020]
Abstract
There is growing interest in characterizing the neural mechanisms underlying the interactions between attention and memory. Current theories posit that reflective attention to memory representations generally involves a fronto-parietal attentional control network. The present study aimed to test this idea by manipulating how a particular short-term memory (STM) representation is accessed, that is, based on its input sensory modality or semantic category, during functional magnetic resonance imaging (fMRI). Human participants performed a novel variant of the retro-cue paradigm, in which they were presented with both auditory and visual non-verbal stimuli followed by Modality, Semantic, or Uninformative retro-cues. Modality and, to a lesser extent, Semantic retro-cues facilitated response time relative to Uninformative retro-cues. The univariate and multivariate pattern analyses (MVPAs) of fMRI time-series revealed three key findings. First, the posterior parietal cortex (PPC), including portions of the intraparietal sulcus (IPS) and ventral angular gyrus (AG), had activation patterns that spatially overlapped for both modality-based and semantic-based reflective attention. Second, considering both the univariate and multivariate analyses, Semantic retro-cues were associated with a left-lateralized fronto-parietal network. Finally, the experimental design enabled us to examine how dividing attention cross-modally within STM modulates the brain regions involved in reflective attention. This analysis revealed that univariate activation within bilateral portions of the PPC increased when participants simultaneously attended both auditory and visual memory representations. Therefore, prefrontal and parietal regions are flexibly recruited during reflective attention, depending on the representational feature used to selectively access STM representations.
25
Chen L, Liao HI. Microsaccadic Eye Movements but not Pupillary Dilation Response Characterizes the Crossmodal Freezing Effect. Cereb Cortex Commun 2020; 1:tgaa072. [PMID: 34296132 PMCID: PMC8153075 DOI: 10.1093/texcom/tgaa072]
Abstract
In typical spatial orienting tasks, the perception of crossmodal (e.g., audiovisual) stimuli evokes greater pupil dilation and microsaccade inhibition than unisensory stimuli (e.g., visual). This characteristic pupil dilation and microsaccade inhibition have been observed in response to "salient" events/stimuli. Although the "saliency" account is appealing in the spatial domain, whether it holds in the temporal context remains largely unknown. Here, on a brief temporal scale (within 1 s) and under the working mechanism of involuntary temporal attention, we investigated how eye-metric characteristics reflect the temporal dynamics of perceptual organization, with and without multisensory integration. We adopted the crossmodal freezing paradigm using classical Ternus apparent motion. Results showed that synchronous beeps biased the perceptual report toward group motion and triggered prolonged sound-induced oculomotor inhibition (OMI), whereas the sound-induced OMI was not obvious in a crossmodal task-free scenario (visual localization without audiovisual integration). A general pupil dilation response was observed in the presence of sounds in both the visual Ternus motion categorization and visual localization tasks. This study provides the first empirical account of crossmodal integration captured through microsaccades on a brief temporal scale; OMI, but not the pupillary dilation response, characterizes task-specific audiovisual integration (shown by the crossmodal freezing effect).
Affiliation(s)
- Lihan Chen: Department of Brain and Cognitive Sciences, School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100871, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Hsin-I Liao: NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0198, Japan

26
Wu T, Spagna A, Chen C, Schulz KP, Hof PR, Fan J. Supramodal Mechanisms of the Cognitive Control Network in Uncertainty Processing. Cereb Cortex 2020; 30:6336-6349. [PMID: 32734281 DOI: 10.1093/cercor/bhaa189]
Abstract
Information processing under conditions of uncertainty requires the involvement of cognitive control. Despite behavioral evidence for the supramodal function (i.e., independence of sensory modality) of cognitive control, the underlying neural mechanism needs to be tested directly. This study used functional magnetic resonance imaging together with visual and auditory perceptual decision-making tasks to examine brain activation as a function of uncertainty in the two stimulus modalities. The results revealed a monotonic increase in activation in the cortical regions of the cognitive control network (CCN) as a function of uncertainty in both the visual and auditory modalities. The intrinsic connectivity between the CCN and sensory regions was similar for the visual and auditory modalities. Furthermore, multivariate patterns of activation in the CCN predicted the level of uncertainty within and across stimulus modalities. These findings suggest that the CCN implements cognitive control by processing uncertainty as abstract information, independent of stimulus modality.
Affiliation(s)
- Tingting Wu
- Department of Psychology, Queens College, The City University of New York, Queens, NY 11367, USA
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, New York, NY 10025, USA
- Chao Chen
- Departments of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
- Kurt P Schulz
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Patrick R Hof
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Jin Fan
- Department of Psychology, Queens College, The City University of New York, Queens, NY 11367, USA
27
Salomão N, Rabelo K, Basílio-de-Oliveira C, Basílio-de-Oliveira R, Geraldo L, Lima F, dos Santos F, Nuovo G, Oliveira ERA, Paes M. Fatal Dengue Cases Reveal Brain Injury and Viral Replication in Brain-Resident Cells Associated with the Local Production of Pro-Inflammatory Mediators. Viruses 2020; 12:E603. [PMID: 32486462 PMCID: PMC7354550 DOI: 10.3390/v12060603] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 01/17/2020] [Revised: 04/06/2020] [Accepted: 04/16/2020] [Indexed: 12/14/2022]
Abstract
Dengue is an arboviral disease caused by dengue virus (DENV), which is transmitted to humans by Aedes aegypti mosquitoes. Infection by DENV most commonly results in a mild flu-like illness; however, the disease has been increasingly associated with neurological symptomatology. This association motivates further investigation of the impact of DENV infection on the host's central nervous system. Here, we analyzed brain samples of three fatal dengue cases that occurred in 2002 during an outbreak in Rio de Janeiro, Brazil. Brain tissues of these cases were marked by histopathological alterations, such as degenerated neurons, demyelination, hemorrhage, edema, and increased numbers of astrocytes and microglial cells. Samples were also characterized by lymphocytic infiltrates mainly composed of CD8 T cells. DENV replication was evidenced in neurons, microglia and endothelial cells through immunohistochemistry and in situ hybridization techniques. Pro-inflammatory cytokines, such as TNF-α and IFN-γ, were detected in microglia, while endothelial cells were marked by the expression of RANTES/CCL5. Cytoplasmic HMGB1 and the production of nitric oxide were also found in neurons and microglial cells. This work highlights the possible participation of several local pro-inflammatory mediators in the establishment of dengue neuropathogenesis.
Affiliation(s)
- Natália Salomão
- Interdisciplinary Medical Research Laboratory, Oswaldo Cruz Foundation, 21040-900 Rio de Janeiro, Brazil
- Kíssila Rabelo
- Ultrastructure and Tissue Biology Laboratory, Rio de Janeiro State University, 20551-030 Rio de Janeiro, Brazil
- Carlos Basílio-de-Oliveira
- Pathological Anatomy, Gaffrée Guinle University Hospital, Federal University of the State of Rio de Janeiro, 20270-004 Rio de Janeiro, Brazil
- Rodrigo Basílio-de-Oliveira
- Pathological Anatomy, Gaffrée Guinle University Hospital, Federal University of the State of Rio de Janeiro, 20270-004 Rio de Janeiro, Brazil
- Luiz Geraldo
- Glial Cell Biology Laboratory, Institute of Biomedical Sciences, Federal University of Rio de Janeiro, 21941-590 Rio de Janeiro, Brazil
- Flávia Lima
- Glial Cell Biology Laboratory, Institute of Biomedical Sciences, Federal University of Rio de Janeiro, 21941-590 Rio de Janeiro, Brazil
- Flávia dos Santos
- Viral Immunology Laboratory, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, 21040-900 Rio de Janeiro, Brazil
- Gerard Nuovo
- Ohio State University Comprehensive Cancer Center, Ohio State University Foundation, Columbus, OH 43210, USA
- Phylogeny Medical Laboratory, Ohio State University Foundation, Columbus, OH 43214, USA
- Edson R. A. Oliveira
- Department of Microbiology and Immunology, University of Illinois at Chicago, Chicago, IL 60612, USA
- Marciano Paes
- Interdisciplinary Medical Research Laboratory, Oswaldo Cruz Foundation, 21040-900 Rio de Janeiro, Brazil
28
Deng Y, Choi I, Shinn-Cunningham B. Topographic specificity of alpha power during auditory spatial attention. Neuroimage 2020; 207:116360. [PMID: 31760150 PMCID: PMC9883080 DOI: 10.1016/j.neuroimage.2019.116360] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Received: 06/13/2019] [Revised: 10/06/2019] [Accepted: 11/13/2019] [Indexed: 01/31/2023]
Abstract
Visual and somatosensory spatial attention both induce parietal alpha (8-14 Hz) oscillations whose topographical distribution depends on the direction of spatial attentional focus. In the auditory domain, contrasts of parietal alpha power for leftward and rightward attention reveal qualitatively similar lateralization; however, it is not clear whether alpha lateralization changes monotonically with the direction of auditory attention as it does for visual spatial attention. In addition, most previous studies of alpha oscillation did not consider individual differences in alpha frequency, but simply analyzed power in a fixed spectral band. Here, we recorded electroencephalography in human subjects while they directed attention to one of five azimuthal locations. After a cue indicating the direction of an upcoming target sequence of spoken syllables (yet before the target began), alpha power changed in a task-specific manner. Individual peak alpha frequencies differed consistently between central electrodes and parieto-occipital electrodes, suggesting multiple neural generators of task-related alpha. Parieto-occipital alpha increased over the hemisphere ipsilateral to attentional focus compared to the contralateral hemisphere, and changed systematically as the direction of attention shifted from far left to far right. These results, showing that parietal alpha lateralization changes smoothly with the direction of auditory attention as in visual spatial attention, provide further support for the growing evidence that the frontoparietal attention network is supramodal.
Affiliation(s)
- Yuqi Deng
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Barbara Shinn-Cunningham
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA. Corresponding author: Baker Hall 254G, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, USA
29
Deng Y, Choi I, Shinn-Cunningham B, Baumgartner R. Impoverished auditory cues limit engagement of brain networks controlling spatial selective attention. Neuroimage 2019; 202:116151. [PMID: 31493531 DOI: 10.1016/j.neuroimage.2019.116151] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Received: 04/22/2019] [Revised: 08/02/2019] [Accepted: 08/31/2019] [Indexed: 12/30/2022]
Abstract
Spatial selective attention enables listeners to process a signal of interest in natural settings. However, most past studies on auditory spatial attention used impoverished spatial cues: presenting competing sounds to different ears, using only interaural differences in time (ITDs) and/or intensity (IIDs), or using non-individualized head-related transfer functions (HRTFs). Here we tested the hypothesis that impoverished spatial cues impair spatial auditory attention by only weakly engaging relevant cortical networks. Eighteen normal-hearing listeners reported the content of one of two competing syllable streams simulated at roughly +30° and -30° azimuth. The competing streams consisted of syllables from two different-sex talkers. Spatialization was based on natural spatial cues (individualized HRTFs), individualized IIDs, or generic ITDs. We measured behavioral performance as well as electroencephalographic markers of selective attention. Behaviorally, subjects recalled target streams most accurately with natural cues. Neurally, spatial attention significantly modulated early evoked sensory response magnitudes only for natural cues, not in conditions using only ITDs or IIDs. Consistent with this, parietal oscillatory power in the alpha band (8-14 Hz; associated with filtering out distracting events from unattended directions) showed significantly less attentional modulation with isolated spatial cues than with natural cues. Our findings support the hypothesis that spatial selective attention networks are only partially engaged by impoverished spatial auditory cues. These results not only suggest that studies using unnatural spatial cues underestimate the neural effects of spatial auditory attention, they also illustrate the importance of preserving natural spatial cues in assistive listening devices to support robust attentional control.
Affiliation(s)
- Yuqi Deng
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Inyong Choi
- Communication Sciences & Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Barbara Shinn-Cunningham
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Robert Baumgartner
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
30
Hanenberg C, Getzmann S, Lewald J. Transcranial direct current stimulation of posterior temporal cortex modulates electrophysiological correlates of auditory selective spatial attention in posterior parietal cortex. Neuropsychologia 2019; 131:160-170. [PMID: 31145907 DOI: 10.1016/j.neuropsychologia.2019.05.023] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Received: 08/24/2018] [Revised: 05/03/2019] [Accepted: 05/25/2019] [Indexed: 01/12/2023]
Abstract
Speech perception in "cocktail-party" situations, in which a sound source of interest has to be extracted out of multiple irrelevant sounds, poses a remarkable challenge to the human auditory system. Studies on structural and electrophysiological correlates of auditory selective spatial attention revealed critical roles of the posterior temporal cortex and the N2 event-related potential (ERP) component in the underlying processes. Here, we explored effects of transcranial direct current stimulation (tDCS) to posterior temporal cortex on neurophysiological correlates of auditory selective spatial attention, with a specific focus on the N2. In a single-blind, sham-controlled crossover design with baseline and follow-up measurements, monopolar anodal and cathodal tDCS was applied for 16 min to the right posterior superior temporal cortex. Two age groups of human subjects, a younger (n = 20; age 18-30 yrs) and an older group (n = 19; age 66-77 yrs), completed an auditory free-field multiple-speakers localization task while ERPs were recorded. The ERP data showed an offline effect of anodal, but not cathodal, tDCS immediately after DC offset for targets contralateral, but not ipsilateral, to the hemisphere of tDCS, without differences between groups. This effect mainly consisted of a substantial increase of the N2 amplitude by 0.9 μV (SE 0.4 μV; d = 0.40) compared with sham tDCS. At the same point in time, cortical source localization revealed a reduction of activity in ipsilateral (right) posterior parietal cortex. Also, localization error was improved after anodal, but not cathodal, tDCS. Given that both the N2 and the posterior parietal cortex are involved in processes of auditory selective spatial attention, these results suggest that anodal tDCS specifically enhanced inhibitory attentional brain processes underlying focusing on a target sound source, possibly by improved suppression of irrelevant distractors.
Affiliation(s)
- Christina Hanenberg
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
- Jörg Lewald
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany
31
Schmidt SL, Simões EDN, Novais Carvalho AL. Association Between Auditory and Visual Continuous Performance Tests in Students With ADHD. J Atten Disord 2019; 23:635-640. [PMID: 27864429 DOI: 10.1177/1087054716679263] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Indexed: 11/16/2022]
Abstract
OBJECTIVE Continuous Performance Tests (CPTs) are known to measure inattention and impulsivity in students with ADHD. Many CPTs utilize a visual format. It is accepted that auditory tasks more closely reflect the attentional demands of the classroom. Thus, the association between deficits found by auditory and visual CPTs needs to be studied. We hypothesized that impulsivity would be dependent on sensory modality and that inattention would be a unitary cross-modal construct. METHOD Forty-four students with ADHD performed two CPTs (visual and auditory). We analyzed correlations between the variables examined by the two tasks. RESULTS There were strong correlations between variables measuring inattention. Correlations between auditory and visual measures of impulsivity were weak. CONCLUSION Inattention is partially independent of modality. In contrast, response inhibition is modality-specific. Although ADHD is defined regardless of modality, hyperactive students may exhibit deficits in the auditory modality but not in the visual modality, or vice versa.
Affiliation(s)
- Sergio Luís Schmidt
- Federal University of the State of Rio de Janeiro, Brazil; Federal University of Juiz de Fora, Brazil; State University of Rio de Janeiro, Brazil
32
Rajan A, Meyyappan S, Walker H, Henry Samuel IB, Hu Z, Ding M. Neural mechanisms of internal distraction suppression in visual attention. Cortex 2019; 117:77-88. [PMID: 30933692 DOI: 10.1016/j.cortex.2019.02.026] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Received: 07/14/2018] [Revised: 01/12/2019] [Accepted: 02/26/2019] [Indexed: 11/24/2022]
Abstract
When performing a demanding cognitive task, internal distraction in the form of task-irrelevant thoughts and mind wandering can shift our attention away from the task, negatively affecting task performance. Behaviorally, individuals with higher executive function indexed by higher working memory capacity (WMC) exhibit less mind wandering during cognitive tasks, but the underlying neural mechanisms are unknown. To address this problem, we recorded functional magnetic resonance imaging (fMRI) data from subjects performing a cued visual attention task, and assessed their WMC in a separate experiment. Applying machine learning and time-series analysis techniques, we showed that (1) higher WMC individuals experienced lower internal distraction through stronger suppression of posterior cingulate cortex (PCC) activity, (2) higher WMC individuals had better neural representations of attended information as evidenced by higher multivoxel decoding accuracy of cue-related activities in the dorsal attention network (DAN), (3) the positive relationship between WMC and DAN decoding accuracy was mediated by suppression of PCC activity, (4) the dorsal anterior cingulate (dACC) was a source of top-down signals that regulate PCC activity as evidenced by the negative association between Granger-causal influence dACC→PCC and PCC activity levels, and (5) higher WMC individuals exhibited stronger dACC→PCC Granger-causal influence. These results shed light on the neural mechanisms underlying the executive suppression of internal distraction in tasks requiring externally oriented attention and provide an explanation of the individual differences in such suppression.
Affiliation(s)
- Abhijit Rajan
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Sreenivasan Meyyappan
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Harrison Walker
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Immanuel Babu Henry Samuel
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Zhenhong Hu
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Mingzhou Ding
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
33
Rogers CS, Payne L, Maharjan S, Wingfield A, Sekuler R. Older adults show impaired modulation of attentional alpha oscillations: Evidence from dichotic listening. Psychol Aging 2019; 33:246-258. [PMID: 29658746 DOI: 10.1037/pag0000238] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Indexed: 12/30/2022]
Abstract
Auditory attention is critical for selectively listening to speech from a single talker in a multitalker environment (e.g., Cherry, 1953). Listening in such situations is notoriously more difficult and more poorly encoded to long-term memory in older than in young adults (Tun, O'Kane, & Wingfield, 2002). Recent work by Payne, Rogers, Wingfield, and Sekuler (2017) in young adults demonstrated a neural correlate of auditory attention in the directed dichotic listening task (DDLT), where listeners attend to one ear while ignoring the other. Measured using electroencephalography, differences in alpha band power (8-14 Hz) between left and right hemisphere parietal regions mark the direction to which auditory attention is focused. Little prior research has been conducted on alpha power modulations in older adults, particularly with regard to auditory attention directed toward speech stimuli. In the current study, an older adult sample was administered the DDLT and delayed recognition procedures used by Payne et al. (2017). Compared to young adults, older adults showed reduced selective attention in the DDLT, evidenced by a higher rate of intrusions from the unattended ear. Moreover, older adults did not exhibit attention-related alpha modulation evidenced by young adults, nor did their event-related potentials (ERPs) to recognition probes differentiate between attended or unattended probes. Older adults' delayed recognition did not reveal a pattern of suppression of unattended items evidenced by young adults. These results serve as evidence for an age-related decline in selective auditory attention, potentially mediated by age-related decline in the ability to modulate alpha oscillations.
Affiliation(s)
- Chad S Rogers
- Volen National Center for Complex Systems, Brandeis University
- Lisa Payne
- Volen National Center for Complex Systems, Brandeis University
- Sujala Maharjan
- Volen National Center for Complex Systems, Brandeis University
- Robert Sekuler
- Volen National Center for Complex Systems, Brandeis University
34
Lewald J, Schlüter MC, Getzmann S. Cortical processing of location changes in a “cocktail-party” situation: Spatial oddball effects on electrophysiological correlates of auditory selective attention. Hear Res 2018; 365:49-61. [DOI: 10.1016/j.heares.2018.04.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Received: 02/14/2018] [Revised: 04/12/2018] [Accepted: 04/25/2018] [Indexed: 11/24/2022]
35
Pokhoday M, Scheepers C, Shtyrov Y, Myachykov A. Motor (but not auditory) attention affects syntactic choice. PLoS One 2018; 13:e0195547. [PMID: 29659592 PMCID: PMC5902030 DOI: 10.1371/journal.pone.0195547] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Received: 09/14/2017] [Accepted: 03/23/2018] [Indexed: 12/02/2022]
Abstract
Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker’s attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker’s syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.
Affiliation(s)
- Mikhail Pokhoday
- Centre for Cognition and Decision Making, National Research University Higher School of Economics, Russian Federation
- Christoph Scheepers
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Yury Shtyrov
- Centre for Cognition and Decision Making, National Research University Higher School of Economics, Russian Federation
- Centre of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Denmark
- Andriy Myachykov
- Centre for Cognition and Decision Making, National Research University Higher School of Economics, Russian Federation
- Department of Psychology, Northumbria University, Newcastle-upon-Tyne, United Kingdom
|
36
|
Zheng Y, Wu C, Li J, Li R, Peng H, She S, Ning Y, Li L. Schizophrenia alters intra-network functional connectivity in the caudate for detecting speech under informational speech masking conditions. BMC Psychiatry 2018; 18:90. [PMID: 29618332 PMCID: PMC5885301 DOI: 10.1186/s12888-018-1675-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/10/2017] [Accepted: 03/26/2018] [Indexed: 01/17/2023] Open
Abstract
BACKGROUND Speech recognition under noisy "cocktail-party" environments involves multiple perceptual/cognitive processes, including target detection, selective attention, irrelevant signal inhibition, sensory/working memory, and speech production. Compared to healthy listeners, people with schizophrenia are more vulnerable to masking stimuli and perform worse in speech recognition under speech-on-speech masking conditions. Although the schizophrenia-related speech-recognition impairment under "cocktail-party" conditions is associated with deficits of various perceptual/cognitive processes, it is crucial to know whether the brain substrates critically underlying speech detection against informational speech masking are impaired in people with schizophrenia. METHODS Using functional magnetic resonance imaging (fMRI), this study investigated differences between people with schizophrenia (n = 19, mean age = 33 ± 10 years) and their matched healthy controls (n = 15, mean age = 30 ± 9 years) in intra-network functional connectivity (FC) specifically associated with target-speech detection under speech-on-speech-masking conditions. RESULTS The target-speech detection performance under the speech-on-speech-masking condition in participants with schizophrenia was significantly worse than that in matched healthy participants (healthy controls). Moreover, in healthy controls, but not participants with schizophrenia, the strength of intra-network FC within the bilateral caudate was positively correlated with the speech-detection performance under the speech-masking conditions. Compared to controls, patients showed altered spatial activity pattern and decreased intra-network FC in the caudate.
CONCLUSIONS In people with schizophrenia, the declined speech-detection performance under speech-on-speech masking conditions is associated with reduced intra-caudate functional connectivity, which normally contributes to detecting target speech against speech masking via its functions of suppressing masking-speech signals.
Affiliation(s)
- Yingjun Zheng
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China
- Chao Wu
- Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Juanhua Li
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China
- Ruikeng Li
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China
- Hongjun Peng
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China
- Shenglin She
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China
- Yuping Ning
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception (Ministry of Education), Peking University, 5 Yiheyuan Road, Beijing, 100080, People's Republic of China; Beijing Institute for Brain Disorder, Capital Medical University, Beijing, China
37
Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions. Ear Hear 2018; 37 Suppl 1:101S-10S. [PMID: 27355759 DOI: 10.1097/aud.0000000000000300] [Citation(s) in RCA: 97] [Impact Index Per Article: 13.9] [Indexed: 11/25/2022]
Abstract
This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.
38
Bareham CA, Georgieva SD, Kamke MR, Lloyd D, Bekinschtein TA, Mattingley JB. Role of the right inferior parietal cortex in auditory selective attention: An rTMS study. Cortex 2017; 99:30-38. [PMID: 29127879 DOI: 10.1016/j.cortex.2017.10.003] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Received: 01/09/2017] [Revised: 07/28/2017] [Accepted: 10/07/2017] [Indexed: 10/18/2022]
Abstract
Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention.
Affiliation(s)
- Corinne A Bareham
- Queensland Brain Institute, The University of Queensland, Australia; Department of Clinical Neurosciences, The University of Cambridge, United Kingdom.
| | | | - Marc R Kamke
- Queensland Brain Institute, The University of Queensland, Australia
| | - David Lloyd
- Queensland Brain Institute, The University of Queensland, Australia
| | - Tristan A Bekinschtein
- Department of Psychology, The University of Cambridge, United Kingdom; Behavioural and Clinical Neurosciences Institute, University of Cambridge, United Kingdom
| | - Jason B Mattingley
- Queensland Brain Institute, The University of Queensland, Australia; School of Psychology, The University of Queensland, Australia
|
39
|
The Right Temporoparietal Junction Supports Speech Tracking During Selective Listening: Evidence from Concurrent EEG-fMRI. J Neurosci 2017; 37:11505-11516. [PMID: 29061698 DOI: 10.1523/jneurosci.1007-17.2017] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2017] [Revised: 08/28/2017] [Accepted: 09/05/2017] [Indexed: 11/21/2022] Open
Abstract
Listening selectively to one out of several competing speakers in a "cocktail party" situation is a highly demanding task. It relies on a widespread cortical network, including auditory sensory, but also frontal and parietal brain regions involved in controlling auditory attention. Previous work has shown that, during selective listening, ongoing neural activity in auditory sensory areas is dominated by the attended speech stream, whereas competing input is suppressed. The relationship between these attentional modulations in the sensory tracking of the attended speech stream and frontoparietal activity during selective listening is, however, not understood. We studied this question in young, healthy human participants (both sexes) using concurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech streams had to be attended selectively. An EEG-based speech envelope reconstruction method was applied to assess the strength of the cortical tracking of the to-be-attended and the to-be-ignored stream during selective listening. Our results show that individual speech envelope reconstruction accuracies obtained for the to-be-attended speech stream were positively correlated with the amplitude of sustained BOLD responses in the right temporoparietal junction, a core region of the ventral attention network. This brain region further showed task-related functional connectivity to secondary auditory cortex and regions of the frontoparietal attention network, including the intraparietal sulcus and the inferior frontal gyrus. This suggests that the right temporoparietal junction is involved in controlling attention during selective listening, allowing for a better cortical tracking of the attended speech stream. SIGNIFICANCE STATEMENT: Listening selectively to one out of several simultaneously talking speakers in a "cocktail party" situation is a highly demanding task.
It activates a widespread network of auditory sensory and hierarchically higher frontoparietal brain regions. However, how these different processing levels interact during selective listening is not understood. Here, we investigated this question using fMRI and concurrently acquired scalp EEG. We found that activation levels in the right temporoparietal junction correlate with the sensory representation of a selectively attended speech stream. In addition, this region showed significant functional connectivity to both auditory sensory and other frontoparietal brain areas during selective listening. This suggests that the right temporoparietal junction contributes to controlling selective auditory attention in "cocktail party" situations.
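The EEG-based speech envelope reconstruction mentioned in this abstract is commonly implemented as a regularized linear backward model: time-lagged EEG is regressed onto the speech envelope, and reconstruction accuracy is the correlation between the reconstructed and true envelopes. The following is a generic NumPy sketch of that approach, not the authors' pipeline; the lag window and ridge penalty are illustrative choices.

```python
import numpy as np

def reconstruct_envelope(eeg, envelope, lags=16, alpha=1.0):
    """Backward (decoding) model: reconstruct the speech envelope from
    time-lagged multichannel EEG using ridge regression.

    eeg: array (n_samples, n_channels); envelope: array (n_samples,).
    Returns (reconstruction, Pearson r between reconstruction and truth).
    """
    n, c = eeg.shape
    # EEG responds after the stimulus, so use EEG at t .. t+lags-1
    # to predict the envelope at time t (zero-padded at the end).
    X = np.zeros((n, c * lags))
    for lag in range(lags):
        X[:n - lag, lag * c:(lag + 1) * c] = eeg[lag:]
    # Ridge solution: w = (X'X + alpha*I)^(-1) X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(c * lags), X.T @ envelope)
    recon = X @ w
    # Reconstruction accuracy: correlation with the true envelope
    r = np.corrcoef(recon, envelope)[0, 1]
    return recon, r
```

Comparing the accuracy obtained for the to-be-attended versus the to-be-ignored stream yields the per-participant tracking measure that the study correlates with BOLD responses.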
|
40
|
Shinn-Cunningham B. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds. J Speech Lang Hear Res 2017; 60:2976-2988. [PMID: 29049598 PMCID: PMC5945067 DOI: 10.1044/2017_jslhr-h-17-0080] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 06/23/2017] [Accepted: 07/05/2017] [Indexed: 05/28/2023]
Abstract
PURPOSE: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. METHOD: The results from neuroscience and psychoacoustics are reviewed. RESULTS: In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." CONCLUSIONS: How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. PRESENTATION VIDEO: http://cred.pubs.asha.org/article.aspx?articleid=2601617.
Affiliation(s)
- Barbara Shinn-Cunningham
- Center for Research in Sensory Communication and Emerging Neural Technology, Boston University, MA
|
41
|
Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. [PMID: 28827357 DOI: 10.1073/pnas.1707522114] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
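The cross-cue classification logic described above (train a classifier on multivoxel patterns evoked by one cue type, test it on patterns evoked by the other) can be illustrated with a minimal sketch. A nearest-centroid classifier stands in here for whatever decoder the study actually used; the data shapes and class labels are illustrative assumptions.

```python
import numpy as np

def cross_cue_accuracy(X_train, y_train, X_test, y_test):
    """Cross-cue MVPA: fit a nearest-centroid classifier on patterns from
    one cue type (e.g., ITD) and test it on patterns from the other
    (e.g., ILD). Accuracy reliably above chance suggests a representation
    of location that is shared across the two cues.

    X_*: arrays (n_patterns, n_voxels); y_*: integer class labels.
    """
    classes = np.unique(y_train)
    # One mean pattern (centroid) per class, from the training cue only
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    # Assign each held-out test pattern to the nearest class centroid
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=-1)
    pred = classes[np.argmin(dists, axis=1)]
    return float((pred == y_test).mean())
```

Chance level would be estimated in practice by permuting the test labels and repeating the procedure.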
|
42
|
Sensory-Biased and Multiple-Demand Processing in Human Lateral Frontal Cortex. J Neurosci 2017; 37:8755-8766. [PMID: 28821668 DOI: 10.1523/jneurosci.0660-17.2017] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Revised: 07/27/2017] [Accepted: 08/01/2017] [Indexed: 11/21/2022] Open
Abstract
The functionality of much of human lateral frontal cortex (LFC) has been characterized as "multiple demand" (MD) as these regions appear to support a broad range of cognitive tasks. In contrast to this domain-general account, recent evidence indicates that portions of LFC are consistently selective for sensory modality. Michalka et al. (2015) reported two bilateral regions that are biased for visual attention, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), interleaved with two bilateral regions that are biased for auditory attention, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). In the present study, we use fMRI to examine both the multiple-demand and sensory-bias hypotheses within caudal portions of human LFC (both men and women participated). Using visual and auditory 2-back tasks, we replicate the finding of two bilateral visual-biased and two bilateral auditory-biased LFC regions, corresponding to sPCS and iPCS and to tgPCS and cIFS, and demonstrate high within-subject reliability of these regions over time and across tasks. In addition, we assess MD responsiveness using BOLD signal recruitment and multi-task activation indices. In both, we find that the two visual-biased regions, sPCS and iPCS, exhibit stronger MD responsiveness than do the auditory-biased LFC regions, tgPCS and cIFS; however, neither reaches the degree of MD responsiveness exhibited by dorsal anterior cingulate/presupplemental motor area or by anterior insula. These results reconcile two competing views of LFC by demonstrating the coexistence of sensory specialization and MD functionality, especially in visual-biased LFC structures. SIGNIFICANCE STATEMENT: Lateral frontal cortex (LFC) is known to play a number of critical roles in supporting human cognition; however, the functional organization of LFC remains controversial.
The "multiple demand" (MD) hypothesis suggests that LFC regions provide domain-general support for cognition. Recent evidence challenges the MD view by demonstrating that a preference for sensory modality, vision or audition, defines four discrete LFC regions. Here, the sensory-biased LFC results are reproduced using a new task, and MD responsiveness of these regions is tested. The two visual-biased regions exhibit MD behavior, whereas the auditory-biased regions have no more than weak MD responses. These findings help to reconcile two competing views of LFC functional organization.
|
43
|
Shinn-Cunningham B, Best V, Lee AKC. Auditory Object Formation and Selection. Springer Handbook of Auditory Research 2017. [DOI: 10.1007/978-3-319-51662-2_2] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
|
44
|
Payne L, Rogers CS, Wingfield A, Sekuler R. A right-ear bias of auditory selective attention is evident in alpha oscillations. Psychophysiology 2016; 54:528-535. [PMID: 28039860 DOI: 10.1111/psyp.12815] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2015] [Accepted: 09/13/2016] [Indexed: 11/27/2022]
Abstract
Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening.
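The interhemispheric difference in alpha (8-14 Hz) power used above to track the lateralization of auditory attention is often summarized as a single index. Below is a generic sketch, not the authors' analysis: band limits, the FFT-based power estimate, and the (right - left)/(right + left) normalization are standard but assumed choices.

```python
import numpy as np

def alpha_lateralization(left_chans, right_chans, fs, band=(8.0, 14.0)):
    """Alpha-band power lateralization index from EEG.

    left_chans, right_chans: arrays (n_channels, n_samples) over the two
    hemispheres; fs: sampling rate in Hz. Returns
    (P_right - P_left) / (P_right + P_left), so positive values indicate
    greater right-hemisphere alpha power.
    """
    def band_power(x):
        freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x, axis=-1)) ** 2
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return psd[:, in_band].sum(axis=-1).mean()  # mean over channels
    p_left = band_power(left_chans)
    p_right = band_power(right_chans)
    return (p_right - p_left) / (p_right + p_left)
```

Comparing this index between attend-left and attend-right blocks gives the kind of attention-dependent alpha modulation the study reports.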
Affiliation(s)
- Lisa Payne
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
| | - Chad S Rogers
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
| | - Arthur Wingfield
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
| | - Robert Sekuler
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
|
45
|
Grossberg S. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Netw 2016; 87:38-95. [PMID: 28088645 DOI: 10.1016/j.neunet.2016.11.003] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Revised: 10/21/2016] [Accepted: 11/20/2016] [Indexed: 10/20/2022]
Abstract
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA; Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA 02215, USA.
|
46
|
Spatial and non-spatial aspects of visual attention: Interactive cognitive mechanisms and neural underpinnings. Neuropsychologia 2016; 92:9-19. [DOI: 10.1016/j.neuropsychologia.2016.05.021] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2015] [Revised: 04/07/2016] [Accepted: 05/19/2016] [Indexed: 11/16/2022]
|
47
|
Puschmann S, Huster RJ, Thiel CM. Mapping the spatiotemporal dynamics of processing task-relevant and task-irrelevant sound feature changes using concurrent EEG-fMRI. Hum Brain Mapp 2016; 37:3400-16. [PMID: 27280466 PMCID: PMC6867321 DOI: 10.1002/hbm.23248] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2015] [Revised: 04/01/2016] [Accepted: 04/24/2016] [Indexed: 11/09/2022] Open
Abstract
The cortical processing of changes in auditory input involves auditory sensory regions as well as different frontoparietal brain networks. The spatiotemporal dynamics of the activation spread across these networks have, however, not been investigated in detail so far. We here approached this issue using concurrent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), providing us with simultaneous information on both the spatial and temporal patterns of change-related activity. We applied an auditory stimulus categorization task with switching categorization rules, allowing us to analyze change-related responses as a function of the changing sound feature (pitch or duration) and the task relevance of the change. Our data show the successive progression of change-related activity from regions involved in early change detection to the ventral and dorsal attention networks, and finally the central executive network. While early change detection was found to recruit feature-specific networks involving auditory sensory but also frontal and parietal brain regions, the later spread of activity across the frontoparietal attention and executive networks was largely independent of the changing sound feature, suggesting the existence of a general feature-independent processing pathway of change-related information. Task relevance did not modulate early auditory sensory processing, but was mainly found to affect processing in frontal brain regions. Hum Brain Mapp 37:3400-3416, 2016. © 2016 Wiley Periodicals, Inc.
Affiliation(s)
- Sebastian Puschmann
- Biological Psychology Lab, Department of Psychology, Cluster of Excellence “Hearing4all,” European Medical School, Carl von Ossietzky University, Oldenburg, Germany
| | - René J. Huster
- Department of Psychology, University of Oslo, Oslo, Norway
- The Mind Research Network, Albuquerque, New Mexico, USA
| | - Christiane M Thiel
- Biological Psychology Lab, Department of Psychology, Cluster of Excellence “Hearing4all,” European Medical School, Carl von Ossietzky University, Oldenburg, Germany
- Research Center Neurosensory Science, Carl von Ossietzky University, Oldenburg, Germany
|
48
|
Braga RM, Hellyer PJ, Wise RJS, Leech R. Auditory and visual connectivity gradients in frontoparietal cortex. Hum Brain Mapp 2016; 38:255-270. [PMID: 27571304 PMCID: PMC5215394 DOI: 10.1002/hbm.23358] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2015] [Revised: 08/09/2016] [Accepted: 08/15/2016] [Indexed: 11/06/2022] Open
Abstract
A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 Wiley Periodicals, Inc.
Affiliation(s)
- Rodrigo M Braga
- Center for Brain Science, Harvard University, Cambridge, Massachusetts; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Charlestown, Massachusetts; The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
| | - Peter J Hellyer
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom; Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
| | - Richard J S Wise
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
| | - Robert Leech
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
|
49
|
Braga RM, Fu RZ, Seemungal BM, Wise RJS, Leech R. Eye Movements during Auditory Attention Predict Individual Differences in Dorsal Attention Network Activity. Front Hum Neurosci 2016; 10:164. [PMID: 27242465 PMCID: PMC4860869 DOI: 10.3389/fnhum.2016.00164] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2015] [Accepted: 04/01/2016] [Indexed: 11/13/2022] Open
Abstract
The neural mechanisms supporting auditory attention are not fully understood. A dorsal frontoparietal network of brain regions is thought to mediate the spatial orienting of attention across all sensory modalities. Key parts of this network, the frontal eye fields (FEF) and the superior parietal lobes (SPL), contain retinotopic maps and elicit saccades when stimulated. This suggests that their recruitment during auditory attention might reflect crossmodal oculomotor processes; however this has not been confirmed experimentally. Here we investigate whether task-evoked eye movements during an auditory task can predict the magnitude of activity within the dorsal frontoparietal network. A spatial and non-spatial listening task was used with on-line eye-tracking and functional magnetic resonance imaging (fMRI). No visual stimuli or cues were used. The auditory task elicited systematic eye movements, with saccade rate and gaze position predicting attentional engagement and the cued sound location, respectively. Activity associated with these separate aspects of evoked eye-movements dissociated between the SPL and FEF. However these observed eye movements could not account for all the activation in the frontoparietal network. Our results suggest that the recruitment of the SPL and FEF during attentive listening reflects, at least partly, overt crossmodal oculomotor processes during non-visual attention. Further work is needed to establish whether the network’s remaining contribution to auditory attention is through covert crossmodal processes, or is directly involved in the manipulation of auditory information.
Affiliation(s)
- Rodrigo M Braga
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK; Center for Brain Science, Harvard University, Cambridge, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| | - Richard Z Fu
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
| | - Barry M Seemungal
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
| | - Richard J S Wise
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
| | - Robert Leech
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
|
50
|
Ceravolo L, Frühholz S, Grandjean D. Proximal vocal threat recruits the right voice-sensitive auditory cortex. Soc Cogn Affect Neurosci 2016; 11:793-802. [PMID: 26746180 DOI: 10.1093/scan/nsw004] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 01/04/2016] [Indexed: 11/14/2022] Open
Abstract
The accurate estimation of the proximity of threat is important for biological survival and to assess relevant events of everyday life. We addressed the question of whether proximal as compared with distal vocal threat would lead to a perceptual advantage for the perceiver. Accordingly, we sought to highlight the neural mechanisms underlying the perception of proximal vs distal threatening vocal signals by the use of functional magnetic resonance imaging. Although we found that the inferior parietal and superior temporal cortex of human listeners generally decoded the spatial proximity of auditory vocalizations, activity in the right voice-sensitive auditory cortex was specifically enhanced for proximal aggressive relative to distal aggressive voices as compared with neutral voices. Our results shed new light on the processing of imminent danger signaled by proximal vocal threat and show the crucial involvement of the right mid voice-sensitive auditory cortex in such processing.
Affiliation(s)
- Leonardo Ceravolo
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, CH-1202 Geneva, Switzerland
| | - Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, CH-1202 Geneva, Switzerland; Department of Psychology, University of Zurich, 8050 Zurich, Switzerland
| | - Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, Swiss Center for Affective Sciences, University of Geneva, CH-1202 Geneva, Switzerland
|