1. Ogawa A. Time-varying measures of cerebral network centrality correlate with visual saliency during movie watching. Brain Behav 2021; 11:e2334. PMID: 34435748; PMCID: PMC8442596; DOI: 10.1002/brb3.2334.
Abstract
The extensive development of graph-theoretic analysis for functional connectivity has revealed the multifaceted characteristics of brain networks. Network centralities identify the principal functional regions, individual differences, and hub structure in brain networks. Neuroimaging studies using movie-watching have investigated brain function under naturalistic stimuli. Visual saliency is a promising measure for revealing cognition and emotions driven by naturalistic stimuli. This study investigated whether visual saliency in movies was associated with network centrality. It examined eigenvector centrality (EC), a measure of a region's influence in the brain network, and, for comparison, the participation coefficient (PC), which reflects the hub structure of the brain. Static and time-varying EC and PC were analyzed with a parcel-based technique. While EC was correlated with brain activity in parcels in the visual and auditory areas during movie-watching, it was correlated only with parcels in the visual areas during the retinotopy task. In addition, high PC was consistently observed in parcels of the putative hub both during the tasks and in the resting-state condition. Time-varying EC in the parietal parcels and time-varying PC in the primary sensory parcels significantly correlated with visual saliency in the movies. These results suggest that time-varying centralities in brain networks are distinctively associated with perceptual processing and subsequent higher processing of visual saliency.
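As an illustration of the two centrality measures named in this abstract, the sketch below computes eigenvector centrality with networkx and the participation coefficient by hand on a toy graph; the graph, module labels, and all parameter values are hypothetical and are not taken from the study.

```python
# Minimal sketch (not the study's pipeline): eigenvector centrality (EC) and
# participation coefficient (PC) on a toy functional-connectivity graph.
import networkx as nx

# Hypothetical thresholded connectivity graph: nodes are parcels, edges are
# supra-threshold functional connections.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)])

# Hypothetical module assignment (e.g., from a community-detection step).
modules = {0: "visual", 1: "visual", 2: "visual",
           3: "parietal", 4: "parietal", 5: "parietal"}

# EC: a node is central if it is connected to other central nodes.
ec = nx.eigenvector_centrality(G, max_iter=1000)

# PC = 1 - sum over modules of (within-module degree / total degree)^2.
# PC near 0 -> edges stay within one module; PC near 1 -> edges spread evenly.
pc = {}
for node in G:
    k = G.degree(node)
    per_module = {}
    for nbr in G.neighbors(node):
        m = modules[nbr]
        per_module[m] = per_module.get(m, 0) + 1
    pc[node] = 1.0 - sum((km / k) ** 2 for km in per_module.values())

for node in G:
    print(f"parcel {node}: EC={ec[node]:.3f}, PC={pc[node]:.3f}")
```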
Affiliation(s)
- Akitoshi Ogawa
- Faculty of Medicine, Juntendo University, Bunkyo-ku, Tokyo, Japan; Brain Science Institute, Tamagawa University, Machida, Tokyo, Japan
2. Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. PMID: 33718874; PMCID: PMC7941256; DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
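The activation likelihood estimation (ALE) approach used in this meta-analysis builds per-experiment "modeled activation" maps by smoothing reported foci with Gaussian kernels and then combines them voxel-wise as a probabilistic union. The sketch below illustrates that core formula on a one-dimensional toy grid; the foci, kernel width, and grid are invented for illustration, and a real analysis would use a dedicated tool such as GingerALE or NiMARE in 3D standard space.

```python
# Illustrative sketch of the core ALE computation on a 1D toy grid
# (real analyses work in 3D MNI space with sample-size-dependent kernels).
import numpy as np

grid = np.arange(0, 100, 1.0)            # toy "brain" coordinates
sigma = 5.0                              # hypothetical kernel width (mm)

# Reported activation foci, one list per experiment (hypothetical values).
experiments = [[30.0, 32.0], [31.0, 70.0], [29.0]]

def modeled_activation(foci, grid, sigma):
    """Per-experiment map: at each voxel, take the maximum Gaussian value
    over that experiment's foci (probability that a focus lies there)."""
    kernels = [np.exp(-0.5 * ((grid - f) / sigma) ** 2) for f in foci]
    return np.max(kernels, axis=0)

ma_maps = np.array([modeled_activation(f, grid, sigma) for f in experiments])

# ALE value = probability that at least one experiment activates the voxel:
# 1 - product over experiments of (1 - MA).
ale = 1.0 - np.prod(1.0 - ma_maps, axis=0)

print("peak ALE at coordinate", grid[np.argmax(ale)])
```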
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
3. Movies and narratives as naturalistic stimuli in neuroimaging. Neuroimage 2020; 224:117445. PMID: 33059053; PMCID: PMC7805386; DOI: 10.1016/j.neuroimage.2020.117445.
Abstract
Using movies and narratives as naturalistic stimuli in human neuroimaging studies has yielded significant advances in the understanding of cognitive and emotional functions. The relevant literature was reviewed, with emphasis on how the use of naturalistic stimuli has helped advance scientific understanding of human memory, attention, language, emotions, and social cognition in ways that would have been difficult otherwise. These advances include discovering a cortical hierarchy of temporal receptive windows, which supports the processing of dynamic information that accumulates over several time scales, such as immediate reactions vs. slowly emerging patterns in social interactions. Naturalistic stimuli have also helped elucidate how the hippocampus supports the segmentation and memorization of events in day-to-day life, and have afforded insights into the attentional brain mechanisms underlying our ability to adopt specific perspectives during natural viewing. Further, neuroimaging studies with naturalistic stimuli have revealed the role of the default-mode network in narrative processing and in social cognition. Finally, by robustly eliciting genuine emotions, these stimuli have helped elucidate the brain basis of both basic and social emotions, which appear to manifest as highly overlapping yet distinguishable patterns of brain activity.
4. Functional Imaging of Visuospatial Attention in Complex and Naturalistic Conditions. Curr Top Behav Neurosci 2020. PMID: 30547430; DOI: 10.1007/7854_2018_73.
Abstract
One of the ultimate goals of cognitive neuroscience is to understand how the brain works in the real world. Functional imaging with naturalistic stimuli provides us with the opportunity to study the brain in situations similar to everyday life. This includes the processing of complex stimuli that can trigger many types of signals related both to the physical characteristics of the external input and to the internal knowledge that we have about natural objects and environments. In this chapter, I will first outline different types of stimuli that have been used in naturalistic imaging studies. These include static pictures, short video clips, full-length movies, and virtual reality, each with specific advantages and disadvantages. Next, I will turn to the main issue of visual-spatial orienting in naturalistic conditions and its neural substrates. I will discuss different classes of internal signals, related to objects, scene structure, and long-term memory. All of these, together with external signals about stimulus salience, have been found to modulate the activity and the connectivity of the frontoparietal attention networks. I will conclude by pointing out some promising future directions for functional imaging with naturalistic stimuli. Although this field of research is still in its early days, I believe it will play a major role in bridging the gap between standard laboratory paradigms and mechanisms of brain functioning in the real world.
5. Dittrich S, Noesselt T. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments. Front Psychol 2018; 9:368. PMID: 29618999; PMCID: PMC5871701; DOI: 10.3389/fpsyg.2018.00368.
Abstract
Predicting motion is essential for many everyday activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions, a moving white-noise sound (congruent or incongruent with the visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at one of two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during the detection task. In contrast, for the more complex extrapolation task, group-mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired at the other. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate audiovisual motion prediction and need to be considered in future research.
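The split-half check mentioned in this abstract can be sketched as follows: each participant's trials are divided into two halves, a per-participant auditory-benefit score is computed in each half, and the across-half correlation indicates whether individual differences are stable rather than noise. All data and variable names below are simulated placeholders, not the authors' measurements or their exact procedure.

```python
# Hypothetical sketch of a split-half consistency check for individual
# differences in auditory benefit (not the authors' exact procedure).
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_trials = 24, 80

# Simulated per-trial timing errors (s) for audiovisual vs. visual-only trials.
true_benefit = rng.normal(0.05, 0.05, n_subjects)          # stable trait
err_av = rng.normal(0.30, 0.05, (n_subjects, n_trials)) - true_benefit[:, None]
err_v = rng.normal(0.30, 0.05, (n_subjects, n_trials))

def benefit(av, v, idx):
    """Auditory benefit = reduction of mean timing error on selected trials."""
    return v[:, idx].mean(axis=1) - av[:, idx].mean(axis=1)

halves = rng.permutation(n_trials)
first, second = halves[: n_trials // 2], halves[n_trials // 2 :]

b1 = benefit(err_av, err_v, first)
b2 = benefit(err_av, err_v, second)

# High correlation across halves -> individual differences are reliable.
r = np.corrcoef(b1, b2)[0, 1]
print(f"split-half consistency of auditory benefit: r = {r:.2f}")
```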
Affiliation(s)
- Sandra Dittrich
- Department of Biological Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Tömme Noesselt
- Department of Biological Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
6. Bordier C, Macaluso E. Time-resolved detection of stimulus/task-related networks, via clustering of transient intersubject synchronization. Hum Brain Mapp 2015; 36:3404-25. PMID: 26095530; PMCID: PMC5008218; DOI: 10.1002/hbm.22852.
Abstract
Several methods are available for identifying functional networks of brain areas from functional magnetic resonance imaging (fMRI) time-series. These typically assume a fixed relationship between the signals of the areas belonging to the same network during the entire time-series (e.g., positive correlation throughout), or require a priori information about when this relationship may change (task-dependent changes of connectivity). We present a fully data-driven method that identifies transient network configurations that are triggered by the external input and that, therefore, include only regions involved in stimulus/task processing. Intersubject synchronization with short sliding time-windows was used to identify if/when any area showed stimulus/task-related responses. Next, a first clustering step grouped together areas that became engaged concurrently and repetitively during the time-series (stimulus/task-related networks). Finally, for each network, a second clustering step grouped together all the time-windows with the same BOLD signal. The final output consists of a set of network configurations that show stimulus/task-related activity at specific time-points during the fMRI time-series. We label these configurations "brain modes" (bModes). The method was validated using simulated datasets and a real fMRI experiment with multiple tasks and conditions. Future applications include the investigation of brain functions using complex and naturalistic stimuli.
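The time-resolved approach described here can be illustrated with a toy sketch: compute leave-one-out intersubject correlation within short sliding windows for each region, then cluster regions by their window-wise synchronization profiles (a simplified stand-in for the paper's two clustering steps). The data, window length, and cluster count below are arbitrary placeholders, not the authors' settings.

```python
# Toy sketch of sliding-window intersubject synchronization followed by
# clustering of regions (illustrative only; parameters are arbitrary).
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
n_subj, n_regions, n_tp, win = 10, 20, 300, 15

# Simulated BOLD time series: subjects x regions x timepoints.
bold = rng.standard_normal((n_subj, n_regions, n_tp))
# Inject a shared stimulus-driven signal into the first 5 regions, mid-run.
shared = rng.standard_normal(n_tp)
bold[:, :5, 100:200] += shared[100:200]

def windowed_isc(data, win):
    """Mean leave-one-out intersubject correlation per region and window."""
    n_subj, n_regions, n_tp = data.shape
    n_win = n_tp - win + 1
    isc = np.zeros((n_regions, n_win))
    for w in range(n_win):
        seg = data[:, :, w:w + win]
        for r in range(n_regions):
            x = seg[:, r, :]
            corr = [np.corrcoef(x[s], x[np.arange(n_subj) != s].mean(0))[0, 1]
                    for s in range(n_subj)]
            isc[r, w] = np.mean(corr)
    return isc

isc = windowed_isc(bold, win)

# Group regions with similar synchronization time courses.
_, labels = kmeans2(isc, 3, minit="++", seed=2)
print("cluster label per region:", labels)
```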
Affiliation(s)
- Cécile Bordier
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy
- Emiliano Macaluso
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy
7. Ogawa A, Macaluso E. Orienting of visuo-spatial attention in complex 3D space: Search and detection. Hum Brain Mapp 2015; 36:2231-47. PMID: 25691253; PMCID: PMC4682464; DOI: 10.1002/hbm.22767.
Abstract
The ability to detect changes in the environment is necessary for appropriate interactions with the external world. Changes in the background are more likely to go unnoticed than changes in the foreground, possibly because attention prioritizes the processing of foreground/near stimuli. Here, we investigated the detectability of foreground and background changes within natural scenes and the influence of stereoscopic depth cues on this. Using a flicker paradigm, we alternated a pair of images that were either exactly the same or differed by a single element (i.e., a color change of one object in the scene). The participants were asked to find the change, which occurred either in a foreground or a background object, while viewing the stimuli either with binocular and monocular cues (bmC) or with monocular cues only (mC). The behavioral results showed faster and more accurate detection of foreground changes and overall better performance in the bmC than in the mC condition. The imaging results highlighted the involvement of fronto-parietal attention-controlling networks during active search and target detection. These attention networks did not show any differential effect as a function of the presence/absence of binocular cues or the detection of foreground/background changes. By contrast, the lateral occipital cortex showed greater activation for detections in the foreground compared to the background, while area V3A showed a main effect of bmC vs. mC, specifically during search. These findings indicate that visual search with binocular cues does not impose any specific requirement on attention-controlling fronto-parietal networks, while the enhanced detection of front/near objects in the bmC condition reflects bottom-up sensory processes in visual cortex.
Affiliation(s)
- Akitoshi Ogawa
- Neuroimaging Laboratory, Santa Lucia Foundation, Via Ardeatina 306, Rome, Italy
8. Fang J, Hu X, Han J, Jiang X, Zhu D, Guo L, Liu T. Data-driven analysis of functional brain interactions during free listening to music and speech. Brain Imaging Behav 2014; 9:162-77. PMID: 24526569; DOI: 10.1007/s11682-014-9293-0.
Abstract
Natural stimulus functional magnetic resonance imaging (N-fMRI), such as fMRI acquired while participants watch video streams or listen to audio streams, has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, dynamic, and naturalistic stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses measured with N-fMRI by quantifying the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions while multiple subjects listen to music and speech of multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems, including attention, memory, auditory/language, emotion, and action networks, are among the brain systems most involved in differentiating classical music, pop music, and speech. Our study provides an alternative approach to investigating the human brain's mechanisms for comprehending complex natural music and speech.
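The classification-guided identification of functional interactions described in this abstract can be sketched as follows: pairwise correlations between network nodes serve as features, a cross-validated linear classifier separates the listening conditions, and high-weight features are then inspected as candidate discriminative interactions. The data below are synthetic and the node set is arbitrary; this is only a conceptual stand-in, not the authors' pipeline based on networks with structural correspondence.

```python
# Hypothetical sketch: classify listening conditions from pairwise functional
# interactions (correlations), then inspect which interactions discriminate.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_runs, n_nodes, n_tp = 40, 30, 200           # synthetic sizes

X, y = [], []
for run in range(n_runs):
    label = run % 2                           # 0 = music, 1 = speech (toy labels)
    ts = rng.standard_normal((n_nodes, n_tp))
    if label == 1:                            # inject a condition-specific coupling
        ts[1] = 0.6 * ts[0] + 0.8 * ts[1]
    fc = np.corrcoef(ts)                      # node-by-node functional interactions
    iu = np.triu_indices(n_nodes, k=1)
    X.append(fc[iu])                          # vectorize upper triangle as features
    y.append(label)
X, y = np.array(X), np.array(y)

clf = LinearSVC(C=1.0, max_iter=10000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")

# Fit once on all data to see which interactions carry the most weight.
clf.fit(X, y)
pairs = np.array(iu).T
top = np.argsort(np.abs(clf.coef_[0]))[::-1][:5]
print("most discriminative node pairs:",
      [tuple(int(v) for v in pairs[i]) for i in top])
```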
Affiliation(s)
- Jun Fang
- School of Automation, Northwestern Polytechnical University, Xi'an, China