1. Huang H, Doebler P, Mertins B. Short-time AOIs-based representative scanpath identification and scanpath aggregation. Behav Res Methods 2024;56:6051-6066. PMID: 38195788; PMCID: PMC11335822; DOI: 10.3758/s13428-023-02332-w
Abstract
A new algorithm to identify a representative scanpath in a sample is presented and evaluated with eye-tracking data. According to Gestalt theory, each fixation of a scanpath should fall on an area of interest (AOI) of the stimulus. As with existing methods, we first identify the AOIs and then extract the fixations of the representative scanpath from them. In contrast to existing methods, we propose a new concept of short-time AOIs and extract the fixations of the representative scanpath from these short-time AOIs. Our method outperforms existing methods on two publicly available datasets. It can be applied to arbitrary visual stimuli, including static stimuli without natural segmentation as well as dynamic stimuli, and it also offers a solution to issues caused by the selection of scanpath similarity.
Affiliation(s)
- He Huang: Department of Statistics, TU Dortmund University, 44227 Dortmund, Germany
- Philipp Doebler: Department of Statistics, TU Dortmund University, 44227 Dortmund, Germany
- Barbara Mertins: Department of Statistics, TU Dortmund University, 44227 Dortmund, Germany; Department of Cultural Studies, TU Dortmund University, 44227 Dortmund, Germany
2. Hou W, Cheng R, Zhao Z, Liao H, Li J. Atypical and variable attention patterns reveal reduced contextual priors in children with autism spectrum disorder. Autism Res 2024;17:1572-1585. PMID: 38975627; DOI: 10.1002/aur.3194
Abstract
Accumulating evidence suggests that individuals with autism spectrum disorder (ASD) show impairments in using contextual priors to predict others' actions and make intention inference. Yet less is known about whether and how children with ASD acquire contextual priors during action observation and how contextual priors relate to their action prediction and intention inference. To form proper contextual priors, individuals need to observe the social scenes in a reliable manner and focus on socially relevant information. By employing a data-driven scan path method and areas of interest (AOI)-based analysis, the current study investigated how contextual priors would relate to action prediction and intention understanding in 4-to-9-year-old children with ASD (N = 56) and typically developing (TD) children (N = 50) during free viewing of dynamic social scenes with different intentions. Results showed that children with ASD exhibited higher intra-subject variability when scanning social scenes and reduced attention to socially relevant areas. Moreover, children with high-level action prediction and intention understanding showed lower intra-subject variability and increased attention to socially relevant areas. These findings suggest that altered fixation patterns might restrain children with ASD from acquiring proper contextual priors, which has cascading downstream effects on their action prediction and intention understanding.
Affiliation(s)
- Wenwen Hou: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Rong Cheng: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, China
- Zhong Zhao: College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, China
- Haotian Liao: College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, China
- Jing Li: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
3. Kaduk T, Goeke C, Finger H, König P. Webcam eye tracking close to laboratory standards: Comparing a new webcam-based system and the EyeLink 1000. Behav Res Methods 2024;56:5002-5022. PMID: 37821751; PMCID: PMC11289017; DOI: 10.3758/s13428-023-02237-8
Abstract
This paper compares a new webcam-based eye-tracking system, integrated into the Labvanced platform for online experiments, to a "gold standard" lab-based eye tracker (EyeLink 1000 - SR Research). Specifically, we simultaneously recorded data with both eye trackers in five different tasks and analyzed their real-time performance. These tasks were a subset of a standardized test battery for eye trackers: a Large Grid task, Smooth Pursuit eye movements, viewing natural images, and two Head Movement tasks (roll, yaw). The results show that the webcam-based system achieved an overall accuracy of 1.4° and a precision of 1.1° (standard deviation (SD) across subjects), an error about 0.5° larger than that of the EyeLink system. Interestingly, both accuracy (1.3°) and precision (0.9°) were slightly better for centrally presented targets, the region of interest in many psychophysical experiments. Remarkably, the correlation of raw gaze samples between the EyeLink and the webcam-based system was about 90% for the Large Grid task and about 80% for Free View and Smooth Pursuit. Overall, these results put the performance of the webcam-based system roughly on par with mobile eye-tracking devices (Ehinger et al., PeerJ, 7, e7086, 2019; Tonsen et al., 2020) and demonstrate substantial improvement over existing webcam eye-tracking solutions (Papoutsaki et al., 2017).
Affiliation(s)
- Tobiasz Kaduk: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Caspar Goeke: Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Holger Finger: Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Peter König: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
4. Liu XH, Gan L, Zhang ZT, Yu PK, Dai J. Probing the processing of facial expressions in monkeys via time perception and eye tracking. Zool Res 2023;44:882-893. PMID: 37545418; PMCID: PMC10559096; DOI: 10.24272/j.issn.2095-8137.2023.003
Abstract
Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and two behavioral tasks, the temporal discrimination task (TDT) and the face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to negative facial expressions of conspecifics, while showing longer reaction times to negative facial expressions of humans. Monkey faces also reliably induced divergent pupil contraction in response to different expressions, while human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys showed a bias toward emotional expressions only when observing monkey faces. Finally, masking the eye region marginally decreased the viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than to those of humans, shedding new light on inter-species communication through facial expressions between NHPs and humans.
Affiliation(s)
- Xin-He Liu: Shenzhen Technological Research Center for Primate Translational Medicine, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Lu Gan: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Zhi-Ting Zhang: Shenzhen Technological Research Center for Primate Translational Medicine, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Pan-Ke Yu: Shenzhen Technological Research Center for Primate Translational Medicine, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Ji Dai: Shenzhen Technological Research Center for Primate Translational Medicine, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China
5. Russ BE, Koyano KW, Day-Cooney J, Perwez N, Leopold DA. Temporal continuity shapes visual responses of macaque face patch neurons. Neuron 2023;111:903-914.e3. PMID: 36630962; PMCID: PMC10023462; DOI: 10.1016/j.neuron.2022.12.021
Abstract
Macaque inferior temporal cortex neurons respond selectively to complex visual images, with recent work showing that they are also entrained reliably by the evolving content of natural movies. To what extent does temporal continuity itself shape the responses of high-level visual neurons? We addressed this question by measuring how cells in face-selective regions of the macaque visual cortex were affected by the manipulation of a movie's temporal structure. Sampling a 5-min movie at 1 s intervals, we measured neural responses to randomized, brief stimuli of different lengths, ranging from 800 ms dynamic movie snippets to 100 ms static frames. We found that the disruption of temporal continuity strongly altered neural response profiles, particularly in the early response period after stimulus onset. The results suggest that models of visual system function based on discrete and randomized visual presentations may not translate well to the brain's natural modes of operation.
Affiliation(s)
- Brian E Russ: Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA; Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA; Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Psychiatry, New York University at Langone, New York City, NY 10016, USA
- Kenji W Koyano: Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA
- Julian Day-Cooney: Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA
- Neda Perwez: Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA
- David A Leopold: Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA; Neurophysiology Imaging Facility, National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Eye Institute, Bethesda, MD 20814, USA
6. Jing M, Kadooka K, Franchak J, Kirkorian HL. The effect of narrative coherence and visual salience on children's and adults' gaze while watching video. J Exp Child Psychol 2023;226:105562. PMID: 36257254; DOI: 10.1016/j.jecp.2022.105562
Abstract
Low-level visual features (e.g., motion, contrast) predict eye gaze during video viewing. The current study investigated the effect of narrative coherence on the extent to which low-level visual salience predicts eye gaze. Eye movements were recorded as 4-year-olds (n = 20) and adults (n = 20) watched a cohesive versus random sequence of video shots from a 4.5-min full vignette from Sesame Street. Overall, visual salience was a stronger predictor of gaze in adults than in children, especially when viewing a random shot sequence. The impact of narrative coherence on children's gaze was limited to the short period of time surrounding cuts to new video shots. The discussion considers potential direct effects of visual salience as well as incidental effects due to overlap between salient features and semantic content. The findings are also discussed in the context of developing video comprehension.
Affiliation(s)
- Mengguo Jing: Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
- Kellan Kadooka: Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- John Franchak: Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- Heather L Kirkorian: Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
7. Franchak JM, Kadooka K. Age differences in orienting to faces in dynamic scenes depend on face centering, not visual saliency. Infancy 2022;27:1032-1051. PMID: 35932474; DOI: 10.1111/infa.12492
Abstract
The current study investigated how infants (6-24 months), children (2-12 years), and adults differ in how visual cues (visual saliency and centering) guide their attention to faces in videos. We report a secondary analysis of Kadooka and Franchak (2020), in which observers' eye movements were recorded while they viewed television clips containing a variety of faces. For every face on every video frame, we calculated its visual saliency (based on both static and dynamic image features) and how close the face was to the center of the image. Results revealed that participants of every age looked more often at each face when it was more salient compared to less salient. In contrast, centering did not increase the likelihood that infants looked at a given face, but in later childhood and adulthood centering became a stronger cue for face looking. A control analysis determined that the age-related change in centering was specific to face looking; participants of all ages were more likely to look at the center of the image, and this center bias did not change with age. The implications for using videos in educational and diagnostic contexts are discussed.
8. Patel GH, Arkin SC, Ruiz-Betancourt D, DeBaun H, Strauss NE, Bartel LP, Grinband J, Martinez A, Berman RA, Leopold DA, Javitt DC. What you see is what you get: visual scanning failures of naturalistic social scenes in schizophrenia. Psychol Med 2021;51:2923-2932. PMID: 32498743; PMCID: PMC7751380; DOI: 10.1017/s0033291720001646
Abstract
BACKGROUND: Impairments in social cognition contribute significantly to disability in schizophrenia patients (SzP). Perception of facial expressions is critical for social cognition. Intact perception requires an individual to visually scan a complex dynamic social scene for transiently moving facial expressions that may be relevant for understanding the scene. The relationship between visual scanning for these facial expressions and social cognition remains unknown. METHODS: In 39 SzP and 27 healthy controls (HC), we used eye tracking to examine the relationship between performance on The Awareness of Social Inference Test (TASIT), which tests social cognition using naturalistic video clips of social situations, and visual scanning, measuring each individual's scanning relative to the mean of the HC group. We then examined the relationship of visual scanning to the specific visual features (motion, contrast, luminance, faces) within the video clips. RESULTS: TASIT performance was significantly impaired in SzP for trials involving sarcasm (p < 10⁻⁵). Visual scanning was significantly more variable in SzP than in HC (p < 10⁻⁶), and predicted TASIT performance in HC (p = 0.02) but not in SzP (p = 0.91), differing significantly between groups (p = 0.04). During visual scanning, SzP were less likely to be viewing faces (p = 0.0001) and less likely to saccade to facial motion in peripheral vision (p = 0.008). CONCLUSIONS: SzP show highly significant deficits in the use of visual scanning of naturalistic social scenes to inform social cognition. Alterations in visual scanning patterns may originate from impaired processing of facial motion within peripheral vision. Overall, these results highlight the utility of naturalistic stimuli in the study of social cognition deficits in schizophrenia.
Affiliation(s)
- Gaurav H. Patel: Columbia University Medical Center; New York State Psychiatric Institute
- Laura P. Bartel: Columbia University Medical Center; New York State Psychiatric Institute
- Jack Grinband: Columbia University Medical Center; New York State Psychiatric Institute
- Daniel C. Javitt: Columbia University Medical Center; New York State Psychiatric Institute; Nathan Kline Institute
9. Ossmy O, Han D, Kaplan BE, Xu M, Bianco C, Mukamel R, Adolph KE. Children do not distinguish efficient from inefficient actions during observation. Sci Rep 2021;11:18106. PMID: 34518566; PMCID: PMC8438080; DOI: 10.1038/s41598-021-97354-9
Abstract
Observation is a powerful way to learn efficient actions from others. However, the role of observers' own motor skill in assessing the efficiency of others' actions is unknown. Preschoolers are notoriously poor at performing multi-step actions such as grasping the handle of a tool. Preschoolers (N = 22) and adults (N = 22) watched video-recorded actors perform efficient and inefficient tool use. Eye tracking showed that preschoolers and adults looked equally long at the videos, but adults looked longer than children at how actors grasped the tool. Deep learning analyses of participants' eye gaze distinguished efficient from inefficient grasps for adults, but not for children. Moreover, only adults showed differential action-related pupil dilation and neural activity (suppressed oscillation power in the mu frequency) while observing efficient vs. inefficient grasps. Thus, children observe multi-step actions without "seeing" whether the initial step is efficient. Findings suggest that observers' own motor efficiency determines whether they can perceive action efficiency in others.
Affiliation(s)
- Ori Ossmy: Department of Psychology, Center for Neural Science, New York University, 6 Washington Place, Room 403, New York, NY 10003, USA
- Danyang Han: Department of Psychology, Center for Neural Science, New York University, 6 Washington Place, Room 403, New York, NY 10003, USA
- Brianna E Kaplan: Department of Psychology, Center for Neural Science, New York University, 6 Washington Place, Room 403, New York, NY 10003, USA
- Melody Xu: Department of Psychology, Center for Neural Science, New York University, 6 Washington Place, Room 403, New York, NY 10003, USA
- Catherine Bianco: Department of Psychology, Center for Neural Science, New York University, 6 Washington Place, Room 403, New York, NY 10003, USA
- Roy Mukamel: School of Psychological Sciences, Sagol School of Neuroscience, Tel-Aviv University, Tel Aviv, Israel
- Karen E Adolph: Department of Psychology, Center for Neural Science, New York University, 6 Washington Place, Room 403, New York, NY 10003, USA
10. Russ BE, Petkov CI, Kwok SC, Zhu Q, Belin P, Vanduffel W, Hamed SB. Common functional localizers to enhance NHP & cross-species neuroscience imaging research. Neuroimage 2021;237:118203. PMID: 34048898; PMCID: PMC8529529; DOI: 10.1016/j.neuroimage.2021.118203
Abstract
Functional localizers are invaluable: they help define regions of interest, provide cross-study comparisons, and, most importantly, allow for the aggregation and meta-analysis of data across studies and laboratories. To achieve these goals within the non-human primate (NHP) imaging community, there is a pressing need for standardized and validated localizers that can be readily implemented across different groups. The goal of this paper is to provide an overview of the value of localizer protocols to imaging research; we describe a number of commonly used and novel localizers for NHPs, along with keys to implementing them across studies. As has been shown with the aggregation of resting-state imaging data in the original PRIME-DE submissions, we believe the field is ready to apply the same initiative to task-based functional localizers in NHP imaging. By coming together to collect large datasets across research groups, implementing the same functional localizers, and sharing the localizers and data via PRIME-DE, it is now possible to fully test their robustness, selectivity, and specificity. To this end, we reviewed a number of common localizers and created a repository of well-established localizers that are easily accessible and implemented through the PRIME-RE platform.
Affiliation(s)
- Brian E Russ: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, United States; Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York City, NY, United States; Department of Psychiatry, New York University at Langone, New York City, NY, United States
- Christopher I Petkov: Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, United Kingdom
- Sze Chai Kwok: Shanghai Key Laboratory of Brain Functional Genomics, Key Laboratory of Brain Functional Genomics Ministry of Education, Shanghai Key Laboratory of Magnetic Resonance, Affiliated Mental Health Center (ECNU), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Division of Natural and Applied Sciences, Duke Kunshan University, Kunshan, Jiangsu, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Qi Zhu: Cognitive Neuroimaging Unit, INSERM, CEA, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven 3000, Belgium
- Pascal Belin: Institut de Neurosciences de La Timone, Aix-Marseille Université et CNRS, Marseille 13005, France
- Wim Vanduffel: Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA 02144, United States
- Suliann Ben Hamed: Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, Université de Lyon - CNRS, France
11. The application of noninvasive, restraint-free eye-tracking methods for use with nonhuman primates. Behav Res Methods 2021;53:1003-1030. PMID: 32935327; DOI: 10.3758/s13428-020-01465-6
Abstract
Over the past 50 years there has been strong interest in applying eye-tracking techniques to a myriad of questions related to human and nonhuman primate psychological processes. Eye movements and fixations can provide qualitative and quantitative insights into the cognitive processes of nonverbal populations such as nonhuman primates, clarifying the evolutionary, physiological, and representational underpinnings of human cognition. While early attempts at nonhuman primate eye tracking were relatively crude, later, more sophisticated and sensitive techniques required invasive protocols and the use of restraint. In the past decade, technology has advanced to a point where noninvasive eye-tracking techniques, developed for use with human participants, can be applied to nonhuman primates in a restraint-free manner. Here we review the corpus of recent studies (N = 32) that take such an approach. Despite the growing interest in eye-tracking research, there is still little consensus on "best practices," in terms of both deploying test protocols and reporting methods and results. Therefore, we look to advances made in the field of developmental psychology, as well as our own collective experiences using eye trackers with nonhuman primates, to highlight key elements that researchers should consider when designing noninvasive, restraint-free eye-tracking protocols for use with nonhuman primates. Beyond promoting best practices for research protocols, we also outline an ideal approach for reporting such research and highlight future directions for the field.
12.
Abstract
In order to understand ecologically meaningful social behaviors and their neural substrates in humans and other animals, researchers have been using a variety of social stimuli in the laboratory with a goal of extracting specific processes in real-life scenarios. However, certain stimuli may not be sufficiently effective at evoking typical social behaviors and neural responses. Here, we review empirical research employing different types of social stimuli by classifying them into five levels of naturalism. We describe the advantages and limitations while providing selected example studies for each level. We emphasize the important trade-off between experimental control and ecological validity across the five levels of naturalism. Taking advantage of newly emerging tools, such as real-time videos, virtual avatars, and wireless neural sampling techniques, researchers are now more than ever able to adopt social stimuli at a higher level of naturalism to better capture the dynamics and contingency of real-life social interaction.
Affiliation(s)
- Siqi Fan: Department of Psychology, Yale University, New Haven, CT 06520, USA
- Olga Dal Monte: Department of Psychology, Yale University, New Haven, CT 06520, USA; Department of Psychology, University of Turin, Torino, Italy
- Steve W.C. Chang: Department of Psychology, Yale University, New Haven, CT 06520, USA; Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, USA; Kavli Institute for Neuroscience, Yale University School of Medicine, New Haven, CT 06510, USA; Wu Tsai Institute, Yale University, New Haven, CT 06510, USA
13. Exploring How White-Faced Sakis Control Digital Visual Enrichment Systems. Animals (Basel) 2021;11:557. PMID: 33672657; PMCID: PMC7924172; DOI: 10.3390/ani11020557
Abstract
Simple Summary Many zoo-housed primates use visual computer systems for enrichment but little is known about how monkeys would choose to control these devices. Here we investigate what visual enrichment white-faced saki monkeys would trigger and what effect these videos have on their behaviour. To study this, we built an interactive screen device that would trigger visual stimuli and track the sakis’ interactions when using the system. Over several weeks, we found that the sakis would trigger underwater and worm videos significantly more than animal, abstract art and forest videos, and the control condition of no-stimuli. Further, the sakis triggered the animal video significantly less often over all other conditions. Yet, viewing their interactions over time, the sakis’ usage of the device followed a bell curve, suggesting novelty and habituation factors. As such, it is unknown if the stumli or devices novelty and habituation caused the changes in the sakis interactions over time. These results also indicated that the different visual stimuli conditions significantly reduced the sakis’ scratching behaviour with the visual stimuli conditions compared to the control condition. Further, the usage of visual stimuli did not increase the sakis’ looking at and sitting in front of the screen behaviours. These results highlight problems in defining interactivity and screen usage with monkeys and screens from looking behaviours and proximity alone. Abstract Computer-enabled screen systems containing visual elements have long been employed with captive primates for assessing preference, reactions and for husbandry reasons. These screen systems typically play visual enrichment to primates without them choosing to trigger the system and without their consent. Yet, what videos primates, especially monkeys, would prefer to watch of their own volition and how to design computers and methods that allow choice is an open question. 
In this study, we designed and tested, over several weeks, an enrichment system that lets white-faced saki monkeys trigger different visual stimuli in their regular zoo habitat while automatically logging and recording their interactions. Analysing these data, we show that the sakis triggered the underwater and worm videos more often than the forest, abstract art and animal videos, and the no-stimuli control condition. We also note that the sakis used the device significantly less when it played animal videos compared to the other conditions. Yet plotting the data over time revealed a bell-shaped engagement curve, suggesting confounding factors of novelty and habituation. As such, it is unknown whether the stimuli or the device-usage curve caused the changes in the sakis' interactions over time. Observing the sakis' behaviours and working with zoo personnel, we noted that the stimuli conditions significantly decreased the sakis' scratching behaviour. For the research community, this study builds on methods that allow animals to control computers in a zoo environment, highlighting problems in quantifying animal interactions with computer devices.
Collapse
|
14
|
Maylott SE, Paukner A, Ahn YA, Simpson EA. Human and monkey infant attention to dynamic social and nonsocial stimuli. Dev Psychobiol 2020; 62:841-857. [PMID: 32424813 PMCID: PMC7944642 DOI: 10.1002/dev.21979] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Revised: 03/23/2020] [Accepted: 03/31/2020] [Indexed: 12/14/2022]
Abstract
The present study explored behavioral norms for infant social attention in typically developing human and nonhuman primate infants. We examined the normative development of attention to dynamic social and nonsocial stimuli longitudinally in macaques (Macaca mulatta) at 1, 3, and 5 months of age (N = 75) and in humans at 2, 4, 6, 8, and 13 months of age (N = 69) using eye tracking. All infants concurrently viewed two silent videos: one social and one nonsocial. Both macaque and human infants were faster to look to the social than the nonsocial stimulus, and both species oriented to the social stimulus faster with age. Further, macaque infants' social attention increased linearly from 1 to 5 months. In contrast, human infants displayed a nonlinear pattern of social interest, with initially greater attention to the social stimulus, followed by a period of greater interest in the nonsocial stimulus, and then a rise in social interest from 6 to 13 months. Overall, human infants looked longer than macaque infants, suggesting humans have more sustained attention in the first year of life. These findings highlight potential species similarities and differences, and reflect a first step in establishing baseline patterns of early social attention development.
Collapse
Affiliation(s)
- Sarah E Maylott
- Department of Psychology, University of Miami, Coral Gables, FL, USA
| | - Annika Paukner
- Department of Psychology, Nottingham Trent University, Nottingham, UK
| | - Yeojin A Ahn
- Department of Psychology, University of Miami, Coral Gables, FL, USA
| | | |
Collapse
|
15
|
Franchak JM. Visual exploratory behavior and its development. PSYCHOLOGY OF LEARNING AND MOTIVATION 2020. [DOI: 10.1016/bs.plm.2020.07.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
16
|
Avni I, Meiri G, Bar‐Sinai A, Reboh D, Manelis L, Flusser H, Michaelovski A, Menashe I, Dinstein I. Children with autism observe social interactions in an idiosyncratic manner. Autism Res 2019; 13:935-946. [DOI: 10.1002/aur.2234] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2019] [Revised: 09/13/2019] [Accepted: 10/07/2019] [Indexed: 12/28/2022]
Affiliation(s)
- Inbar Avni
- Cognitive and Brain Sciences Department Ben Gurion University of the Negev Be'er Sheva Israel
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
| | - Gal Meiri
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Pre‐school Psychiatry Unit Soroka Medical Center Be'er Sheba Israel
| | - Asif Bar‐Sinai
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Psychology Department Ben Gurion University of the Negev Be'er Sheva Israel
| | - Doron Reboh
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Psychology Department Ben Gurion University of the Negev Be'er Sheva Israel
| | - Liora Manelis
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Psychology Department Ben Gurion University of the Negev Be'er Sheva Israel
| | - Hagit Flusser
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Child Development Institute Soroka Medical Center Be'er Sheva Israel
| | - Analya Michaelovski
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Child Development Institute Soroka Medical Center Be'er Sheva Israel
| | - Idan Menashe
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Public Health Department Ben‐Gurion University Be'er Sheva Israel
| | - Ilan Dinstein
- Cognitive and Brain Sciences Department Ben Gurion University of the Negev Be'er Sheva Israel
- National Autism Research Center of Israel Ben Gurion University of the Negev Be'er Sheva Israel
- Psychology Department Ben Gurion University of the Negev Be'er Sheva Israel
| |
Collapse
|
17
|
The comparative anatomy of frontal eye fields in primates. Cortex 2019; 118:51-64. [DOI: 10.1016/j.cortex.2019.02.023] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Revised: 01/24/2019] [Accepted: 02/22/2019] [Indexed: 12/25/2022]
|
18
|
Nastase SA, Gazzola V, Hasson U, Keysers C. Measuring shared responses across subjects using intersubject correlation. Soc Cogn Affect Neurosci 2019; 14:667-685. [PMID: 31099394 PMCID: PMC6688448 DOI: 10.1093/scan/nsz037] [Citation(s) in RCA: 131] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2019] [Revised: 05/10/2019] [Accepted: 05/13/2019] [Indexed: 12/18/2022] Open
Abstract
Our capacity to jointly represent information about the world underpins our social experience. By leveraging one individual's brain activity to model another's, we can measure shared information across brains-even in dynamic, naturalistic scenarios where an explicit response model may be unobtainable. Introducing experimental manipulations allows us to measure, for example, shared responses between speakers and listeners or between perception and recall. In this tutorial, we develop the logic of intersubject correlation (ISC) analysis and discuss the family of neuroscientific questions that stem from this approach. We also extend this logic to spatially distributed response patterns and functional network estimation. We provide a thorough and accessible treatment of methodological considerations specific to ISC analysis and outline best practices.
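The leave-one-out logic at the core of ISC analysis can be sketched in a few lines of NumPy. The function name and the `(n_subjects, n_timepoints)` data layout below are illustrative assumptions, not the tutorial's own implementation:

```python
import numpy as np

def isc_leave_one_out(data):
    """Leave-one-out intersubject correlation (hypothetical sketch).

    data: array of shape (n_subjects, n_timepoints) holding one
    response time series per subject (e.g., from a single voxel).
    Returns one ISC value per subject: the Pearson correlation
    between that subject's time series and the mean of all others.
    """
    data = np.asarray(data, dtype=float)
    n = data.shape[0]
    iscs = np.empty(n)
    for i in range(n):
        # Average every subject except subject i, then correlate.
        others = np.delete(data, i, axis=0).mean(axis=0)
        iscs[i] = np.corrcoef(data[i], others)[0, 1]
    return iscs
```

With a shared signal plus independent noise, each subject correlates strongly with the average of the others, which is exactly the "shared response" the measure is designed to isolate.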
Collapse
Affiliation(s)
- Samuel A Nastase
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
| | - Valeria Gazzola
- Social Brain Lab, Netherlands Institute for Neuroscience, KNAW, 105BA Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, 1018 WV Amsterdam, The Netherlands
| | - Uri Hasson
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
| | - Christian Keysers
- Social Brain Lab, Netherlands Institute for Neuroscience, KNAW, 105BA Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, 1018 WV Amsterdam, The Netherlands
| |
Collapse
|
19
|
Weinberg-Wolf H, Chang SWC. Differences in how macaques monitor others: Does serotonin play a central role? WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2019; 10:e1494. [PMID: 30775852 PMCID: PMC6570566 DOI: 10.1002/wcs.1494] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2018] [Revised: 01/11/2019] [Accepted: 01/11/2019] [Indexed: 01/22/2023]
Abstract
Primates must balance the need to monitor other conspecifics to gain social information while not losing other resource opportunities. We consolidate evidence across the fields of primatology, psychology, and neuroscience to examine individual, population, and species differences in how primates, particularly macaques, monitor conspecifics. We particularly consider the role of serotonin in mediating social competency via social attention, aggression, and dominance behaviors. Finally, we consider how the evolution of variation in social tolerance, aggression, and social monitoring might be explained by differences in serotonergic function in macaques. This article is categorized under: Economics > Interactive Decision-Making; Psychology > Comparative Psychology; Neuroscience > Behavior; Cognitive Biology > Evolutionary Roots of Cognition.
Collapse
Affiliation(s)
| | - Steve W C Chang
- Department of Psychology, Yale University, New Haven, Connecticut
- Department of Neuroscience, Yale University School of Medicine, New Haven, Connecticut
- Kavli Institute for Neuroscience, Yale University School of Medicine, New Haven, Connecticut
| |
Collapse
|
20
|
Guo K, Li Z, Yan Y, Li W. Viewing heterospecific facial expressions: an eye-tracking study of human and monkey viewers. Exp Brain Res 2019; 237:2045-2059. [PMID: 31165915 PMCID: PMC6647127 DOI: 10.1007/s00221-019-05574-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Accepted: 05/31/2019] [Indexed: 11/03/2022]
Abstract
Common facial expressions of emotion have distinctive patterns of facial muscle movements that are culturally similar among humans, and perceiving these expressions is associated with stereotypical gaze allocation at local facial regions that are characteristic for each expression, such as the eyes in angry faces. It is, however, unclear to what extent this 'universality' view extends to the processing of heterospecific facial expressions, and how a 'social learning' process contributes to heterospecific expression perception. In this eye-tracking study, we examined the face-viewing gaze allocation of human (both dog owners and non-dog owners) and monkey observers while they explored expressive human, chimpanzee, monkey and dog faces (positive, neutral and negative expressions for human and dog faces; neutral and negative expressions for chimpanzee and monkey faces). Human observers showed species- and experience-dependent expression categorization accuracy. Furthermore, both human and monkey observers demonstrated different face-viewing gaze distributions, which were also species dependent. Specifically, humans attended predominantly to the eyes in human faces but to the mouth in animal faces when judging facial expressions. Monkeys' gaze distributions when exploring human and monkey faces were qualitatively different from those when exploring chimpanzee and dog faces. Interestingly, the gaze behaviour of both human and monkey observers was further affected by their prior experience of the viewed species. It seems that facial expression processing is species dependent, and social learning may play a significant role in discriminating even rudimentary types of heterospecific expressions.
Collapse
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, LN6 7TS, UK.
| | - Zhihan Li
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
| | - Yin Yan
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
| | - Wu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
| |
Collapse
|
21
|
Ma Z, Wu J, Zhong SH, Jiang J, Heinen SJ. Human Eye Movements Reveal Video Frame Importance. COMPUTER 2019; 52:48-57. [PMID: 33746238 PMCID: PMC7975628 DOI: 10.1109/mc.2019.2903246] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Human eye movements indicate important spatial information in static images as well as videos. Yet videos contain additional temporal information and convey a storyline. Video summarization is a technique that reduces video size, but maintains the essence of the storyline. Here, the authors explore whether eye movement patterns reflect frame importance during video viewing and facilitate video summarization. Eye movements were recorded while subjects watched videos from the SumMe video summarization dataset. The authors find more gaze consistency for selected than unselected frames. They further introduce a novel multi-stream deep learning model for video summarization that incorporates subjects' eye movement information. Gaze data improved the model's performance over that observed when only the frames' physical attributes were used. The results suggest that eye movement patterns reflect cognitive processing of sequential information that helps select important video frames, and provide an innovative algorithm that uses gaze information in video summarization.
Collapse
Affiliation(s)
- Zheng Ma
- The Smith-Kettlewell Eye Research Institute
| | | | | | | | | |
Collapse
|
22
|
Nummenmaa L, Lahnakoski JM, Glerean E. Sharing the social world via intersubject neural synchronisation. Curr Opin Psychol 2018; 24:7-14. [PMID: 29550395 DOI: 10.1016/j.copsyc.2018.02.021] [Citation(s) in RCA: 55] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2018] [Revised: 02/27/2018] [Accepted: 02/28/2018] [Indexed: 11/30/2022]
Abstract
Sociability and the capacity for shared mental states are hallmarks of the human species, and pursuing shared goals oftentimes requires coordinating both behaviour and mental states. Here we review recent work using indices of intersubject neural synchronisation for measuring the similarity of mental states across individuals. We discuss the methodological advances and limitations of analyses based on intersubject synchrony, and discuss how these kinds of model-free analysis techniques enable the investigation of the brain basis of complex social processes. We argue that similarity of brain activity across individuals can, under certain conditions, be used to index the similarity of their subjective states of consciousness, and thus to investigate the brain basis of mutual understanding and cooperation.
Collapse
Affiliation(s)
- Lauri Nummenmaa
- Turku PET Centre, University of Turku, 20520 Turku, Finland; Department of Psychology, University of Turku, Finland.
| | - Juha M Lahnakoski
- Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, 80804 Munich, Germany
| | - Enrico Glerean
- Turku PET Centre, University of Turku, 20520 Turku, Finland; Department of Neuroscience and Biomedical Engineering, Aalto University, Finland
| |
Collapse
|
23
|
Kano F, Shepherd SV, Hirata S, Call J. Primate social attention: Species differences and effects of individual experience in humans, great apes, and macaques. PLoS One 2018; 13:e0193283. [PMID: 29474416 PMCID: PMC5825077 DOI: 10.1371/journal.pone.0193283] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2017] [Accepted: 02/07/2018] [Indexed: 11/18/2022] Open
Abstract
When viewing social scenes, humans and nonhuman primates focus on particular features, such as the models' eyes, mouth, and action targets. Previous studies reported that such viewing patterns vary significantly across individuals in humans, and also across closely-related primate species. However, the nature of these individual and species differences remains unclear, particularly among nonhuman primates. In large samples of human and nonhuman primates, we examined species differences and the effects of experience on patterns of gaze toward social movies. Experiment 1 examined the species differences across rhesus macaques, nonhuman apes (bonobos, chimpanzees, and orangutans), and humans while they viewed movies of various animals' species-typical behaviors. We found that each species had distinct viewing patterns of the models' faces, eyes, mouths, and action targets. Experiment 2 tested the effect of individuals' experience on chimpanzee and human viewing patterns. We presented movies depicting natural behaviors of chimpanzees to three groups of chimpanzees (individuals from a zoo, a sanctuary, and a research institute) differing in their early social and physical experiences. We also presented the same movies to human adults and children differing in their expertise with chimpanzees (experts vs. novices) or movie-viewing generally (adults vs. preschoolers). Individuals varied within each species in their patterns of gaze toward models' faces, eyes, mouths, and action targets depending on their unique individual experiences. We thus found that the viewing patterns for social stimuli are both individual- and species-specific in these closely-related primates. Such individual/species-specificities are likely related to both individual experience and species-typical temperament, suggesting that primate individuals acquire their unique attentional biases through both ontogeny and evolution. 
Such unique attentional biases may help them learn efficiently about their particular social environments.
Collapse
Affiliation(s)
- Fumihiro Kano
- Kumamoto Sanctuary, Wildlife Research Center, Kyoto University, Kumamoto, Japan
| | | | - Satoshi Hirata
- Kumamoto Sanctuary, Wildlife Research Center, Kyoto University, Kumamoto, Japan
| | - Josep Call
- Department of Developmental and Comparative Psychology, Max-Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
| |
Collapse
|
24
|
Hutson JP, Smith TJ, Magliano JP, Loschky LC. What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2017; 2:46. [PMID: 29214207 PMCID: PMC5698392 DOI: 10.1186/s41235-017-0080-5] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2016] [Accepted: 10/04/2017] [Indexed: 11/23/2022]
Abstract
Film is ubiquitous, but the processes that guide viewers’ attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles’ Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers’ comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. 
The evidence provided by this experimental case study suggests that filmmakers’ belief in their ability to create systematic gaze behavior across viewers is confirmed, but that this does not indicate universally similar comprehension of the film narrative.
Collapse
Affiliation(s)
- John P Hutson
- Department of Psychological Sciences, Kansas State University, 492 Bluemont Hall, 1100 Mid-campus Dr, Manhattan, KS 66506 USA
| | - Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London, Malet St, London, WC1E 7HX UK
| | - Joseph P Magliano
- Department of Psychology, Northern Illinois University, 361 Psychology-Computer Science Building, DeKalb, IL 60115 USA
| | - Lester C Loschky
- Department of Psychological Sciences, Kansas State University, 492 Bluemont Hall, 1100 Mid-campus Dr, Manhattan, KS 66506 USA
| |
Collapse
|
25
|
Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque. J Neurosci 2017; 36:9580-9. [PMID: 27629710 DOI: 10.1523/jneurosci.1152-16.2016] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2016] [Accepted: 07/22/2016] [Indexed: 11/21/2022] Open
Abstract
Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals.
SIGNIFICANCE STATEMENT
Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This "reafferent" motion propagates into the brain as signals that must be interpreted in the context of real object motion. The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion.
Collapse
|
26
|
Segraves MA, Kuo E, Caddigan S, Berthiaume EA, Kording KP. Predicting rhesus monkey eye movements during natural-image search. J Vis 2017; 17:12. [PMID: 28355625 PMCID: PMC5373813 DOI: 10.1167/17.3.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
There are three prominent factors that can predict human visual-search behavior in natural scenes: the distinctiveness of a location (salience), similarity to the target (relevance), and features of the environment that predict where the object might be (context). We do not currently know how well these factors are able to predict macaque visual search, which matters because it is arguably the most popular model for asking how the brain controls eye movements. Here we trained monkeys to perform the pedestrian search task previously used for human subjects. Salience, relevance, and context models were all predictive of monkey eye fixations and jointly about as precise as for humans. We attempted to disrupt the influence of scene context on search by testing the monkeys with an inverted set of the same images. Surprisingly, the monkeys were able to locate the pedestrian at a rate similar to that for upright images. The best predictions of monkey fixations in searching inverted images were obtained by rotating the results of the model predictions for the original image. The fact that the same models can predict human and monkey search behavior suggests that the monkey can be used as a good model for understanding how the human brain enables natural-scene search.
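A common way to combine fixation predictors like salience, relevance, and context is a weighted sum of min-max-normalized priority maps. The sketch below illustrates that general idea under assumed inputs and weights; it is not the specific models evaluated in the study:

```python
import numpy as np

def combine_priority_maps(salience, relevance, context, weights=(1.0, 1.0, 1.0)):
    """Combine three fixation-prediction maps into one priority map.

    Each input is a 2-D array over image locations. Each map is
    min-max normalized to [0, 1] before the weighted sum, so no
    single cue dominates purely through its numeric scale.
    """
    def norm(m):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    w_s, w_r, w_c = weights
    combined = w_s * norm(salience) + w_r * norm(relevance) + w_c * norm(context)
    peak = combined.max()
    return combined / peak if peak > 0 else combined
```

The location of the combined map's peak is then the model's top fixation prediction, and the weights can be fit to maximize agreement with observed fixations.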
Collapse
Affiliation(s)
- Mark A Segraves
- Department of Neurobiology, Weinberg College of Arts and Sciences, Northwestern University, Evanston, IL, USA
| | - Emory Kuo
- Department of Neurobiology, Weinberg College of Arts and Sciences, Northwestern University, Evanston, IL, USA
| | - Sara Caddigan
- Department of Neurobiology, Weinberg College of Arts and Sciences, Northwestern University, Evanston, IL, USA
| | - Emily A Berthiaume
- Department of Neurobiology, Weinberg College of Arts and Sciences, Northwestern University, Evanston, IL, USA
| | - Konrad P Kording
- Departments of Physical Medicine and Rehabilitation and Physiology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| |
Collapse
|
27
|
Wilming N, Kietzmann TC, Jutras M, Xue C, Treue S, Buffalo EA, König P. Differential Contribution of Low- and High-level Image Content to Eye Movements in Monkeys and Humans. Cereb Cortex 2017; 27:279-293. [PMID: 28077512 PMCID: PMC5942390 DOI: 10.1093/cercor/bhw399] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2016] [Accepted: 12/13/2016] [Indexed: 11/25/2022] Open
Abstract
Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution from shared higher-level selection mechanisms; the consistent viewing behavior of monkeys was a subset of the consistent viewing behavior of humans.
Collapse
Affiliation(s)
- Niklas Wilming
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
- Yerkes National Primate Research Center, Atlanta, GA 30329, USA
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Washington National Primate Research Center, Seattle, WA 98195, USA
| | - Tim C Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Medical Research Council, Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
| | - Megan Jutras
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
- Yerkes National Primate Research Center, Atlanta, GA 30329, USA
- Washington National Primate Research Center, Seattle, WA 98195, USA
| | - Cheng Xue
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
| | - Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, Goettingen University, Goettingen, Germany
- Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
| | - Elizabeth A Buffalo
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
- Yerkes National Primate Research Center, Atlanta, GA 30329, USA
- Washington National Primate Research Center, Seattle, WA 98195, USA
| | - Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| |
Collapse
|
28
|
Matsuda YT, Myowa-Yamakoshi M, Hirata S. Familiar face + novel face = familiar face? Representational bias in the perception of morphed faces in chimpanzees. PeerJ 2016; 4:e2304. [PMID: 27602275 PMCID: PMC4991860 DOI: 10.7717/peerj.2304] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2015] [Accepted: 07/07/2016] [Indexed: 11/30/2022] Open
Abstract
Highly social animals possess a well-developed ability to distinguish the faces of familiar from novel conspecifics to induce distinct behaviors for maintaining society. However, the behaviors of animals when they encounter ambiguous faces of familiar yet novel conspecifics, e.g., strangers with faces resembling known individuals, have not been well characterised. Using a morphing technique and preferential-looking paradigm, we address this question via the chimpanzee’s facial–recognition abilities. We presented eight subjects with three types of stimuli: (1) familiar faces, (2) novel faces and (3) intermediate morphed faces that were 50% familiar and 50% novel faces of conspecifics. We found that chimpanzees spent more time looking at novel faces and scanned novel faces more extensively than familiar or intermediate faces. Interestingly, chimpanzees looked at intermediate faces in a manner similar to familiar faces with regards to the fixation duration, fixation count, and saccade length for facial scanning, even though the participant was encountering the intermediate faces for the first time. We excluded the possibility that subjects merely detected and avoided traces of morphing in the intermediate faces. These findings suggest a bias for a feeling-of-familiarity that chimpanzees perceive familiarity with an intermediate face by detecting traces of a known individual, as 50% alternation is sufficient to perceive familiarity.
Affiliation(s)
- Satoshi Hirata
- Wildlife Research Center, Kyoto University, Kyoto, Japan
29
Abstract
Interest has flourished in studying both the spatial and temporal aspects of eye movement behavior. This has sparked the development of a large number of new methods to compare scanpaths. Here, we give a detailed overview of common scanpath comparison measures. Each of these measures was developed to solve a specific problem, but quantifies different aspects of scanpath behavior and requires different data-processing techniques. To understand these differences, we applied each scanpath comparison method to data from an encoding and recognition experiment and compared the methods' ability to reveal scanpath similarities within and between individuals looking at natural scenes. Results are discussed in terms of the unique aspects of scanpath behavior that the different methods quantify. We conclude by making recommendations for choosing an appropriate scanpath comparison measure.
30
Franchak JM, Heeger DJ, Hasson U, Adolph KE. Free Viewing Gaze Behavior in Infants and Adults. Infancy 2016; 21:262-287. [PMID: 27134573] [PMCID: PMC4847438] [DOI: 10.1111/infa.12119]
Abstract
The current study investigated age differences in free viewing gaze behavior. Adults and 6-, 9-, 12-, and 24-month-old infants watched a 60-s Sesame Street video clip while their eye movements were recorded. Adults displayed high inter-subject consistency in eye movements; they tended to fixate the same places at the same time. Infants showed weaker consistency between observers, and inter-subject consistency increased with age. The influence of both bottom-up features (fixating visually salient areas) and top-down features (looking at faces) also increased with age. Moreover, individual differences in fixating bottom-up and top-down features predicted whether infants' eye movements were consistent with those of adults, even when controlling for age. However, this relation was moderated by the number of faces available in the scene, suggesting that the development of adult-like viewing involves learning when to prioritize looking at bottom-up and top-down features.
Affiliation(s)
- John M Franchak
- Department of Psychology, University of California, Riverside
- David J Heeger
- Department of Psychology and Center for Neural Science, New York University
- Uri Hasson
- Department of Psychology and the Neuroscience Institute, Princeton University
- Karen E Adolph
- Department of Psychology and Center for Neural Science, New York University
31
Mühlenbeck C, Liebal K, Pritsch C, Jacobsen T. Differences in the Visual Perception of Symmetric Patterns in Orangutans (Pongo pygmaeus abelii) and Two Human Cultural Groups: A Comparative Eye-Tracking Study. Front Psychol 2016; 7:408. [PMID: 27065184] [PMCID: PMC4811873] [DOI: 10.3389/fpsyg.2016.00408]
Abstract
Symmetric structures are of importance in relation to aesthetic preference. To investigate whether the preference for symmetric patterns is unique to humans, independent of their cultural background, we compared two human populations with distinct cultural backgrounds (Namibian hunter-gatherers and German town dwellers) with one species of non-human great ape (orangutans) in their viewing behavior regarding symmetric and asymmetric patterns at two levels of complexity. In addition, the human participants were asked to give their aesthetic evaluation of a subset of the presented patterns. The results showed that humans of both cultural groups fixated on symmetric patterns for a longer period of time, regardless of the patterns' complexity. In contrast, orangutans did not clearly differentiate between symmetric and asymmetric patterns, but were much faster in processing the presented stimuli and scanned the complete screen, while both human groups rested on the symmetric pattern after a short scanning time. The aesthetic evaluation test revealed that the fixation preference for symmetric patterns did not match the aesthetic evaluation in the Hai//om group, whereas in the German group the aesthetic evaluation was in accordance with the fixation preference in 60 percent of the cases. It can be concluded that humans prefer well-ordered structures in visual processing tasks, most likely because of a positive processing bias for symmetry, which orangutans did not show in this task, and that, in humans, an aesthetic preference does not necessarily accompany a fixation preference.
Affiliation(s)
- Cordelia Mühlenbeck
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Katja Liebal
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Carla Pritsch
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Graduate School "Languages of Emotion," Freie Universität Berlin, Berlin, Germany
- Thomas Jacobsen
- Experimental Psychology Unit, Helmut Schmidt University/University of the Federal Armed Forces Hamburg, Hamburg, Germany
32
Loschky LC, Larson AM, Magliano JP, Smith TJ. What Would Jaws Do? The Tyranny of Film and the Relationship between Gaze and Higher-Level Narrative Film Comprehension. PLoS One 2015; 10:e0142474. [PMID: 26606606] [PMCID: PMC4659561] [DOI: 10.1371/journal.pone.0142474]
Abstract
What is the relationship between film viewers' eye movements and their film comprehension? Typical Hollywood movies induce strong attentional synchrony: most viewers look at the same things at the same time. Thus, we asked whether film viewers' eye movements would differ based on their understanding (the mental model hypothesis) or whether any such differences would be overwhelmed by viewers' attentional synchrony (the tyranny of film hypothesis). To investigate this question, we manipulated the presence/absence of prior film context and measured resulting differences in film comprehension and eye movements. Viewers watched a 12-second James Bond movie clip, ending just as a critical predictive inference should be drawn that Bond's nemesis, "Jaws," would fall from the sky onto a circus tent. The No-context condition saw only the 12-second clip, but the Context condition also saw the preceding 2.5 minutes of the movie before seeing the critical 12-second portion. Importantly, the Context condition viewers were more likely to draw the critical inference and more likely to perceive coherence across the entire six-shot sequence (as shown by event segmentation), indicating greater comprehension. Viewers' eye movements showed strong attentional synchrony in both conditions compared to a chance-level baseline, with only small differences between conditions. Specifically, the Context condition viewers showed slightly, but significantly, greater attentional synchrony and lower cognitive load (as shown by fixation probability) during the critical first circus tent shot. Thus, overall, the results were more consistent with the tyranny of film hypothesis than the mental model hypothesis. These results suggest the need for a theory that encompasses processes from the perception to the comprehension of film.
Affiliation(s)
- Lester C. Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, United States of America
- Adam M. Larson
- Department of Psychology, University of Findlay, Findlay, OH, United States of America
- Joseph P. Magliano
- Department of Psychology, Northern Illinois University, DeKalb, IL, United States of America
- Tim J. Smith
- Department of Psychology, Birkbeck University of London, London, United Kingdom
33
Kretch KS, Adolph KE. Active vision in passive locomotion: real-world free viewing in infants and adults. Dev Sci 2015; 18:736-50. [PMID: 25438618] [PMCID: PMC4447601] [DOI: 10.1111/desc.12251]
Abstract
Visual exploration in infants and adults has been studied using two very different paradigms: free viewing of flat screen displays in desk-mounted eye-tracking studies and real-world visual guidance of action in head-mounted eye-tracking studies. To test whether classic findings from screen-based studies generalize to real-world visual exploration and to compare natural visual exploration in infants and adults, we tested observers in a new paradigm that combines critical aspects of both previous techniques: free viewing during real-world visual exploration. Mothers and their 9-month-old infants wore head-mounted eye trackers while mothers carried their infants in a forward-facing infant carrier through a series of indoor hallways. Demands for visual guidance of action were minimal in mothers and absent for infants, so both engaged in free viewing while moving through the environment. As in screen-based studies, low-level saliency was related to gaze direction during free viewing in the real world. In contrast to screen-based studies, only infants, not adults, were biased to look at people; participants of both ages did not show a classic center bias; and mothers and infants did not display high levels of inter-observer consistency. Results indicate that several aspects of visual exploration of a flat screen display do not generalize to visual exploration in the real world.
34
Hanke M, Halchenko YO. A communication hub for a decentralized collaboration on studying real-life cognition. F1000Res 2015; 4:62. [PMID: 26097689] [PMCID: PMC4457109] [DOI: 10.12688/f1000research.6229.1]
Abstract
Studying the brain’s behavior in situations of real-life complexity is crucial for an understanding of brain function as a whole. However, methodological difficulties and a general lack of public resources are hindering scientific progress in this domain. This channel will serve as a communication hub to collect relevant resources and curate knowledge about working paradigms, available resources, and analysis techniques.
Affiliation(s)
- Michael Hanke
- Department of Psychology, University of Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; INCF Data-sharing taskforce, Karolinska Institute, Stockholm, Sweden
- Yaroslav O Halchenko
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA; INCF Data-sharing taskforce, Karolinska Institute, Stockholm, Sweden
35
Single-unit activity during natural vision: diversity, consistency, and spatial sensitivity among AF face patch neurons. J Neurosci 2015; 35:5537-48. [PMID: 25855170] [DOI: 10.1523/jneurosci.3825-14.2015]
Abstract
Several visual areas within the STS of the macaque brain respond strongly to faces and other biological stimuli. Determining the principles that govern neural responses in this region has proven challenging, due in part to the inherently complex stimulus domain of dynamic biological stimuli that are not captured by an easily parameterized stimulus set. Here we investigated neural responses in one fMRI-defined face patch in the anterior fundus (AF) of the STS while macaques freely view complex videos rich with natural social content. Longitudinal single-unit recordings allowed for the accumulation of each neuron's responses to repeated video presentations across sessions. We found that individual neurons, while diverse in their response patterns, were consistently and deterministically driven by the video content. We used principal component analysis to compute a family of eigenneurons, which summarized 24% of the shared population activity in the first two components. We found that the most prominent component of AF activity reflected an interaction between visible body region and scene layout. Close-up shots of faces elicited the strongest neural responses, whereas far away shots of faces or close-up shots of hindquarters elicited weak or inhibitory responses. Sensitivity to the apparent proximity of faces was also observed in gamma band local field potential. This category-selective sensitivity to spatial scale, together with the known exchange of anatomical projections of this area with regions involved in visuospatial analysis, suggests that the AF face patch may be specialized in aspects of face perception that pertain to the layout of a social scene.
36
Suda Y, Kitazawa S. A model of face selection in viewing video stories. Sci Rep 2015; 5:7666. [PMID: 25597621] [PMCID: PMC4297980] [DOI: 10.1038/srep07666]
Abstract
When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We show the following: (1) the "peak" face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths; (2) gaze behaviours remained unchanged whether or not sound was provided; (3) gaze behaviours were sensitive to time reversal; and (4) nearly 60% of the variance in gaze behaviours was explained by face saliency, defined as a function of a face's size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces and directs our eyes to the most salient face at each moment.
Affiliation(s)
- Yuki Suda
- Department of Neurophysiology, Graduate School of Medicine, Juntendo University, Bunkyo, Tokyo, 113-8421, Japan
- Shigeru Kitazawa
- Department of Neurophysiology, Graduate School of Medicine, Juntendo University, Bunkyo, Tokyo, 113-8421, Japan; Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, Suita, Osaka, 565-0871, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, and Osaka University, Suita, Osaka, 565-0871, Japan
37
Russ BE, Leopold DA. Functional MRI mapping of dynamic visual features during natural viewing in the macaque. Neuroimage 2015; 109:84-94. [PMID: 25579448] [DOI: 10.1016/j.neuroimage.2015.01.012]
Abstract
The ventral visual pathway of the primate brain is specialized to respond to stimuli in certain categories, such as the well-studied face selective patches in the macaque inferotemporal cortex. To what extent does response selectivity determined using brief presentations of isolated stimuli predict activity during the free viewing of a natural, dynamic scene, where features are superimposed in space and time? To approach this question, we obtained fMRI activity from the brains of three macaques viewing extended video clips containing a range of social and nonsocial content and compared the fMRI time courses to a family of feature models derived from the movie content. Starting with more than two dozen feature models extracted from each movie, we created functional maps based on features whose time courses were nearly orthogonal, focusing primarily on faces, motion content, and contrast level. Activity mapping using the face feature model readily yielded functional regions closely resembling face patches obtained using a block design in the same animals. Overall, the motion feature model dominated responses in nearly all visually driven areas, including the face patches as well as ventral visual areas V4, TEO, and TE. Control experiments presenting dynamic movies, whose content was free of animals, demonstrated that biological movement critically contributed to the predominance of motion in fMRI responses. These results highlight the value of natural viewing paradigms for studying the brain's functional organization and also underscore the paramount contribution of magnocellular input to the ventral visual pathway during natural vision.
Affiliation(s)
- Brian E Russ
- Section on Cognitive Neurophysiology and Imaging, Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, United States.
- David A Leopold
- Section on Cognitive Neurophysiology and Imaging, Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, United States; Neurophysiology Imaging Facility, National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, United States
38
Popivanov ID, Jastorff J, Vanduffel W, Vogels R. Tolerance of macaque middle STS body patch neurons to shape-preserving stimulus transformations. J Cogn Neurosci 2014; 27:1001-16. [PMID: 25390202] [DOI: 10.1162/jocn_a_00762]
Abstract
Functional imaging studies in human and nonhuman primates have demonstrated regions in the brain that show category selectivity for faces or (headless) bodies. Recent fMRI-guided single unit studies of the macaque face category-selective regions have increased our understanding of the response properties of single neurons in these face patches. However, much less is known about the response properties of neurons in the fMRI-defined body category-selective regions ("body patches"). Recently, we reported that the majority of single neurons in one fMRI-defined body patch, the mid-STS body patch, responded more strongly to bodies compared with other objects. Here we assessed the tolerance of these neurons' responses and stimulus preference for shape-preserving image transformations. After mapping the receptive field of the single neurons, we found that their stimulus preference showed a high degree of tolerance for changes in the position and size of the stimulus. However, their response strongly depended on the in-plane orientation of a body. The selectivity of most neurons was, to a large degree, preserved when silhouettes were presented instead of the original textured and shaded images, suggesting that mainly shape-based features are driving these neurons. In a human psychophysical study, we showed that the information present in silhouettes is largely sufficient for body versus nonbody categorization. These data suggest that mid-STS body patch neurons respond predominantly to oriented shape features that are prevalent in images of bodies. Their responses can inform position- and retinal size-invariant body categorization and discrimination based on shape.
39
Hanke M, Baumgartner FJ, Ibe P, Kaule FR, Pollmann S, Speck O, Zinke W, Stadler J. A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie. Sci Data 2014; 1:140003. [PMID: 25977761] [PMCID: PMC4322572] [DOI: 10.1038/sdata.2014.3]
Abstract
Here we present a high-resolution functional magnetic resonance imaging (fMRI) dataset: 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film ("Forrest Gump"). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted imaging, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures, from stimulus creation to data analysis. To facilitate replicative and derived works, only free and open-source software was utilized.
Affiliation(s)
- Michael Hanke
- Department of Psychology II, University of Magdeburg, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- INCF Data-sharing taskforce
- Pierre Ibe
- Department of Psychology II, University of Magdeburg, Magdeburg, Germany
- Falko R. Kaule
- Department of Psychology II, University of Magdeburg, Magdeburg, Germany
- Visual Processing Laboratory, Ophthalmic Department, University of Magdeburg, Magdeburg, Germany
- Stefan Pollmann
- Department of Psychology II, University of Magdeburg, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Oliver Speck
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Leibniz Institute for Neurobiology, Magdeburg, Germany
- Department of Biomagnetic Resonance, University of Magdeburg, Magdeburg, Germany
- German Center for Neurodegenerative Disease (DZNE), site Magdeburg, Germany
- Wolf Zinke
- Department of Psychology II, University of Magdeburg, Magdeburg, Germany
- Jörg Stadler
- Leibniz Institute for Neurobiology, Magdeburg, Germany
40
Krishna BS, Ipata AE, Bisley JW, Gottlieb J, Goldberg ME. Extrafoveal preview benefit during free-viewing visual search in the monkey. J Vis 2014; 14:6. [PMID: 24403392] [PMCID: PMC5077276] [DOI: 10.1167/14.1.6]
Abstract
Previous studies have shown that subjects require less time to process a stimulus at the fovea after a saccade if they have viewed the same stimulus in the periphery immediately prior to the saccade. This extrafoveal preview benefit indicates that information about the visual form of an extrafoveally viewed stimulus can be transferred across a saccade. Here, we extend these findings by demonstrating and characterizing a similar extrafoveal preview benefit in monkeys during a free-viewing visual search task. We trained two monkeys to report the orientation of a target among distractors by releasing one of two bars with their hand; monkeys were free to move their eyes during the task. Both monkeys took less time to indicate the orientation of the target after foveating it, when the target lay closer to the fovea during the previous fixation. An extrafoveal preview benefit emerged even if there was more than one intervening saccade between the preview and the target fixation, indicating that information about target identity could be transferred across more than one saccade and could be obtained even if the search target was not the goal of the next saccade. An extrafoveal preview benefit was also found for distractor stimuli. These results aid future physiological investigations of the extrafoveal preview benefit.
Affiliation(s)
- B. Suresh Krishna
- Mahoney-Keck Center for Brain and Behavior Research, New York State Psychiatric Institute, New York, NY, USA
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
- Anna E. Ipata
- Mahoney-Keck Center for Brain and Behavior Research, New York State Psychiatric Institute, New York, NY, USA
- Department of Neuroscience, Kavli Neuroscience Institute, Columbia University College of Physicians and Surgeons, New York, NY, USA
- James W. Bisley
- Mahoney-Keck Center for Brain and Behavior Research, New York State Psychiatric Institute, New York, NY, USA
- Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Jacqueline Gottlieb
- Mahoney-Keck Center for Brain and Behavior Research, New York State Psychiatric Institute, New York, NY, USA
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany
- Michael E. Goldberg
- Mahoney-Keck Center for Brain and Behavior Research, New York State Psychiatric Institute, New York, NY, USA
- Department of Neuroscience, Kavli Neuroscience Institute, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Departments of Neurology, Psychiatry, and Ophthalmology, Columbia University College of Physicians and Surgeons, New York, NY, USA
41
Abstract
Recurrence quantification analysis (RQA) has been successfully used for describing dynamic systems that are too complex to be characterized adequately by standard methods in time series analysis. More recently, RQA has been used for analyzing the coordination of gaze patterns between cooperating individuals. Here, we extend RQA to the characterization of fixation sequences, and we show that the global and local temporal characteristics of fixation sequences can be captured by a small number of RQA measures that have a clear interpretation in this context. We applied RQA to the analysis of a study in which observers looked at different scenes under natural or gaze-contingent viewing conditions, and we found large differences in the RQA measures between the viewing conditions, indicating that RQA is a powerful new tool for the analysis of the temporal patterns of eye movement behavior.
42
Maranesi M, Ugolotti Serventi F, Bruni S, Bimbi M, Fogassi L, Bonini L. Monkey gaze behaviour during action observation and its relationship to mirror neuron activity. Eur J Neurosci 2013; 38:3721-30. [PMID: 24118599] [DOI: 10.1111/ejn.12376]
Abstract
Mirror neurons (MNs) of the monkey ventral premotor cortex (area F5) are a class of cells that match the visual descriptions of others' actions with corresponding motor representations in the observer's brain. Several human studies suggest that one's own motor representations activated during action observation play a role in directing proactive eye movements to the site of the upcoming hand-target interaction. However, there are no data on the possible relationship between gaze behaviour and MN activity. Here we addressed this issue by simultaneously recording eye position and F5 MN activity in two macaques during free observation of a grasping action. More than half of the recorded neurons discharged more strongly when the monkey looked at the action than when it did not, but their firing rate was better predicted by 'when' than by 'how long' the monkey gazed at the location of the upcoming hand-target interaction. Interestingly, the onset of the MN response was linked to the onset of the experimenter's movement, thus making motor representations potentially exploitable to drive eye movements. Furthermore, MNs discharged more strongly and earlier when the gaze was 'proactive' compared with 'reactive', indicating that gaze behaviour influences MN activity. We propose that feedforward, automatic representations of others' actions could guide eye movements that, in turn, would provide the motor system with feedback information that enhances the neural representations of the ongoing action.
Affiliation(s)
- Monica Maranesi
- Italian Institute of Technology (IIT), Brain Center for Social and Motor Cognition (BCSMC), via Volturno 39, 43125, Parma, Italy
43
Millan MJ, Bales KL. Towards improved animal models for evaluating social cognition and its disruption in schizophrenia: the CNTRICS initiative. Neurosci Biobehav Rev 2013; 37:2166-80. [PMID: 24090822] [DOI: 10.1016/j.neubiorev.2013.09.012]
Abstract
Social cognition refers to processes used to monitor and interpret social signals from others, to decipher their state of mind, emotional status and intentions, and select appropriate social behaviour. Social cognition is sophisticated in humans, being embedded with verbal language and enacted in a complex cultural environment. Its disruption characterises the entire course of schizophrenia and is correlated with poor functional outcome. Further, deficits in social cognition are related to impairment in other cognitive domains, positive symptoms (paranoia and delusions) and negative symptoms (social withdrawal and reduced motivation). In light of the significance and inadequate management of social cognition deficits, there is a need for translatable experimental procedures for their study, and identification of effective pharmacotherapy. No single paradigm captures the multi-dimensional nature of social cognition, and procedures for assessing ability to infer mental states are not well-developed for experimental therapeutic settings. Accordingly, a recent CNTRICS meeting prioritised procedures for measuring a specific construct: "acquisition and recognition of affective (emotional) states", coupled to individual recognition. Two complementary paradigms for refinement were identified: social recognition/preference in rodents, and visual tracking of social scenes in non-human primates (NHPs). Social recognition is disrupted in genetic, developmental or pharmacological disease models for schizophrenia, and performance in both procedures is improved by the neuropeptide oxytocin. The present article surveys a broad range of procedures for studying social cognition in rodents and NHPs, discusses advantages and drawbacks, and focuses on development of social recognition/preference and gaze-following paradigms for improved study of social cognition deficits in schizophrenia and their potential treatment.
Collapse
Affiliation(s)
- Mark J Millan
- Unit for Research and Discovery in Neuroscience, IDR Servier, 125 Chemin de Ronde, 78290 Croissy-sur-Seine, France.
| | | |
Collapse
|
44
|
Foulsham T, Sanderson LA. Look who's talking? Sound changes gaze behaviour in a dynamic social scene. VISUAL COGNITION 2013. [DOI: 10.1080/13506285.2013.849785] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
45
|
Abstract
Primate evolution has been accompanied by complex reorganizations in brain anatomy and function. Little is known, however, about the relationship between anatomical and functional changes induced through primate evolution. Using functional magnetic resonance imaging, we assessed spatial and temporal correspondences of cortical networks in humans and monkeys. We provided evidence for topologically and functionally correspondent networks in sensory-motor and attention regions. More specifically, we revealed a possible monkey equivalent of the human ventral attention network. For other human networks, such as the language and the default-mode networks, we detected topological correspondent networks in the monkey, but with different functional signatures. Furthermore, we observed two lateralized human frontoparietal networks in the cortical regions displaying the greatest evolutionary expansion, having neither topological nor functional monkey correspondents. This finding may indicate that these two human networks are evolutionarily novel. Thus, our findings confirm the existence of networks where evolution has conserved both topology and function but also suggest that functions of structurally preserved networks can diverge over time and that novel, hence human-specific networks, have emerged during human evolution.
Collapse
|
46
|
Kano F, Tomonaga M. Head-mounted eye tracking of a chimpanzee under naturalistic conditions. PLoS One 2013; 8:e59785. [PMID: 23544099 PMCID: PMC3609798 DOI: 10.1371/journal.pone.0059785] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2012] [Accepted: 02/18/2013] [Indexed: 12/03/2022] Open
Abstract
This study offers a new method for examining the bodily, manual, and eye movements of a chimpanzee at the micro-level. A female chimpanzee wore a lightweight head-mounted eye tracker (60 Hz) while engaging in daily interactions with the human experimenter. The eye tracker recorded her eye movements accurately while the chimpanzee freely moved her head, hands, and body. Three video cameras recorded the bodily and manual movements of the chimpanzee from multiple angles. We examined how the chimpanzee viewed the experimenter in this interactive setting and how the eye movements were related to the ongoing interactive contexts and actions. We prepared two experimentally defined contexts in each session: a face-to-face greeting phase upon the appearance of the experimenter in the experimental room, and a subsequent face-to-face task phase that included manual gestures and fruit rewards. Overall, the general viewing pattern of the chimpanzee, measured in terms of duration of individual fixations, length of individual saccades, and total viewing duration of the experimenter's face/body, was very similar to that observed in previous eye-tracking studies that used non-interactive situations, despite the differences in the experimental settings. However, the chimpanzee viewed the experimenter and the scene objects differently depending on the ongoing context and actions. The chimpanzee viewed the experimenter's face and body during the greeting phase, but viewed the experimenter's face and hands as well as the fruit reward during the task phase. These differences can be explained by the differential bodily/manual actions produced by the chimpanzee and the experimenter during each experimental phase (i.e., greeting gestures, task cueing). Additionally, the chimpanzee's viewing pattern varied depending on the identity of the experimenter (i.e., the chimpanzee's prior experience with the experimenter). These methods and results offer new possibilities for examining the natural gaze behavior of chimpanzees.
Collapse
Affiliation(s)
- Fumihiro Kano
- Primate Research Institute, Kyoto University, Inuyama, Aichi, Japan.
| | | |
Collapse
|
47
|
Social interactions through the eyes of macaques and humans. PLoS One 2013; 8:e56437. [PMID: 23457569 PMCID: PMC3574082 DOI: 10.1371/journal.pone.0056437] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2012] [Accepted: 01/09/2013] [Indexed: 11/19/2022] Open
Abstract
Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g. on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share patterns of social attention similar to those of humans. Here we compared the gaze behaviour of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics of different phylogenetic distance from the observer; i.e. affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e. giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals varied systematically according to the viewed species and the nature of the interactions, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys' gaze behaviour was qualitatively similar to that of humans, especially when viewing negative interactions. Detailed analysis revealed that both species directed more gaze at the face than the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions. Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of social attention between rhesus macaques and humans.
Collapse
|
48
|
Bethell EJ, Holmes A, Maclarnon A, Semple S. Evidence that emotion mediates social attention in rhesus macaques. PLoS One 2012; 7:e44387. [PMID: 22952968 PMCID: PMC3431396 DOI: 10.1371/journal.pone.0044387] [Citation(s) in RCA: 83] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2012] [Accepted: 08/03/2012] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND Recent work on non-human primates indicates that the allocation of social attention is mediated by characteristics of the attending animal, such as social status and genotype, as well as by the value of the target to which attention is directed. Studies of humans indicate that an individual's emotion state also plays a crucial role in mediating their social attention; for example, individuals look for longer towards aggressive faces when they are feeling more anxious, and this bias leads to increased negative arousal and distraction from other ongoing tasks. To our knowledge, no studies have tested for an effect of emotion state on the allocation of social attention in any non-human species. METHODOLOGY We presented captive adult male rhesus macaques with pairs of adult male conspecific face images - one with an aggressive expression, one with a neutral expression - and recorded gaze towards these images. Each animal was tested twice, once during a putatively stressful condition (i.e. following a veterinary health check), and once during a neutral (or potentially positive) condition (i.e. a period of environmental enrichment). Initial analyses revealed that behavioural indicators of anxiety and stress were significantly higher after the health check than during enrichment, indicating that the former caused a negative shift in emotional state. PRINCIPAL FINDINGS The macaques showed initial vigilance for aggressive faces across both conditions, but subsequent responses differed between conditions. Following the health check, initial vigilance was followed by rapid and sustained avoidance of aggressive faces. By contrast, during the period of enrichment, the macaques showed sustained attention towards the same aggressive faces. CONCLUSIONS/SIGNIFICANCE These data provide, to our knowledge, the first evidence that shifts in emotion state mediate social attention towards and away from facial cues of emotion in a non-human animal. This work provides novel insights into the evolution of emotion-attention interactions in humans, and into mechanisms of social behaviour in non-human primates, and may have important implications for understanding animal psychological wellbeing.
Collapse
Affiliation(s)
- Emily J Bethell
- Centre for Research in Evolutionary and Environmental Anthropology, University of Roehampton, London, United Kingdom.
| | | | | | | |
Collapse
|
49
|
Mantini D, Corbetta M, Romani GL, Orban GA, Vanduffel W. Data-driven analysis of analogous brain networks in monkeys and humans during natural vision. Neuroimage 2012; 63:1107-18. [PMID: 22992489 DOI: 10.1016/j.neuroimage.2012.08.042] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2012] [Revised: 08/09/2012] [Accepted: 08/10/2012] [Indexed: 11/30/2022] Open
Abstract
Inferences about functional correspondences between functional networks of human and non-human primates largely rely on proximity and anatomical expansion models. However, it has been demonstrated that topologically correspondent areas in two species can have different functional properties, suggesting that anatomy-based approaches should be complemented with alternative methods for performing functional comparisons. We have recently shown that comparative analyses based on temporal correlations of sensory-driven fMRI responses can reveal functionally correspondent areas in monkeys and humans without relying on spatial assumptions. Inter-species activity correlation (ISAC) analyses require the definition of seed areas in one species to reveal functional correspondences across the cortex of the same and other species. Here we propose an extension of the ISAC method that does not rely on any seed definition, hence a method devoid of any spatial assumption. Specifically, we apply independent component analysis (ICA) separately to monkey and human data to define species-specific networks of areas with coherent stimulus-related activity. Then, we use a hierarchical cluster analysis to identify ICA-based ISAC clusters of monkey and human networks with similar timecourses. We implemented this approach on fMRI data collected in monkeys and humans during movie watching, a condition that evokes widespread sensory-driven activity throughout large portions of the cortex. Using ICA-based ISAC, we detected seven monkey-human clusters. The timecourses of several clusters showed significant correspondences either with the motion energy in the movie or with eye-movement parameters. Five of the clusters spanned putative homologous functional networks in either primary or extrastriate visual regions, whereas two clusters included higher-level visual areas at topological locations that are not predicted by cortical surface expansion models. Our ICA-based ISAC analysis complemented the findings of our previous seed-based investigations, and suggested that functional processes can be executed by brain networks in different species that are functionally but not necessarily anatomically correspondent. More broadly, our method provides a novel approach to reveal evolution-driven functional changes in the primate brain with no spatial assumptions.
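The ICA-based ISAC pipeline described in this abstract (per-species ICA to extract network timecourses under a shared stimulus, then hierarchical clustering of those timecourses by correlation) can be illustrated with a minimal sketch on synthetic data. All variable names, component counts, and thresholds below are illustrative assumptions, not the authors' actual parameters.

```python
# Minimal sketch of ICA-based inter-species activity correlation (ISAC),
# assuming two species viewed the same movie so their network timecourses
# share the same time axis. Data here are synthetic, not real fMRI.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_t = 200  # shared time points (same movie stimulus for both species)

# Synthetic "voxel x time" data for each species, driven by 3 shared sources
shared = rng.standard_normal((3, n_t))
monkey = rng.standard_normal((50, 3)) @ shared + 0.1 * rng.standard_normal((50, n_t))
human = rng.standard_normal((80, 3)) @ shared + 0.1 * rng.standard_normal((80, n_t))

def network_timecourses(data, n_components=3):
    # Per-species ICA: decompose voxel-by-time data into component
    # (network) timecourses, with no cross-species spatial mapping.
    ica = FastICA(n_components=n_components, random_state=0)
    return ica.fit_transform(data.T).T  # components x time

tc = np.vstack([network_timecourses(monkey), network_timecourses(human)])

# Hierarchical clustering on a correlation distance between all network
# timecourses; clusters mixing both species suggest functional correspondence.
dist = 1.0 - np.abs(np.corrcoef(tc))
condensed = dist[np.triu_indices_from(dist, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=0.5, criterion="distance")
```

Because clustering operates only on timecourse similarity, no seed areas and no anatomical registration between species are needed, which is the seed-free property the abstract emphasises.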
Collapse
Affiliation(s)
- Dante Mantini
- Laboratory of Neuro- and Psychophysiology, KU Leuven Medical School, Leuven, Belgium
| | | | | | | | | |
Collapse
|
50
|
Interspecies activity correlations reveal functional correspondence between monkey and human brain areas. Nat Methods 2012; 9:277-82. [PMID: 22306809 PMCID: PMC3438906 DOI: 10.1038/nmeth.1868] [Citation(s) in RCA: 76] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2011] [Accepted: 12/02/2011] [Indexed: 11/23/2022]
Abstract
Evolution-driven functional changes in the primate brain are typically assessed by aligning monkey and human activation maps using cortical surface expansion models. These models use putative homologous areas as registration landmarks, assuming they are functionally correspondent. In cases where functional changes have occurred in an area, this assumption makes it impossible to reveal whether other areas may have assumed the lost functions. Here we describe a method to examine functional correspondences across species. Without making spatial assumptions, we assess similarities in sensory-driven functional magnetic resonance imaging responses between monkey (Macaca mulatta) and human brain areas by means of temporal correlation. Using natural vision data, we reveal regions for which functional processing has shifted to topologically divergent locations during evolution. We conclude that substantial evolution-driven functional reorganizations have occurred, not always consistent with cortical expansion processes. This novel framework for evaluating changes in functional architecture is crucial to building more accurate evolutionary models.
Collapse
|