1
Brocard S, Voinov PV, Bickel B, Zuberbühler K. Spontaneous Encoding of Event Roles in Hominids. Open Mind (Camb) 2025;9:559-575. PMID: 40337356; PMCID: PMC12058332; DOI: 10.1162/opmi_a_00202.
Abstract
When observing social interactions, humans rapidly and spontaneously encode events in terms of agents, patients and causal relations. This propensity can be made visible empirically with the switch-cost paradigm, a reaction-time experiment and well-established tool of cognitive psychology. We adapted the paradigm for non-human primates to test whether non-linguistic animals encode event roles in the same way. Both human and non-human participants were asked to attend to different social interactions between two artificially coloured (blue or green) actors and to target the actor masked by a specified colour (e.g., blue), regardless of her role. We found that when we switched the targeted colour mask from agents to patients (or vice versa), processing time increased significantly in both hominid species (i.e., human and chimpanzee), suggesting that event roles were spontaneously encoded and subsequently interfered with our simple colour-search task. We conclude that the propensity to encode social events in terms of agents and patients is a common feature of hominid cognition, as demonstrated in several human participants and one chimpanzee, pointing towards an evolutionarily old and phylogenetically shared cognitive mechanism central to language processing.
Affiliation(s)
- Sarah Brocard
  - Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
- Pavel V. Voinov
  - Center for the Evolutionary Origins of Human Behavior (EHUB), Kyoto University, Kyoto, Japan
- Balthasar Bickel
  - Institute for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Klaus Zuberbühler
  - Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
  - School of Psychology & Neuroscience, University of St Andrews, St Andrews, Scotland (UK)
2
Brocard S, Wilson VAD, Berton C, Zuberbühler K. A universal preference for animate agents in hominids. iScience 2024;27:109996. PMID: 38883826; PMCID: PMC11177197; DOI: 10.1016/j.isci.2024.109996.
Abstract
When conversing, humans instantaneously predict meaning from fragmentary and ambiguous speech, long before utterance completion. They do this by integrating priors (initial assumptions about the world) with contextual evidence to rapidly decide on the most likely meaning. One powerful prior is an attentional preference for agents, which biases sentence processing, but does so universally only when agents are animate. Here, we investigate the evolutionary origins of this preference by allowing chimpanzees, gorillas, orangutans, human children, and adults to freely choose between agents and patients in still images, following video clips depicting their dyadic interaction. All participants preferred animate (and occasionally inanimate) agents, although the effect was attenuated if patients were also animate. The findings suggest that a preference for animate agents evolved before language and is not reducible to simple perceptual biases. In conclusion, both humans and great apes prefer animate agents in decision tasks, echoing a universal prior in human language processing.
Affiliation(s)
- Sarah Brocard
  - Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
- Vanessa A D Wilson
  - Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
  - Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Chloé Berton
  - Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
- Klaus Zuberbühler
  - Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
  - School of Psychology & Neuroscience, University of St Andrews, St Andrews, Scotland (UK)
- Balthasar Bickel
  - Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
3
Kamal M, Möbius M, Bartella AK, Lethaus B. Perception of aesthetic features after surgical treatment of craniofacial malformations by observers of the same age: An eye-tracking study. J Craniomaxillofac Surg 2023;51:708-715. PMID: 37813772; DOI: 10.1016/j.jcms.2023.09.009.
Abstract
The aim of this study was to evaluate exactly where children and adolescents look when viewing peers of their own age group, and to record and analyse these data with eye-tracking technology. MATERIALS AND METHODS: 60 subjects participated in the study, evenly divided into three age categories of 20 each: pre-school/primary school age (5-9 years), early adolescence (10-14 years) and late adolescence/transition to adulthood (15-19 years). Age groups were matched and categorized for use both in creating the picture series and in testing. Photographs of patients with unilateral and bilateral cleft lip and palate were used to create the image series, which consisted of 15 photos in total: 5 photos of patients with surgically treated cleft deformity and 10 control photos of healthy faces, presented in random order. Using the eye-tracking module, data on "area of first view" (area of initial attention), "area with longest view" (area of sustained attention), "time until view in this area" (time of initial attention) and "frequency of view in each area" (time of sustained attention) were calculated. RESULTS: Across all groups, there was no significant difference between individual regions for the parameters of initial attention (area of first view), whereas the time until first fixation of one of the AOIs (time until view in this area) differed significantly for all facial regions. The predictable path of the facial scan is abandoned when secondary facial deformities are present, and attention focuses more on the region of the existing deformity, namely the nose and mouth regions. CONCLUSIONS: There are significant differences in how both male and female participants view faces with and without secondary cleft deformity. While in the youngest age group it was the mouth region that received special attention from male viewers, in the middle age group this shifted to the nose region, which male viewers fixated significantly more often and faster. Female participants looked at the mouth and nose regions each for twice as long compared with healthy faces, placing both the mouth and the nose regions at the focus of observation.
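The four gaze measures named in this abstract can be reconstructed from a chronological fixation sequence. The sketch below is an illustrative reconstruction under assumed data structures, not the study's software: each fixation is a pair of AOI label and duration, and the measures fall out of a single pass.

```python
# Illustrative sketch (assumed, not the study's code) of the four AOI measures:
# area of first view, time until view in an area, total dwell ("longest view"),
# and frequency of views per area. Input: [(aoi_label, duration_ms), ...].

def aoi_measures(fixations):
    t, stats = 0, {}
    for aoi, dur in fixations:
        s = stats.setdefault(aoi, {"time_to_first": t, "dwell": 0, "count": 0})
        s["dwell"] += dur   # "area with longest view" = AOI with max total dwell
        s["count"] += 1     # "frequency of view in each area"
        t += dur            # running clock; first-seen onset = "time until view"
    first = min(stats, key=lambda a: stats[a]["time_to_first"])  # area of first view
    return first, stats

first, stats = aoi_measures([("eyes", 300), ("nose", 250), ("mouth", 400), ("nose", 350)])
print(first)          # → eyes
print(stats["nose"])  # → {'time_to_first': 300, 'dwell': 600, 'count': 2}
```

Real eye-tracker exports add fixation coordinates and require mapping each fixation to an AOI polygon first; the aggregation step, however, is essentially the one shown.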
Affiliation(s)
- Mohammad Kamal
  - Department of Surgical Sciences, College of Dentistry, Health Sciences Center, Kuwait University, Safat, Kuwait
- Marianne Möbius
  - Department of Oral and Maxillofacial Surgery, Leipzig University Hospital, Leipzig, Germany
- Alexander K Bartella
  - Department of Oral and Maxillofacial Surgery, Leipzig University Hospital, Leipzig, Germany
- Bernd Lethaus
  - Department of Oral and Maxillofacial Surgery, Leipzig University Hospital, Leipzig, Germany
4
Wilson VAD, Bethell EJ, Nawroth C. The use of gaze to study cognition: limitations, solutions, and applications to animal welfare. Front Psychol 2023;14:1147278. PMID: 37205074; PMCID: PMC10185774; DOI: 10.3389/fpsyg.2023.1147278.
Abstract
The study of gaze responses, typically using looking time paradigms, has become a popular approach to improving our understanding of cognitive processes in non-verbal individuals. Our interpretation of data derived from these paradigms, however, is constrained by how we conceptually and methodologically approach these problems. In this perspective paper, we outline the application of gaze studies in comparative cognitive and behavioral research and highlight current limitations in the interpretation of commonly used paradigms. Further, we propose potential solutions, including improvements to current experimental approaches, as well as broad-scale benefits of technology and collaboration. Finally, we outline the potential benefits of studying gaze responses from an animal welfare perspective. We advocate the implementation of these proposals across the field of animal behavior and cognition to aid experimental validity, and further advance our knowledge on a variety of cognitive processes and welfare outcomes.
Affiliation(s)
- Vanessa A. D. Wilson (correspondence)
  - Department of Comparative Cognition, Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
  - Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Emily J. Bethell
  - Research Centre in Evolutionary Anthropology and Palaeoecology, Liverpool John Moores University, Liverpool, United Kingdom
- Christian Nawroth
  - Institute of Behavioural Physiology, Research Institute for Farm Animal Biology (FBN), Dummerstorf, Germany
5
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023;55:364-416. PMID: 35384605; PMCID: PMC9535040; DOI: 10.3758/s13428-021-01762-8.
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
  - Department of Psychology, Nicolaus Copernicus University, Torun, Poland
  - Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
  - Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom
  - Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
  - Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
  - Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
  - Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
  - Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
  - Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
  - Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
  - Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
  - Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
  - School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
  - School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
  - Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
  - Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
  - Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
  - Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
  - Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
  - Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
  - University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
  - School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
  - Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
  - Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
  - TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
  - Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
  - Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
  - Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
  - Department of Management, Aarhus University, Aarhus, Denmark
  - Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
  - Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
  - Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
  - Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
  - The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
  - Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
  - Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
  - School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
  - Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
  - Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
  - Eyeviation Systems, Herzliya, Israel
  - Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
  - The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
  - Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
  - Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
6
Albuquerque N, Resende B. Dogs functionally respond to and use emotional information from human expressions. Evol Hum Sci 2022;5:e2. PMID: 37587944; PMCID: PMC10426098; DOI: 10.1017/ehs.2022.57.
Abstract
Emotions are critical for humans, not only feeling and expressing them, but also reading the emotional expressions of others. For a long time, this ability was thought to be exclusive to people; however, there is now evidence that other animals also rely on emotion perception to guide their behaviour and to adjust their actions in such a way as to guarantee success in their social groups. This is the case for domestic dogs, who have remarkably complex abilities to perceive the emotional expressions not only of their conspecifics but also of human beings. In this paper we discuss dogs' capacity to read human emotions. Beyond perception, though, are dogs able to use this emotional information in a functional way? Does reading emotional expressions allow them to live functional social lives? Dogs can respond functionally to emotional expressions and can use the emotional information they obtain from others during problem-solving; that is, acquiring information from faces and body postures allows them to make decisions. Here, we tackle questions related to responding to and using emotional information from human expressions in a functional way, and discuss how far dogs can go when reading our emotions.
Affiliation(s)
- Briseida Resende
  - Institute of Psychology, University of São Paulo, São Paulo, Brazil
7
Broda MD, de Haas B. Individual differences in looking at persons in scenes. J Vis 2022;22:9. PMID: 36342691; PMCID: PMC9652713; DOI: 10.1167/jov.22.12.9.
Abstract
Individuals freely viewing complex scenes vary in their fixation behavior. The most prominent and reliable dimension of such individual differences is the tendency to fixate faces. However, much less is known about how observers distribute fixations across other body parts of persons in scenes and how individuals may vary in this regard. Here, we aimed to close this gap. We expanded a popular annotated stimulus set (Xu, Jiang, Wang, Kankanhalli, & Zhao, 2014) with 6,365 hand-delineated pixel masks for the body parts of 1,136 persons embedded in 700 complex scenes, which we publish with this article (https://osf.io/ynujz/). This resource allowed us to analyze the person-directed fixations of 103 participants freely viewing these scenes. We found large and reliable individual differences in the distribution of fixations across person features. Individual fixation tendencies formed two anticorrelated clusters, one for the eyes, head, and the inner face and one for body features (torsi, arms, legs, and hands). Interestingly, the tendency to fixate mouths was independent of the face cluster. Finally, our results show that observers who tend to avoid person fixations in general, particularly do so for the face region. These findings underscore the role of individual differences in fixation behavior and reveal underlying dimensions. They are further in line with a recently proposed push-pull relationship between cortical tuning for faces and bodies. They may also aid the comparison of special populations to general variation.
Affiliation(s)
- Maximilian Davide Broda
  - Experimental Psychology, Justus Liebig University, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Benjamin de Haas
  - Experimental Psychology, Justus Liebig University, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
8
Wilson VAD, Zuberbühler K, Bickel B. The evolutionary origins of syntax: Event cognition in nonhuman primates. Sci Adv 2022;8:eabn8464. PMID: 35731868; PMCID: PMC9216513; DOI: 10.1126/sciadv.abn8464.
Abstract
Languages tend to encode events from the perspective of agents, placing them first and in simpler forms than patients. This agent bias is mirrored by cognition: Agents are more quickly recognized than patients and generally attract more attention. This leads to the hypothesis that key aspects of language structure are fundamentally rooted in a cognition that decomposes events into agents, actions, and patients, privileging agents. Although this type of event representation is almost certainly universal across languages, it remains unclear whether the underlying cognition is uniquely human or more widespread in animals. Here, we review a range of evidence from primates and other animals, which suggests that agent-based event decomposition is phylogenetically older than humans. We propose a research program to test this hypothesis in great apes and human infants, with the goal to resolve one of the major questions in the evolution of language, the origins of syntax.
Affiliation(s)
- Vanessa A. D. Wilson
  - Department of Comparative Cognition, Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
  - Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Klaus Zuberbühler
  - Department of Comparative Cognition, Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
  - School of Psychology and Neuroscience, University of St Andrews, St. Andrews, Scotland
- Balthasar Bickel
  - Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
9
Martinez-Cedillo AP, Dent K, Foulsham T. Do cognitive load and ADHD traits affect the tendency to prioritise social information in scenes? Q J Exp Psychol (Hove) 2022;75:1904-1918. PMID: 34844477; PMCID: PMC9424720; DOI: 10.1177/17470218211066475.
Abstract
We report two experiments investigating the effect of working memory (WM) load on selective attention. Experiment 1 was a modified version of Lavie et al. and confirmed that increasing memory load disrupted performance in the classic flanker task. Experiment 2 used the same manipulation of WM load to probe attention during the viewing of complex scenes while also investigating individual differences in attention deficit hyperactivity disorder (ADHD) traits. In the image-viewing task, we measured the degree to which fixations targeted each of two crucial objects: (1) a social object (a person in the scene) and (2) a non-social object of higher or lower physical salience. We compared the extent to which increasing WM load would change the pattern of viewing of the physically salient and socially salient objects. If attending to the social item requires greater default voluntary top-down resources, then the viewing of social objects should show stronger modulation by WM load compared with viewing of physically salient objects. The results showed that the social object was fixated to a greater degree than the other object (regardless of physical salience). Increased salience drew fixations away from the background, leading to slightly increased fixations on the non-social object, without changing fixations on the social object. Increased levels of ADHD-like traits were associated with fewer fixations on the social object, but only in the high-salient, low-load condition. Importantly, WM load did not affect the number of fixations on the social object. Such findings suggest, rather surprisingly, that attending to a social area in complex stimuli is not dependent on the availability of voluntary top-down resources.
Affiliation(s)
- Kevin Dent
  - Department of Psychology, University of Essex, Colchester, UK
- Tom Foulsham
  - Department of Psychology, University of Essex, Colchester, UK
10
Albuquerque N, Mills DS, Guo K, Wilkinson A, Resende B. Dogs can infer implicit information from human emotional expressions. Anim Cogn 2021;25:231-240. PMID: 34390430; PMCID: PMC8940826; DOI: 10.1007/s10071-021-01544-x.
Abstract
The ability to infer emotional states and their wider consequences requires the establishment of relationships between the emotional display and subsequent actions. These abilities, together with the use of emotional information from others in social decision making, are cognitively demanding and require inferential skills that extend beyond the immediate perception of the current behaviour of another individual. They may include predictions of the significance of the emotional states being expressed. These abilities were previously believed to be exclusive to primates. In this study, we presented adult domestic dogs with a social interaction between two unfamiliar people, which could be positive, negative or neutral. After passively witnessing the actors engaging silently with each other and with the environment, dogs were given the opportunity to approach a food resource that varied in accessibility. We found that the available emotional information was more relevant than the motivation of the actors (i.e. giving something or receiving something) in predicting the dogs' responses. Thus, dogs were able to access implicit information from the actors' emotional states and appropriately use the affective information to make context-dependent decisions. The findings demonstrate that a non-human animal can actively acquire information from emotional expressions, infer some form of emotional state and use this functionally to make decisions.
Affiliation(s)
- Natalia Albuquerque
  - Institute of Psychology, University of São Paulo, São Paulo, Brazil
  - School of Life Sciences, University of Lincoln, Lincoln, UK
- Daniel S Mills
  - School of Life Sciences, University of Lincoln, Lincoln, UK
- Kun Guo
  - School of Psychology, University of Lincoln, Lincoln, UK
- Anna Wilkinson
  - School of Life Sciences, University of Lincoln, Lincoln, UK
- Briseida Resende
  - Institute of Psychology, University of São Paulo, São Paulo, Brazil
11
Mowlavi Vardanjani M, Ghasemian S, Sheibani V, Mansouri FA. The effects of emotional stimuli and oxytocin on inhibition ability and response execution in macaque monkeys. Behav Brain Res 2021;413:113409. PMID: 34111470; DOI: 10.1016/j.bbr.2021.113409.
Abstract
Social and emotional content of environmental stimuli influences executive control of behavior. There has been great variability in the behavioral effects of emotional stimuli in humans. These variabilities might arise from other contextual factors, such as specific task demands or natural hormones, which potentially interact with emotional stimuli in modulating executive functions. This study examined the effects of social-emotional visual stimuli and a natural hormone (oxytocin) on the inhibition ability and response execution of macaque monkeys. In a crossover design, monkeys received inhaled oxytocin or its vehicle before performing a stop-signal task in which they had to respond rapidly to a visual go-cue in Go trials but inhibit the initiated response following the onset of a stop-cue in Stop trials. The social-emotional content (negative, positive or neutral) of the go-cue changed trial by trial. We found that monkeys' inhibition ability was significantly influenced by the social-emotional content of stimuli, manifesting as enhanced inhibition when monkeys were exposed to negative stimuli. However, response execution was not influenced by the emotional content of stimuli in the current or preceding trials. The same dose of oxytocin, which modulated working memory in monkeys, had no significant effect on inhibition ability, but significantly decreased monkeys' response time regardless of stimulus valence. Our findings indicate that emotional stimuli influence, valence-dependently, monkeys' inhibition ability but not their response execution, and suggest that oxytocin might attenuate the reorientation of cognitive resources to task-irrelevant emotional information.
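Inhibition ability in a stop-signal task is conventionally summarized as the stop-signal reaction time (SSRT). The sketch below illustrates the standard integration method for estimating SSRT; it is a generic textbook computation under assumed inputs, not this paper's analysis code.

```python
# Illustrative SSRT estimate via the integration method (assumed, generic):
# SSRT = the go-RT at the quantile equal to P(respond | stop signal),
# minus the mean stop-signal delay (SSD). Shorter SSRT = better inhibition.

def ssrt(go_rts, ssds, p_respond_given_stop):
    rts = sorted(go_rts)
    # go RT at the quantile matching the probability of failing to stop
    idx = min(int(p_respond_given_stop * len(rts)), len(rts) - 1)
    mean_ssd = sum(ssds) / len(ssds)
    return rts[idx] - mean_ssd

go_rts = [380, 400, 420, 440, 460, 480, 500, 520, 540, 560]
ssds = [150, 200, 250, 200]
print(ssrt(go_rts, ssds, 0.5))  # → 280.0
```

Comparing SSRT across the negative, positive and neutral go-cue conditions is one standard way an effect like the abstract's "enhanced inhibition ability" for negative stimuli would be quantified.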
Collapse
Affiliation(s)
- Marzieh Mowlavi Vardanjani
- Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Sadegh Ghasemian
- Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Vahid Sheibani
- Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Farshad A Mansouri
- Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Australia
12
Albuquerque N, Savalli C, Cabral F, Resende B. Do Emotional Cues Influence the Performance of Domestic Dogs in an Observational Learning Task? Front Psychol 2021; 12:615074. [PMID: 34093306] [PMCID: PMC8172801] [DOI: 10.3389/fpsyg.2021.615074]
Abstract
Using social information is not indiscriminate, and being able to choose what to copy and from whom to copy is critical. Dogs are able to learn socially, to recognize and respond to dog as well as human emotional expressions, and to make reputation-like inferences based on how people behave towards their owner. Yet, the mechanisms dogs use for obtaining and utilizing social information are still to be fully understood, especially concerning whether emotional cues influence dogs' social learning. Therefore, our main aim was to test the hypothesis that an emotionally charged (negative, positive, or neutral) interaction with the demonstrator of a “V” detour task prior to testing would affect subjects' performance, by: (i) changing the value of the information provided by the demonstrator or (ii) changing the valence of the learning environment. Our experimental design consisted of three phases: pre-test (subjects were allowed to solve the task alone); emotional display (dogs watched the unfamiliar human behaving in either a positive, negative or neutral way towards their owner); test (the demonstrator showed the task and subjects were allowed to move freely). Only dogs that failed the pre-test were considered for analysis (n = 46). We analyzed four dependent variables: success, time to solve the task, latency to reach the fence, and matching the side of demonstration. For each, we used four models (GEEs and GLMMs) to investigate the effects of (1) demographic factors; (2) experimental design factors (including emotional group); (3) behavior of the dog; and (4) side chosen and matching. All models took into account all trials (random effect included) as well as the first trials only. Our findings corroborate previous studies of social learning but present no evidence to support our hypothesis. We discuss the possibility that our stimuli were not salient enough in a task that involves highly motivating food and relies on a long and highly distracting interval between phases. Nevertheless, these results represent an important contribution to the study of dog behavior and social cognition and pave the way for further investigations.
Affiliation(s)
- Natalia Albuquerque
- Department of Experimental Psychology, Institute of Psychology, University of São Paulo, São Paulo, Brazil
- Carine Savalli
- Department of Experimental Psychology, Institute of Psychology, University of São Paulo, São Paulo, Brazil
- Department of Public Policies and Collective Health, Federal University of São Paulo, São Paulo, Brazil
- Francisco Cabral
- Department of Experimental Psychology, Institute of Psychology, University of São Paulo, São Paulo, Brazil
- Briseida Resende
- Department of Experimental Psychology, Institute of Psychology, University of São Paulo, São Paulo, Brazil
13
Adams GK, Ong WS, Pearson JM, Watson KK, Platt ML. Neurons in primate prefrontal cortex signal valuable social information during natural viewing. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190666. [PMID: 33423624] [PMCID: PMC7815429] [DOI: 10.1098/rstb.2019.0666]
Abstract
Information about social partners is innately valuable to primates. Decisions about which sources of information to consume are highly naturalistic but also complex and place unusually strong demands on the brain's decision network. In particular, both the orbitofrontal cortex (OFC) and lateral prefrontal cortex (LPFC) play key roles in decision making and social behaviour, suggesting a likely role in social information-seeking as well. To test this idea, we developed a 'channel surfing' task in which monkeys were shown a series of 5 s video clips of conspecifics engaged in natural behaviours at a field site. Videos were annotated frame-by-frame using an ethogram of species-typical behaviours, an important source of social information. Between each clip, monkeys were presented with a choice between targets that determined which clip would be seen next. Monkeys' gaze during playback indicated differential engagement depending on what behaviours were presented. Neurons in both OFC and LPFC responded to choice targets and to video, and discriminated a subset of the behaviours in the ethogram during video viewing. These findings suggest that both OFC and LPFC are engaged in processing social information that is used to guide dynamic information-seeking decisions. This article is part of the theme issue 'Existence and prevalence of economic behaviours among non-human primates'.
Affiliation(s)
- Geoffrey K Adams
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Wei Song Ong
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- John M Pearson
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
- Karli K Watson
- Institute of Cognitive Science, University of Colorado at Boulder, Boulder, CO, USA
- Michael L Platt
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychology, School of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA
- Marketing Department, Wharton School of Business, University of Pennsylvania, Philadelphia, PA, USA
14
Törnqvist H, Somppi S, Kujala MV, Vainio O. Observing animals and humans: dogs target their gaze to the biological information in natural scenes. PeerJ 2020; 8:e10341. [PMID: 33362955] [PMCID: PMC7749655] [DOI: 10.7717/peerj.10341]
Abstract
BACKGROUND This study examines how dogs observe images of natural scenes containing living creatures (wild animals, dogs and humans), recorded with eye gaze tracking. Because dogs have had limited exposure to wild animals in their lives, we also consider the natural novelty of the wild animal images for the dogs. METHODS The eye gaze of dogs was recorded while they viewed natural images containing dogs, humans, and wild animals. Three categories of images were used: naturalistic landscape images containing single humans or animals, full body images containing a single human or animal, and full body images containing a pair of humans or animals. The gazing behavior of two dog populations, family and kennel dogs, was compared. RESULTS As a main effect, dogs gazed at living creatures (object areas) longer than at the background areas of the images; at heads longer than bodies; at heads longer than background areas; and at bodies longer than background areas. Dogs gazed less at the object areas vs. the background in landscape images than in the other image categories. Both dog groups also gazed at wild animal heads longer than at human or dog heads in the images. When viewing single animal and human images, family dogs focused their gaze very prominently on the head areas, but in images containing a pair of animals or humans, they gazed more at the body than the head areas. In kennel dogs, the difference in gazing times between the head and body areas within single or paired images failed to reach significance. DISCUSSION Dogs focused their gaze on living creatures in all image categories, also detecting them in the natural landscape images. Generally, they also gazed at the biologically informative areas of the images, such as the head, which supports the importance of the head/face area for dogs in obtaining social information. The natural novelty of the species represented in the images, as well as the image category, affected the gazing behavior of dogs. Furthermore, differences in gazing strategy between family and kennel dogs were observed, suggesting an influence of different social living environments and life experiences.
Affiliation(s)
- Heini Törnqvist
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Sanni Somppi
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Miiamaaria V Kujala
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Department of Psychology, Faculty of Education and Psychology, University of Jyväskylä, Jyväskylä, Finland
- Outi Vainio
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
16
Visual exploration of emotional body language: a behavioural and eye-tracking study. Psychological Research 2020; 85:2326-2339. [PMID: 32920675] [DOI: 10.1007/s00426-020-01416-y]
Abstract
Bodily postures are essential to correctly comprehend others' emotions and intentions. Nonetheless, very few studies have focused on the pattern of eye movements implicated in the recognition of emotional body language (EBL), demonstrating significant differences in relation to different emotions. A yet unanswered question regards the presence of the "left-gaze bias" (i.e., the tendency to look first, to make more fixations, and to spend more looking time on the left side of centrally presented stimuli) while scanning bodies. Hence, the present study aims to explore both the presence of a left-gaze bias and the modulation of EBL visual exploration mechanisms, by investigating the fixation patterns (number of fixations and latency of the first fixation) of participants while judging the emotional intensity of static bodily postures (Angry, Happy and Neutral, without head). While the results on the latency of first fixations demonstrate for the first time the presence of the left-gaze bias while scanning bodies, suggesting that it could be related to the stronger expressiveness of the left hand (from the observer's point of view), the results on the number of fixations only partially support our hypothesis. Moreover, an opposite viewing pattern between Angry and Happy bodily postures is shown. In sum, the present results, by integrating the spatial and temporal dimensions of gaze exploration patterns, shed new light on EBL visual exploration mechanisms.
17
Zarei SA, Sheibani V, Mansouri FA. Interaction of music and emotional stimuli in modulating working memory in macaque monkeys. Am J Primatol 2019; 81:e22999. [DOI: 10.1002/ajp.22999]
Affiliation(s)
- Shahab A. Zarei
- Cognitive Neuroscience Laboratory, Kerman Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Vahid Sheibani
- Cognitive Neuroscience Laboratory, Kerman Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Laboratory, Cognitive Neuroscience Research Centre, Kerman University of Medical Sciences, Kerman, Iran
- Farshad A. Mansouri
- Cognitive Neuroscience Laboratory, Cognitive Neuroscience Research Centre, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Laboratory, ARC Center of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
18
Guo K, Li Z, Yan Y, Li W. Viewing heterospecific facial expressions: an eye-tracking study of human and monkey viewers. Exp Brain Res 2019; 237:2045-2059. [PMID: 31165915] [PMCID: PMC6647127] [DOI: 10.1007/s00221-019-05574-3]
Abstract
Common facial expressions of emotion have distinctive patterns of facial muscle movements that are culturally similar among humans, and perceiving these expressions is associated with stereotypical gaze allocation at local facial regions that are characteristic for each expression, such as the eyes in angry faces. It is, however, unclear to what extent this 'universality' view can be extended to the processing of heterospecific facial expressions, and how the 'social learning' process contributes to heterospecific expression perception. In this eye-tracking study, we examined the face-viewing gaze allocation of human (including dog owners and non-dog owners) and monkey observers while they explored expressive human, chimpanzee, monkey and dog faces (positive, neutral and negative expressions in human and dog faces; neutral and negative expressions in chimpanzee and monkey faces). Human observers showed species- and experience-dependent expression categorization accuracy. Furthermore, both human and monkey observers demonstrated different face-viewing gaze distributions that were also species dependent. Specifically, humans predominantly attended to the eyes of human faces but the mouth of animal faces when judging facial expressions. Monkeys' gaze distributions when exploring human and monkey faces were qualitatively different from those when exploring chimpanzee and dog faces. Interestingly, the gaze behaviour of both human and monkey observers was further affected by their prior experience of the viewed species. It seems that facial expression processing is species dependent, and social learning may play a significant role in discriminating even rudimentary types of heterospecific expressions.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, LN6 7TS, UK
- Zhihan Li
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
- Yin Yan
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
- Wu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
19
Zarei SA, Sheibani V, Tomaz C, Mansouri FA. The effects of oxytocin on primates’ working memory depend on the emotional valence of contextual factors. Behav Brain Res 2019; 362:82-89. [DOI: 10.1016/j.bbr.2018.12.050]
20
Quadflieg S, Westmoreland K. Making Sense of Other People’s Encounters: Towards an Integrative Model of Relational Impression Formation. J Nonverbal Behav 2019. [DOI: 10.1007/s10919-019-00295-1]
21
Human facial expression affects a dog's response to conflicting directional gestural cues. Behav Processes 2019; 159:80-85. [PMID: 30610904] [DOI: 10.1016/j.beproc.2018.12.022]
Abstract
There is growing scientific interest in both the ability of dogs to evaluate emotional cues and their response to social cueing; we therefore examined the interaction between these by investigating whether human facial expressions impact dogs' approach preference given conflicting directional gestural signals. During testing, a human demonstrator simultaneously pointed in one direction and faced (looked) towards the other, with each cue directed at one of two food bowls placed on opposite sides of the demonstrator, while displaying either a happy, angry or neutral facial expression. Thirty-six pet dogs were assessed for their approach preference and approach time, alongside aspects of their temperament (positive/negative activation and impulsivity, via validated questionnaires). Dogs significantly preferred to follow the 'point' over the 'face' cue when the demonstrator displayed angry or neutral expressions, but showed no significant preference when the demonstrator displayed a happy expression. Additionally, dogs scoring higher for impulsivity approached the chosen cue significantly faster than those with lower scores. The findings show that dogs integrate information from human emotional facial expressions with directional gestural information in their response choices within dog-human interactions.
22
Wilming N, Kietzmann TC, Jutras M, Xue C, Treue S, Buffalo EA, König P. Differential Contribution of Low- and High-level Image Content to Eye Movements in Monkeys and Humans. Cereb Cortex 2017; 27:279-293. [PMID: 28077512] [PMCID: PMC5942390] [DOI: 10.1093/cercor/bhw399]
Abstract
Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans.
Affiliation(s)
- Niklas Wilming
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
- Yerkes National Primate Research Center, Atlanta, GA 30329, USA
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Washington National Primate Research Center, Seattle, WA 98195, USA
- Tim C Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Medical Research Council, Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
- Megan Jutras
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
- Yerkes National Primate Research Center, Atlanta, GA 30329, USA
- Washington National Primate Research Center, Seattle, WA 98195, USA
- Cheng Xue
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, Goettingen University, Goettingen, Germany
- Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
- Elizabeth A Buffalo
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
- Yerkes National Primate Research Center, Atlanta, GA 30329, USA
- Washington National Primate Research Center, Seattle, WA 98195, USA
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
23
Törnqvist H, Somppi S, Koskela A, Krause CM, Vainio O, Kujala MV. Comparison of dogs and humans in visual scanning of social interaction. R Soc Open Sci 2015; 2:150341. [PMID: 26473057] [PMCID: PMC4593691] [DOI: 10.1098/rsos.150341]
Abstract
Previous studies have demonstrated similarities in gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away: the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations with different social experiences: family and kennel dogs; dog experts and non-experts. Dogs' gazing behaviour was similar to humans: both species gazed longer at the actors in social interaction than in non-social images. However, humans gazed longer at the actors in dog than human social interaction images, whereas dogs gazed longer at the actors in human than dog social interaction images. Both species also made more saccades between actors in images representing non-conspecifics, which could indicate that processing social interaction of non-conspecifics may be more demanding. Dog experts and non-experts viewed the images very similarly. Kennel dogs viewed images less than family dogs, but otherwise their gazing behaviour did not differ, indicating that the basic processing of social stimuli remains similar regardless of social experiences.
Affiliation(s)
- Heini Törnqvist
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, Helsinki, Finland
- Cognitive Science, Faculty of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Sanni Somppi
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, Helsinki, Finland
- Aija Koskela
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, Helsinki, Finland
- Christina M. Krause
- Cognitive Science, Faculty of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Outi Vainio
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, Helsinki, Finland
- Miiamaaria V. Kujala
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, Helsinki, Finland
- Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
24
Vögeli S, Wolf M, Wechsler B, Gygax L. Housing conditions influence cortical and behavioural reactions of sheep in response to videos showing social interactions of different valence. Behav Brain Res 2015; 284:69-76. [DOI: 10.1016/j.bbr.2015.02.007]
25
Guo K, Shaw H. Face in profile view reduces perceived facial expression intensity: an eye-tracking study. Acta Psychol (Amst) 2015; 155:19-28. [PMID: 25531122] [DOI: 10.1016/j.actpsy.2014.12.001]
Abstract
Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had an expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that viewpoint-invariant facial expression processing reflects categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.
26
Solyst JA, Buffalo EA. Social relevance drives viewing behavior independent of low-level salience in rhesus macaques. Front Neurosci 2014; 8:354. [PMID: 25414633] [PMCID: PMC4220660] [DOI: 10.3389/fnins.2014.00354]
Abstract
Quantifying attention to social stimuli during the viewing of complex social scenes with eye tracking has proven to be a sensitive method in the diagnosis of autism spectrum disorders years before average clinical diagnosis. Rhesus macaques provide an ideal model for understanding the mechanisms underlying social viewing behavior, but to date no comparable behavioral task has been developed for use in monkeys. Using a novel scene-viewing task, we monitored the gaze of three rhesus macaques while they freely viewed well-controlled composed social scenes and analyzed the time spent viewing objects and monkeys. In each of six behavioral sessions, monkeys viewed a set of 90 images (540 unique scenes) with each image presented twice. In two-thirds of the repeated scenes, either a monkey or an object was replaced with a novel item (manipulated scenes). When viewing a repeated scene, monkeys made longer fixations and shorter saccades, shifting from a rapid orienting to global scene contents to a more local analysis of fewer items. In addition to this repetition effect, in manipulated scenes, monkeys demonstrated robust memory by spending more time viewing the replaced items. By analyzing attention to specific scene content, we found that monkeys strongly preferred to view conspecifics and that this was not related to their salience in terms of low-level image features. A model-free analysis of viewing statistics found that monkeys that were viewed earlier and longer had direct gaze and redder sex skin around their face and rump, two important visual social cues. These data provide a quantification of viewing strategy, memory and social preferences in rhesus macaques viewing complex social scenes, and they provide an important baseline against which to compare the effects of therapeutics aimed at enhancing social cognition.
Collapse
Affiliation(s)
- James A Solyst
- Neuroscience Graduate Program, Emory University, Atlanta, GA, USA; Physiology and Biophysics, University of Washington, Seattle, WA, USA; Yerkes National Primate Research Center, Atlanta, GA, USA; Washington National Primate Research Center, University of Washington, Seattle, WA, USA
| | - Elizabeth A Buffalo
- Physiology and Biophysics, University of Washington, Seattle, WA, USA; Washington National Primate Research Center, University of Washington, Seattle, WA, USA; Center for Translational Social Neuroscience, Atlanta, GA, USA
| |
Collapse
|
27
|
Seeing two faces together: preference formation in humans and rhesus macaques. Anim Cogn 2014; 17:1107-19. [PMID: 24638876 DOI: 10.1007/s10071-014-0742-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2013] [Revised: 02/13/2014] [Accepted: 03/03/2014] [Indexed: 10/25/2022]
Abstract
Humans, great apes and Old World monkeys show selective attention to faces depending on conspecificity, familiarity, and social status, supporting the view that primates share similar face processing mechanisms. Although many studies have examined face-scanning strategies in monkeys and humans, the mechanisms influencing viewing preference have received little attention. To determine how face categories influence viewing preference in humans and rhesus macaques (Macaca mulatta), we performed two eye-tracking experiments using a visual preference task in which pairs of faces from different species were presented simultaneously. The results indicated that viewing time was significantly influenced by the pairing of the face categories. Humans showed a strong bias towards an own-race face in an Asian-Caucasian condition. Rhesus macaques directed more attention towards non-human primate faces when they were paired with human faces, regardless of the species. When rhesus faces were paired with faces from Barbary macaques (Macaca sylvanus) or chimpanzees (Pan troglodytes), the novel species' faces attracted more attention. These results indicate that monkeys' viewing preferences, as assessed by a visual preference task, are modulated by several factors, species and dominance being the most influential.
Collapse
|