1
Dalmaso M, Galfano G, Baratella A, Castelli L. A direct comparison of gaze-mediated orienting elicited by schematic and real human faces. Acta Psychol (Amst) 2025; 255:104934. [PMID: 40147256] [DOI: 10.1016/j.actpsy.2025.104934]
Abstract
During social interactions, we tend to orient our visual attention towards the spatial location indicated by the gaze direction of others. However, modern societies are characterised by the increasing presence of facial stimuli of various natures, often schematic and pertaining to fictional entities, used in contexts such as advertisements or digital interfaces. In this study, we directly compared the impact of eye-gaze belonging to schematic and real faces on visual attention. These two types of stimuli were utilized in three experiments, where either manual (Experiment 1, N = 160; and Experiment 2, N = 160) or oculomotor (Experiment 3, N = 80) responses were recorded. In addition, schematic and real faces were presented either separately within two distinct blocks or intermixed within the same block of trials. The latter manipulation was intended to test whether differences between schematic and real faces would be stronger in contexts that maximise comparison between the two types of stimuli. In all experiments, a robust gaze-mediated orienting of attention effect emerged, and this was not significantly influenced by either the type of facial stimulus (i.e., schematic or real) or by the intermixed/blocked presentation. Overall, these results suggest that the human social attention system may treat both types of stimuli similarly. This finding suggests that schematic faces can be effectively used in various applied contexts, such as digital interfaces and advertising, without compromising gaze-mediated attentional orienting.
Affiliation(s)
- Mario Dalmaso
- Department of Developmental and Social Psychology, University of Padova, Italy
- Giovanni Galfano
- Department of Developmental and Social Psychology, University of Padova, Italy
- Luigi Castelli
- Department of Developmental and Social Psychology, University of Padova, Italy
2
McGregor C, von dem Hagen E, Wallbridge C, Dobbs J, Svenson-Tree C, Jones CRG. Supporting Autistic Children's Participation in Research Studies: A Mixed-Methods Study of Familiarizing Autistic Children with A Humanoid Robot. Autism Dev Lang Impair 2025; 10:23969415251332486. [PMID: 40296896] [PMCID: PMC12034964] [DOI: 10.1177/23969415251332486]
Abstract
It is important that autism research is inclusive and supports the participation of a wide range of autistic people. However, there has been limited research on how to make studies accessible for autistic participants. This mixed-methods study explored how to promote the comfort of autistic children in research, using the specific example of visiting a research lab and meeting a humanoid robot. In Phase 1, 14 parents of autistic children were interviewed about how their child could be made comfortable during a lab visit, including different approaches for familiarizing their child with the robot. In Phase 2, autistic children of the parents in Phase 1 (n = 10) visited the lab and completed familiarization activities with a humanoid robot. The opinions of the children and their parents about the children's experiences were recorded. Reflexive thematic analysis generated five overarching themes reflecting how best to support autistic child participants. These themes encompassed elements of particular relevance to robot studies but also many practices of general relevance to participating in research: (1) preparation is key, (2) consideration of environmental factors, (3) using familiarization, (4) a supportive and engaged researcher, and (5) individualized approaches. Based on our findings, we report preliminary and generalizable best-practice recommendations to support autistic children in a research setting and promote positive experiences.
Affiliation(s)
- Carly McGregor
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, UK
- Centre for Artificial Intelligence, Robotics and Human-Machine Systems (IROHMS), Cardiff University, Cardiff, UK
- Christopher Wallbridge
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, UK
- Centre for Artificial Intelligence, Robotics and Human-Machine Systems (IROHMS), Cardiff University, Cardiff, UK
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Jenna Dobbs
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, UK
- Caitlyn Svenson-Tree
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, UK
- Catherine RG Jones
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, UK
- Centre for Artificial Intelligence, Robotics and Human-Machine Systems (IROHMS), Cardiff University, Cardiff, UK
3
Hanifi S, Maiettini E, Lombardi M, Natale L. A pipeline for estimating human attention toward objects with on-board cameras on the iCub humanoid robot. Front Robot AI 2024; 11:1346714. [PMID: 39483489] [PMCID: PMC11524796] [DOI: 10.3389/frobt.2024.1346714]
Abstract
This research report introduces a learning system designed to detect the object that humans are gazing at, using solely visual feedback. By incorporating face detection, human attention prediction, and online object detection, the system enables the robot to perceive and interpret human gaze accurately, thereby facilitating the establishment of joint attention with human partners. Additionally, a novel dataset collected with the humanoid robot iCub is introduced, comprising more than 22,000 images from ten participants gazing at different annotated objects. This dataset serves as a benchmark for human gaze estimation in table-top human-robot interaction (HRI) contexts. In this work, we use it to assess the proposed pipeline's performance and examine each component's effectiveness. Furthermore, the developed system is deployed on the iCub and showcases its functionality. The results demonstrate the potential of the proposed approach as a first step to enhancing social awareness and responsiveness in social robotics. This advancement can enhance assistance and support in collaborative scenarios, promoting more efficient human-robot collaborations.
Affiliation(s)
- Shiva Hanifi
- Humanoid Sensing and Perception Group, Istituto Italiano di Tecnologia, Genoa, Italy
4
Sato W, Shimokawa K, Uono S, Minato T. Mentalistic attention orienting triggered by android eyes. Sci Rep 2024; 14:23143. [PMID: 39367157] [PMCID: PMC11452688] [DOI: 10.1038/s41598-024-75063-3]
Abstract
The eyes play a special role in human communications. Previous psychological studies have reported reflexive attention orienting in response to another individual's eyes during live interactions. Although robots are expected to collaborate with humans in various social situations, it remains unclear whether robot eyes have the potential to trigger attention orienting similarly to human eyes, specifically based on mental attribution. We investigated this issue in a series of experiments using a live gaze-cueing paradigm with an android. In Experiment 1, the non-predictive cue was the eyes and head of an android placed in front of human participants. Light-emitting diodes in the periphery served as target signals. The reaction times (RTs) required to localize validly cued targets were faster than those for invalidly cued targets for both types of cues. In Experiment 2, the gaze direction of the android eyes changed before the peripheral target lights appeared, with or without barriers that made the targets non-visible, such that the android could not attend to them. The RTs were faster for validly cued targets only when there were no barriers. In Experiment 3, the targets were changed from lights to sounds, which the android could attend to even in the presence of barriers. The RTs to the target sounds were faster with valid cues, irrespective of the presence of barriers. These results suggest that android eyes may automatically induce attention orienting in humans based on mental state attribution.
Affiliation(s)
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
- Koh Shimokawa
- Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
- Shota Uono
- Division of Disability Sciences, Institute of Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, 305-8572, Ibaraki, Japan
- Takashi Minato
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
5
Marchesi S, De Tommaso D, Kompatsiari K, Wu Y, Wykowska A. Tools and methods to study and replicate experiments addressing human social cognition in interactive scenarios. Behav Res Methods 2024; 56:7543-7560. [PMID: 38782872] [PMCID: PMC11362199] [DOI: 10.3758/s13428-024-02434-z]
Abstract
In the last decade, scientists investigating human social cognition have started bringing traditional laboratory paradigms more "into the wild" to examine how socio-cognitive mechanisms of the human brain work in real-life settings. As this implies transferring 2D observational paradigms to 3D interactive environments, there is a risk of compromising experimental control. In this context, we propose a methodological approach which uses humanoid robots as proxies of social interaction partners and embeds them in experimental protocols that adapt classical paradigms of cognitive psychology to interactive scenarios. This allows for a relatively high degree of "naturalness" of interaction and excellent experimental control at the same time. Here, we present two case studies where our methods and tools were applied and replicated across two different laboratories, namely the Italian Institute of Technology in Genova (Italy) and the Agency for Science, Technology and Research in Singapore. In the first case study, we present a replication of an interactive version of a gaze-cueing paradigm reported in Kompatsiari et al. (J Exp Psychol Gen 151(1):121-136, 2022). The second case study presents a replication of a "shared experience" paradigm reported in Marchesi et al. (Technol Mind Behav 3(3):11, 2022). As both studies replicate results across labs and different cultures, we argue that our methods allow for reliable and replicable setups, even though the protocols are complex and involve social interaction. We conclude that our approach can be of benefit to the research field of social cognition and grant higher replicability, for example, in cross-cultural comparisons of social cognition mechanisms.
Affiliation(s)
- Serena Marchesi
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Robotics and Autonomous Systems Department, A*STAR Institute for Infocomm Research, Singapore, Singapore
- Davide De Tommaso
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Kyveli Kompatsiari
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Yan Wu
- Robotics and Autonomous Systems Department, A*STAR Institute for Infocomm Research, Singapore, Singapore
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
6
Chung EYH, Sin KKF, Chow DHK. Qualitative outcomes and impact of a robotic intervention on children with autism spectrum disorder: A multiple embedded case study. Br J Occup Ther 2024; 87:574-582. [PMID: 40343357] [PMCID: PMC11887874] [DOI: 10.1177/03080226241252272]
Abstract
Most studies of social robot interventions for children with autism spectrum disorder have been laboratory experiments focusing on component skills. There is insufficient evidence documenting the qualitative impact of such programmes on social development and participation of children with autism spectrum disorder. This study aimed to identify the qualitative outcomes of a robot-mediated social skills training programme for children with autism spectrum disorder, examine the impact of such programmes on children's social participation and identify the essential elements of robotic interventions that are conducive to children's social development. A case study approach with a multiple case study design was adopted. Sixteen children with autism spectrum disorder, aged 5-11 years, were included. Participants received 12 weekly sessions of robot-mediated social skills training. The successful outcomes relating to social participation were identified as enhanced verbal expression, social awareness and emotional reciprocity. The impacts of the programme on personal development were identified as enhanced self-esteem, self-confidence and emotional expression. Robot friendship, the role of the robot as a facilitator and the presence of a human instructor capable of leading the programme were identified as essential elements of the positive changes. The encounter with a social robot was regarded as meaningful and important to the children with autism spectrum disorder.
Affiliation(s)
- Eva Yin-Han Chung
- Faculty of Medicine, Health and Life Science, Swansea University, Swansea, UK
- Department of Special Education and Counseling, The Education University of Hong Kong, Hong Kong
- Centre for Special Educational Needs and Inclusive Education, The Education University of Hong Kong, Hong Kong
- Kenneth Kuen-Fung Sin
- Department of Special Education and Counseling, The Education University of Hong Kong, Hong Kong
- Centre for Special Educational Needs and Inclusive Education, The Education University of Hong Kong, Hong Kong
- Daniel Hung-Kay Chow
- Department of Health and Physical Education, The Education University of Hong Kong, Hong Kong
7
Deroy O, Longin L, Bahrami B. Co-perceiving: Bringing the social into perception. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1681. [PMID: 38706396] [DOI: 10.1002/wcs.1681]
Abstract
Humans and other animals possess the remarkable ability to effectively navigate a shared perceptual environment by discerning which objects and spaces are perceived by others and which remain private to themselves. Traditionally, this capacity has been encapsulated under the umbrella of joint attention or joint action. In this comprehensive review, we advocate for a broader and more mechanistic understanding of this phenomenon, termed co-perception. Co-perception encompasses the sensitivity to the perceptual engagement of others and the capability to differentiate between objects perceived privately and those perceived commonly with others. It represents a distinct concept from mere simultaneous individual perception. Moreover, discerning between private and common objects does not necessitate intricate mind-reading abilities or mutual coordination. The act of perceiving objects as either private or common provides a comprehensive account of social scenarios where individuals simply share the same context or may even engage in competition. This conceptual framework encourages a re-examination of classical paradigms that demonstrate social influences on perception. Furthermore, it suggests that the impacts of shared experiences extend beyond affective responses, also influencing perceptual processes. This article is categorized under: Psychology > Attention; Philosophy > Foundations of Cognitive Science; Philosophy > Psychological Capacities.
Affiliation(s)
- Ophelia Deroy
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, Ludwig Maximilian University, Munich, Germany
- Munich Centre for Neurosciences-Brain & Mind, Munich, Germany
- Institute of Philosophy, School of Advanced Study, University of London, London, UK
- Louis Longin
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, Ludwig Maximilian University, Munich, Germany
- Bahador Bahrami
- Crowd Cognition Group, Faculty of General Psychology and Education, Ludwig Maximilian University, Munich, Germany
8
Ishikawa K, Oyama T, Tanaka Y, Okubo M. Perceiving social gaze produces the reversed congruency effect. Q J Exp Psychol (Hove) 2024:17470218241232981. [PMID: 38320865] [DOI: 10.1177/17470218241232981]
Abstract
Numerous studies have shown that the gaze of others produces a special attentional process, such as the eye contact effect or joint attention. This study investigated the attentional process triggered by various types of gaze stimuli (i.e., human, cat, fish, koala, and robot gaze). A total of 300 university students participated in five experiments. They performed a spatial Stroop task in which five types of gaze stimuli were presented as targets. Participants were asked to judge the direction of the target (left or right) irrespective of its location (left or right). The results showed that the social gaze targets (i.e., human and cat gaze) produced a reversed congruency effect. In contrast to the social gaze targets, the non-social gaze targets (i.e., fish and robot) did not produce the reversed congruency effect (Experiments 2, 2B, 3, and 4). These results suggest that attention to the gaze of socially communicable beings (i.e., humans and cats) is responsible for the reversed congruency effect. Our findings support the notion that the theory of mind or social interaction plays an important role in producing specific attentional processes in response to gaze stimuli.
Affiliation(s)
- Kenta Ishikawa
- Department of Psychology, Senshu University, Kawasaki, Japan
- Takato Oyama
- Graduate School of Humanities, Senshu University, Kawasaki, Japan
- Yoshihiko Tanaka
- Graduate School of Humanities, Senshu University, Kawasaki, Japan
- Matia Okubo
- Department of Psychology, Senshu University, Kawasaki, Japan
9
Marchesi S, Abubshait A, Kompatsiari K, Wu Y, Wykowska A. Cultural differences in joint attention and engagement in mutual gaze with a robot face. Sci Rep 2023; 13:11689. [PMID: 37468517] [DOI: 10.1038/s41598-023-38704-7]
Abstract
Joint attention is a pivotal mechanism underlying the human ability to interact with one another. The fundamental nature of joint attention in the context of social cognition has led researchers to develop tasks that address this mechanism and operationalize it in a laboratory setting, in the form of a gaze cueing paradigm. In the present study, we addressed the question of whether engaging in joint attention with a robot face is culture-specific. We adapted a classical gaze-cueing paradigm such that a robot avatar cued participants' gaze subsequent to either engaging participants in eye contact or not. Our critical question of interest was whether the gaze cueing effect (GCE) is stable across different cultures, especially if cognitive resources to exert top-down control are reduced. To achieve the latter, we introduced a mathematical stress task orthogonally to the gaze cueing protocol. Results showed a larger GCE in the Singapore sample, relative to the Italian sample, independent of gaze type (eye contact vs. no eye contact) or amount of experienced stress, which translates to available cognitive resources. Moreover, after each block participants rated how engaged they felt with the robot avatar during the task: Italian participants rated the avatar as more engaging during the eye contact blocks than during the no eye contact blocks, whereas Singaporean participants showed no difference in engagement as a function of gaze. We discuss the results in terms of cultural differences in robot-induced joint attention and engagement in eye contact, as well as the dissociation between implicit and explicit measures related to the processing of gaze.
Affiliation(s)
- Serena Marchesi
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Robotics and Autonomous Systems Department, A*STAR Institute for Infocomm Research, Singapore, Singapore
- Abdulaziz Abubshait
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Kyveli Kompatsiari
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Yan Wu
- Robotics and Autonomous Systems Department, A*STAR Institute for Infocomm Research, Singapore, Singapore
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
10
Morillo-Mendez L, Stower R, Sleat A, Schreiter T, Leite I, Mozos OM, Schrooten MGS. Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution. Front Psychol 2023; 14:1215771. [PMID: 37519379] [PMCID: PMC10374202] [DOI: 10.3389/fpsyg.2023.1215771]
Abstract
Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than to targets at the non-gazed screen (i.e., a gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.
Affiliation(s)
- Rebecca Stower
- Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden
- Alex Sleat
- Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden
- Tim Schreiter
- Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden
- Iolanda Leite
- Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden
11
Dalmaso M, Fedrigo G, Vicovaro M. Gazing left, gazing right: exploring a spatial bias in social attention. PeerJ 2023; 11:e15694. [PMID: 37456887] [PMCID: PMC10349552] [DOI: 10.7717/peerj.15694]
Abstract
Faces oriented rightwards are sometimes perceived as more dominant than faces oriented leftwards. In this study, we explored whether faces oriented rightwards can also elicit increased attentional orienting. Participants completed a discrimination task in which they were asked to discriminate, by means of a keypress, a peripheral target. At the same time, a task-irrelevant face oriented leftwards or rightwards appeared at the centre of the screen. The results showed that, while for faces oriented rightwards targets appearing on the right were responded to faster than targets appearing on the left, for faces oriented leftwards no differences emerged between left and right targets. Furthermore, we also found a negative correlation between the magnitude of the orienting response elicited by the faces oriented leftwards and the level of conservatism of the participants. Overall, these findings provide evidence for the existence of a spatial bias reflected in social orienting.
Affiliation(s)
- Mario Dalmaso
- Department of Developmental and Social Psychology, University of Padua, Padua, Italy
- Giacomo Fedrigo
- Department of Developmental and Social Psychology, University of Padua, Padua, Italy
- Michele Vicovaro
- Department of General Psychology, University of Padua, Padua, Italy
12
Gross S, Krenn B. A Communicative Perspective on Human–Robot Collaboration in Industry: Mapping Communicative Modes on Collaborative Scenarios. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00991-5]
13
Morillo-Mendez L, Schrooten MGS, Loutfi A, Mozos OM. Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. Int J Soc Robot 2022:1-13. [PMID: 36185773] [PMCID: PMC9510350] [DOI: 10.1007/s12369-022-00926-6]
Abstract
There is an increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions under the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with older age. Therefore, the referential gaze from robots might not be an effective social cue to indicate spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance, and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the idea that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. Additionally, this research suggests that robotic social cues, usually validated with young participants, might be less effective cues for older adults. Supplementary Information The online version contains supplementary material available at 10.1007/s12369-022-00926-6.
Affiliation(s)
- Lucas Morillo-Mendez
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
- Amy Loutfi
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
- Oscar Martinez Mozos
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
14
Onnasch L, Kostadinova E, Schweidler P. Humans Can't Resist Robot Eyes - Reflexive Cueing With Pseudo-Social Stimuli. Front Robot AI 2022; 9:848295. [PMID: 37274454] [PMCID: PMC10236938] [DOI: 10.3389/frobt.2022.848295]
Abstract
Joint attention is a key mechanism for humans to coordinate their social behavior. Whether and how this mechanism can benefit the interaction with pseudo-social partners such as robots is not well understood. To investigate the potential use of robot eyes as pseudo-social cues that ease attentional shifts, we conducted an online study using a modified spatial cueing paradigm. The cue was either a non-social (arrow), a pseudo-social (two versions of an abstract robot eye), or a social stimulus (photographed human eyes) that was presented either paired (e.g. two eyes) or single (e.g. one eye). The latter was varied to separate two assumed triggers of joint attention: the social nature of the stimulus, and the additional spatial information that is conveyed only by paired stimuli. Results support the assumption that pseudo-social stimuli, in our case abstract robot eyes, have the potential to facilitate human-robot interaction as they trigger reflexive cueing. To our surprise, actual social cues did not evoke reflexive shifts in attention. We suspect that the robot eyes elicited the desired effects because they were human-like enough while at the same time being much easier to perceive than human eyes, due to a design with strong contrasts and clean lines. Moreover, results indicate that for reflexive cueing it does not seem to make a difference whether the stimulus is presented single or paired. This might be a first indicator that joint attention depends more on the stimulus's social nature or familiarity than on its spatial expressiveness. Overall, the study suggests that using paired abstract robot eyes might be a good design practice for fostering a positive perception of a robot and for facilitating joint attention as a precursor for coordinated behavior.
Affiliation(s)
- Linda Onnasch
- Engineering Psychology, Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Eleonora Kostadinova
- Engineering Psychology, Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany

15
Chevalier P, Ghiglino D, Floris F, Priolo T, Wykowska A. Visual and Hearing Sensitivity Affect Robot-Based Training for Children Diagnosed With Autism Spectrum Disorder. Front Robot AI 2022; 8:748853. [PMID: 35096980 PMCID: PMC8790526 DOI: 10.3389/frobt.2021.748853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 12/09/2021] [Indexed: 11/20/2022] Open
Abstract
In this paper, we investigate the impact of sensory sensitivity during robot-assisted training for children diagnosed with Autism Spectrum Disorder (ASD). Adapting robot-based therapies to the user could help children focus on the training and thus improve the benefits of the interaction. Children diagnosed with ASD often experience sensory sensitivity and can show hyper- or hypo-reactivity to sensory events, such as reacting strongly or not at all to sounds, movements, or touch. Taking this into account during robot therapies may improve the overall interaction. In the present study, thirty-four children diagnosed with ASD underwent joint attention training with the robot Cozmo. The eight-session training was embedded in the standard therapy. The children were screened for sensory sensitivity with the Sensory Profile Checklist Revised. Their social skills were screened before and after the training with the Early Social Communication Scale. We recorded their performance and the amount of feedback they received from the therapist through animations of happy and sad emotions played on the robot. Our results showed that visual and hearing sensitivity influenced improvements in the skill of initiating joint attention. Also, the therapists of individuals with high sensitivity to hearing chose to play fewer robot animations during the training phase of the robot activity. The animations did not include sounds, but the robot produced motor noise. These results support the idea that the sensory sensitivity of children diagnosed with ASD should be screened before engaging them in robot-assisted therapy.
Affiliation(s)
- P. Chevalier
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT), Genoa, Italy
- D. Ghiglino
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT), Genoa, Italy
- DIBRIS, Università degli Studi di Genova, Genoa, Italy
- F. Floris
- Piccolo Cottolengo Genovese di Don Orione, Genoa, Italy
- T. Priolo
- Piccolo Cottolengo Genovese di Don Orione, Genoa, Italy
- A. Wykowska
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT), Genoa, Italy
- Correspondence: A. Wykowska

16
Perez-Osorio J, Abubshait A, Wykowska A. Irrelevant Robot Signals in a Categorization Task Induce Cognitive Conflict in Performance, Eye Trajectories, the N2 ERP-EEG Component, and Frontal Theta Oscillations. J Cogn Neurosci 2021; 34:108-126. [PMID: 34705044 DOI: 10.1162/jocn_a_01786] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Understanding others' nonverbal behavior is essential for social interaction, as it allows us, among other things, to infer mental states. Although gaze communication, a well-established nonverbal social behavior, is known to be important for inferring others' mental states, little is known about the effects of irrelevant gaze signals on cognitive conflict markers in collaborative settings. Here, participants completed a categorization task where they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot "moved" the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head cues would induce more errors (Study 1), be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). Across the three studies, incongruent trials yielded more interference than congruent trials, as measured in error rates (Study 1), larger curvature of eye-tracking trajectories (Study 2), and higher amplitudes of the N2 ERP component of the EEG signal as well as higher event-related spectral perturbation amplitudes (Study 3). Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
17
Acarturk C, Indurkya B, Nawrocki P, Sniezynski B, Jarosz M, Usal KA. Gaze aversion in conversational settings: An investigation based on mock job interview. J Eye Mov Res 2021; 14. [PMID: 34122746 PMCID: PMC8188832 DOI: 10.16910/jemr.14.1.1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction and outline some future research problems.
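The Discrete-Time Markov Chain modeling mentioned in this abstract amounts to estimating transition probabilities between successive gaze states. A minimal sketch; the state labels (F = gaze on face, A = gaze aversion) and the example sequence are hypothetical, not data from the study:

```python
# Hypothetical sketch: row-normalized transition matrix of a DTMC
# estimated from a coded gaze-state sequence.
from collections import defaultdict

def transition_matrix(sequence):
    """Maximum-likelihood transition probabilities from successive state pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {
        a: {b: n / sum(row.values()) for b, n in row.items()}
        for a, row in counts.items()
    }

# Illustrative sequence: F = gaze on face, A = gaze aversion
seq = ["F", "F", "A", "F", "F", "F", "A", "A", "F"]
P = transition_matrix(seq)
print(P["F"]["A"])  # 0.4
```

Comparing such matrices between interviewer and interviewee is one way the frequency and duration of gaze contacts can be contrasted.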
Affiliation(s)
- Cengiz Acarturk
- Department of Cognitive Science, Middle East Technical University, Turkey
- Bipin Indurkya
- Department of Cognitive Science, Jagiellonian University, Poland
- Piotr Nawrocki
- Institute of Computer Science, AGH University of Science and Technology, Poland
- Mateusz Jarosz
- Institute of Computer Science, AGH University of Science and Technology, Poland
- Kerem Alp Usal
- Department of Cognitive Science, Middle East Technical University, Turkey

18
Kompatsiari K, Bossi F, Wykowska A. Eye contact during joint attention with a humanoid robot modulates oscillatory brain activity. Soc Cogn Affect Neurosci 2021; 16:383-392. [PMID: 33416877 PMCID: PMC7990063 DOI: 10.1093/scan/nsab001] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Revised: 11/27/2020] [Accepted: 01/08/2021] [Indexed: 11/14/2022] Open
Abstract
Eye contact established by a human partner has been shown to affect various cognitive processes of the receiver. However, little is known about humans' responses to eye contact established by a humanoid robot. Here, we aimed to examine humans' oscillatory brain response to eye contact with a humanoid robot. Eye contact (or lack thereof) was embedded in a gaze-cueing task and preceded the phase of gaze-related attentional orienting. In addition to examining the effect of eye contact on the recipient, we also tested its impact on gaze-cueing effects (GCEs). Results showed that participants rated eye contact as more engaging and responded with stronger desynchronization of alpha-band activity in left fronto-central and central electrode clusters when the robot established eye contact with them, compared to the no-eye-contact condition. However, eye contact did not modulate GCEs. The results are interpreted in terms of the functional roles of central alpha rhythms (potentially also interpretable as mu rhythms), including joint attention and engagement in social interaction.
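The alpha-band desynchronization reported here is typically quantified as event-related desynchronization (ERD), the percentage change in band power relative to a pre-stimulus baseline. A minimal sketch; the power values are purely illustrative assumptions, not the study's data:

```python
# Hypothetical sketch of Pfurtscheller-style ERD%: percentage power change
# from a pre-stimulus baseline; negative values indicate desynchronization.
def erd_percent(baseline_power, event_power):
    """ERD% = (event - baseline) / baseline * 100."""
    return (event_power - baseline_power) / baseline_power * 100.0

# Illustrative alpha-band power (arbitrary units) for one electrode cluster
print(erd_percent(baseline_power=4.0, event_power=3.0))  # -25.0
```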
Affiliation(s)
- Kyveli Kompatsiari
- Italian Institute of Technology, Social Cognition in Human-Robot Interaction (S4HRI), Genova 16152, Italy
- Agnieszka Wykowska
- Italian Institute of Technology, Social Cognition in Human-Robot Interaction (S4HRI), Genova 16152, Italy

19
Henschel A, Laban G, Cross ES. What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You. CURRENT ROBOTICS REPORTS 2021; 2:9-19. [PMID: 34977592 PMCID: PMC7860159 DOI: 10.1007/s43154-020-00035-0] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 12/21/2020] [Indexed: 12/17/2022]
Abstract
Purpose of Review: We provide an outlook on the definitions, laboratory research, and applications of social robots, with the aim of understanding what makes a robot social, in the eyes of both science and the general public.
Recent Findings: Social robots demonstrate their potential when deployed within contexts appropriate to their form and functions. Examples include companions for the elderly and cognitively impaired individuals, robots within educational settings, and tools to support cognitive and behavioural change interventions.
Summary: Science fiction has inspired us to conceive of a future with autonomous robots helping with every aspect of our daily lives, although the robots we are familiar with through film and literature remain a vision of the distant future. While there are still miles to go before robots become a regular feature within our social spaces, rapid progress in social robotics research, aided by the social sciences, is helping to move us closer to this reality.
Affiliation(s)
- Anna Henschel
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Guy Laban
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Emily S Cross
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Department of Cognitive Science, Macquarie University, Sydney, Australia

20
Abstract
The field of social robotics has been growing dynamically, expanding into many areas of research and application in which robots can offer assistance and companionship to humans. This paper offers a different perspective on a role that social robots can also play: informing us about the flexibility of human mechanisms of social cognition. The paper focuses on studies in which robots have been used as a new type of "stimulus" in psychological experiments to examine whether interaction with a robot activates mechanisms of social cognition similar to those elicited in interaction with another human. Analysing studies in which a direct comparison has been made between a robot and a human agent, the paper examines whether, for robot agents, the brain re-uses the mechanisms that developed for interaction with other humans in terms of perception, action representation, attention and higher-order social cognition. Based on this analysis, the paper concludes that human socio-cognitive mechanisms in adult brains are sufficiently flexible to be re-used for robotic agents, at least for those that bear some resemblance to humans.
21