1
Kim B, Kim S, Park J, Park D. Regretful Bites: Exploring the Influence of Anthropomorphized Food on Children's Food Choices and Consumption. Appetite 2024:107690. [PMID: 39317272] [DOI: 10.1016/j.appet.2024.107690]
Abstract
Anthropomorphizing food is a prevalent marketing technique, particularly for children; however, its impact on their choices and consumption remains largely unexplored. We conducted two experiments to investigate how anthropomorphism affects food choices and consumption in four- and five-year-old children. In Study 1 (within-subjects design, N = 72), children were shown both anthropomorphized and non-anthropomorphized cookies and given a plastic coin. They were asked to choose the cookie they would like to exchange the coin for. The results indicated that a greater proportion of children selected the anthropomorphized cookie. In Study 2 (between-subjects design, N = 144), children were given either an anthropomorphized or a non-anthropomorphized cookie and allowed to eat as much as they wished. Those who received the anthropomorphized cookie consumed less and reported more feelings of regret compared to those who were given a non-anthropomorphized cookie. Together, these findings suggest that while anthropomorphic features might increase food choice, they paradoxically decrease actual consumption while increasing feelings of regret.
Affiliation(s)
- Boyoon Kim
- Sungkyunkwan University, Department of Psychology, 25-2 Sungkyunkwan-ro, Jongno-gu, Seoul, 03063, South Korea.
- Sara Kim
- The University of Hong Kong, Faculty of Business and Economics, Pokfulam Road, Hong Kong.
- Jiniee Park
- Gangneung-Wonju National University, Department of Early Childhood Education, 150, Namwon-ro, Heungeop-myeon, Wonju-si, Gangwon-do, 26043, South Korea.
- Daeun Park
- Sungkyunkwan University, Department of Psychology, 25-2 Sungkyunkwan-ro, Jongno-gu, Seoul, 03063, South Korea.
2
Flanagan T, Georgiou NC, Scassellati B, Kushnir T. School-age children are more skeptical of inaccurate robots than adults. Cognition 2024; 249:105814. [PMID: 38763071] [DOI: 10.1016/j.cognition.2024.105814]
Abstract
We expect children to learn new words, skills, and ideas from various technologies. When learning from humans, children prefer people who are reliable and trustworthy, yet children also forgive people's occasional mistakes. Are the dynamics of children learning from technologies, which can also be unreliable, similar to learning from humans? We tackle this question by focusing on early childhood, an age at which children are expected to master foundational academic skills. In this project, 168 4-7-year-old children (Study 1) and 168 adults (Study 2) played a word-guessing game with either a human or a robot partner. The partner first gave a sequence of correct answers but then followed with a sequence of wrong answers, each accompanied by a reaction. Reactions varied by condition, expressing either an accident, an accident marked with an apology, or an unhelpful intention. We found that older children were less trusting than both younger children and adults and were even more skeptical after errors. Trust decreased most rapidly when errors were intentional, but only children (and especially older children) outright rejected help from intentionally unhelpful partners. As an exception to this general trend, older children maintained their trust for longer when a robot (but not a human) apologized for its mistake. Our work suggests that educational technology design cannot be one-size-fits-all but rather must account for developmental changes in children's learning goals.
Affiliation(s)
- Teresa Flanagan
- Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27701, United States of America.
- Nicholas C Georgiou
- Department of Computer Science, Yale University, 51 Prospect Street, New Haven, CT 06511, United States of America.
- Brian Scassellati
- Department of Computer Science, Yale University, 51 Prospect Street, New Haven, CT 06511, United States of America.
- Tamar Kushnir
- Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27701, United States of America.
3
Taniguchi K, Okanda M. Children's animistic beliefs toward a humanoid robot and other objects. J Exp Child Psychol 2024; 244:105945. [PMID: 38729060] [DOI: 10.1016/j.jecp.2024.105945]
Abstract
This study examined children's beliefs about a humanoid robot through their behavioral and verbal responses. We investigated whether 3- and 5-year-old children would treat the humanoid robot, along with other objects and tools with and without a face, gently, and whether they would attribute moral, perceptual, and psychological properties to these targets. Although 3-year-olds did not treat the objects gently or rudely, they were likely to affirm that hitting the targets was acceptable even though they attributed psychological and perceptual properties to them. Thus, 3-year-olds' perception of the targets was incongruent with their behavior toward them. Most 5-year-olds treated the robot gently and were likely to affirm its psychological characteristics. Behaviors toward and perceptions of the robot thus differed between 3- and 5-year-olds. Children may start believing that robots are not alive at age five, and they can distinguish robots from other objects even when the latter have faces. Developmental changes in children's animistic beliefs are also discussed.
Affiliation(s)
- Mako Okanda
- Otemon Gakuin University, Osaka 567-8502, Japan.
4
He Y, Gu R, Deng G, Lin Y, Gan T, Cui F, Liu C, Luo YJ. Psychological and Brain Responses to Artificial Intelligence's Violation of Community Ethics. Cyberpsychol Behav Soc Netw 2024; 27:562-570. [PMID: 38757680] [DOI: 10.1089/cyber.2023.0524]
Abstract
Human moral reactions to artificial intelligence (AI) agents' behavior constitute an important aspect of modern-day human-AI relationships. Whereas previous studies have mainly focused on autonomy ethics, this study investigates how individuals judge AI agents' violations of community ethics (including betrayals and subversions) compared with human violations. Participants' behavioral responses, event-related potentials (ERPs), and individual differences were assessed. Behavioral findings reveal that participants rated AI agents' community-violating actions as less morally negative than human transgressions, possibly because AI agents are commonly perceived as having less agency than human adults. The ERP N1 component showed the same pattern as the moral rating scores, indicating that human-AI differences modulate initial moral intuitions. Moreover, the level of social withdrawal correlated with a smaller N1 in the human condition but not in the AI condition. The N2 and P2 components were sensitive to the difference between the loyalty/betrayal and authority/subversion domains but not to human/AI differences. Individual levels of moral sense and autistic traits also influenced behavioral data, especially in the loyalty/betrayal domain. In our opinion, these findings offer insights for predicting moral responses to AI agents and for guiding ethical AI development aligned with human moral values.
Affiliation(s)
- Yue He
- School of Psychology, Shenzhen University, Shenzhen, People's Republic of China.
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, People's Republic of China.
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, People's Republic of China.
- Ruolei Gu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, People's Republic of China.
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, People's Republic of China.
- Guangzhi Deng
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (BNU), Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China.
- Yongling Lin
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, People's Republic of China.
- Tian Gan
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, People's Republic of China.
- Research Institute on Aging, School of Science, Zhejiang Sci-Tech University, Hangzhou, People's Republic of China.
- Fang Cui
- School of Psychology, Shenzhen University, Shenzhen, People's Republic of China.
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, People's Republic of China.
- Chao Liu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, People's Republic of China.
- Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, People's Republic of China.
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, People's Republic of China.
- National Demonstration Center for Experimental Psychology Education, Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China.
- Yue-Jia Luo
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, People's Republic of China.
- Institute for Neuropsychological Rehabilitation, University of Health and Rehabilitation Sciences, Qingdao, People's Republic of China.
5
Flatebø S, Tran VNN, Wang CEA, Bongo LA. Social robots in research on social and cognitive development in infants and toddlers: A scoping review. PLoS One 2024; 19:e0303704. [PMID: 38748722] [PMCID: PMC11095739] [DOI: 10.1371/journal.pone.0303704]
Abstract
There is currently no systematic review of the growing body of literature on using social robots in early developmental research. Designing appropriate methods for early childhood research is crucial for broadening our understanding of young children's social and cognitive development. This scoping review systematically examines the existing literature on using social robots to study social and cognitive development in infants and toddlers aged between 2 and 35 months. Moreover, it aims to identify the research focus, findings, and reported gaps and challenges when using robots in research. We included empirical studies published between 1990 and May 29, 2023, searching for literature in PsycINFO, ERIC, Web of Science, and PsyArXiv. Twenty-nine studies met the inclusion criteria and were mapped using the scoping review method. Our findings reveal that most studies were quantitative, with experimental designs conducted in a laboratory setting where children were exposed to physically present or virtual robots in a one-to-one situation. Robots were used to investigate four main concepts: animacy, action understanding, imitation, and early conversational skills. Many studies focused on whether young children regard robots as agents or social partners. The studies demonstrated that young children could learn from and understand social robots in some situations, but not always; for instance, children's understanding of social robots was often facilitated by robots that behaved interactively and contingently. This scoping review highlights the need to design social robots that can engage in interactive and contingent social behaviors for early developmental research.
Affiliation(s)
- Solveig Flatebø
- Department of Psychology, UiT The Arctic University of Norway, Tromsø, Norway.
- Vi Ngoc-Nha Tran
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway.
- Lars Ailo Bongo
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway.
6
Guingrich RE, Graziano MSA. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front Psychol 2024; 15:1322781. [PMID: 38605842] [PMCID: PMC11008604] [DOI: 10.3389/fpsyg.2024.1322781]
Abstract
The question of whether artificial intelligence (AI) can be considered conscious, and therefore should be evaluated through a moral lens, has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, because interacting with such AI activates schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through the literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part, we detail how the mechanism of schema activation allows us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, drives behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is worth considering as well, regardless of AI's inherent conscious or moral status.
Affiliation(s)
- Rose E. Guingrich
- Department of Psychology, Princeton University, Princeton, NJ, United States.
- Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States.
- Michael S. A. Graziano
- Department of Psychology, Princeton University, Princeton, NJ, United States.
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States.
7
Dubois-Sage M, Jacquet B, Jamet F, Baratgin J. People with Autism Spectrum Disorder Could Interact More Easily with a Robot than with a Human: Reasons and Limits. Behav Sci (Basel) 2024; 14:131. [PMID: 38392485] [PMCID: PMC10886012] [DOI: 10.3390/bs14020131]
Abstract
Individuals with Autism Spectrum Disorder show deficits in communication and social interaction, as well as repetitive behaviors and restricted interests. Interacting with robots could benefit this population, notably by fostering communication and social interaction. Studies even suggest that people with Autism Spectrum Disorder may interact more easily with a robot partner than with a human partner. We review the benefits of robots and the reasons put forward to explain these results. The interest in robots appears to stem mainly from three of their characteristics: they can act as motivational tools, they are simplified agents, and their behavior is more predictable than that of a human. Nevertheless, many challenges remain in specifying the optimal conditions for using robots with individuals with Autism Spectrum Disorder.
Affiliation(s)
- Marion Dubois-Sage
- Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France.
- Baptiste Jacquet
- Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France.
- Association P-A-R-I-S, 75005 Paris, France.
- Frank Jamet
- Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France.
- Association P-A-R-I-S, 75005 Paris, France.
- UFR d'Éducation, CY Cergy Paris Université, 95000 Cergy-Pontoise, France.
- Jean Baratgin
- Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France.
- Association P-A-R-I-S, 75005 Paris, France.
8
Baumann AE, Goldman EJ, Cobos MGM, Poulin-Dubois D. Do preschoolers trust a competent robot pointer? J Exp Child Psychol 2024; 238:105783. [PMID: 37804786] [DOI: 10.1016/j.jecp.2023.105783]
Abstract
How young children learn from different informants has been widely studied. However, most studies investigate how children learn verbally conveyed information. Furthermore, most studies investigate how children learn from humans. This study sought to investigate how 3-year-old children learn from, and come to trust, a competent robot versus an incompetent human when competency is established using a pointing paradigm. During an induction phase, a robot informant pointed at a toy inside a transparent box, whereas a human pointed at an empty box. During the test phase, both agents pointed at opaque boxes. We found that young children asked the robot for help to locate a hidden toy more than the human (ask questions) and correctly identified the robot to be accurate (judgment questions). However, children equally endorsed the locations pointed at by both the robot and the human (endorse questions). This suggests that 3-year-olds are sensitive to the epistemic characteristics of the informant even when its displayed social properties are minimal.
9
Chung EYH, Kuen-Fung Sin K, Chow DHK. Effectiveness of Robotic Intervention on Improving Social Development and Participation of Children with Autism Spectrum Disorder - A Randomised Controlled Trial. J Autism Dev Disord 2024:10.1007/s10803-024-06236-2. [PMID: 38231380] [DOI: 10.1007/s10803-024-06236-2]
Abstract
Evidence-based robotic intervention programmes for children with autism spectrum disorder (ASD) have been limited. As yet, there is insufficient evidence to inform therapists, teachers, and service providers on the effectiveness of robotic intervention in enhancing the social development and participation of children with ASD in a real context. This study used a randomised controlled trial to test the efficacy of robotic intervention programmes in enhancing the social development and participation of children with ASD. Sixty children with ASD were randomly assigned to one of three groups: (1) a robotic intervention programme (n = 20), (2) a human-instructed programme (n = 20), or (3) a control group (n = 20). Both performance-based behavioural change in social communication and parent-reported change in social responsiveness were evaluated. Participants in the robotic intervention group demonstrated statistically significant changes in both the performance-based assessment and the parent-reported measure of social participation. Significant differences were found in communication and reciprocal social interaction scores between the experimental group and the control and comparison groups in the performance-based assessment (p < 0.01). The effectiveness of the robotic intervention programme in enhancing social communication and participation was thus confirmed. Future studies may consider adding a maintenance phase to document how the effects of the intervention carry over a longer period. (Clinical trial number: NCT04879303; date of registration: 10 May 2021.)
Affiliation(s)
- Eva Yin-Han Chung
- Faculty of Medicine, Health and Life Science, Swansea University, Room 311 Vivian Tower, Singleton Park, Swansea, SA2 8PP, UK.
- Department of Special Education and Counseling, The Education University of Hong Kong, Hong Kong, Hong Kong.
- Centre for Special Educational Needs and Inclusive Education, The Education University of Hong Kong, Hong Kong, Hong Kong.
- Kenneth Kuen-Fung Sin
- Department of Special Education and Counseling, The Education University of Hong Kong, Hong Kong, Hong Kong.
- Centre for Special Educational Needs and Inclusive Education, The Education University of Hong Kong, Hong Kong, Hong Kong.
- Daniel Hung-Kay Chow
- Department of Health and Physical Education, The Education University of Hong Kong, Hong Kong, Hong Kong.
10
Chen N, Hu X, Zhai Y. Effects of morality and reputation on sharing behaviors in human-robot teams. Front Psychol 2023; 14:1280127. [PMID: 38144990] [PMCID: PMC10739295] [DOI: 10.3389/fpsyg.2023.1280127]
Abstract
Introduction: The relationship between humans and robots is becoming increasingly close, and human-robot collaboration will become an inseparable part of work and life. Sharing, which involves distributing goods between oneself and others, makes individuals potential beneficiaries and entails possibly giving up one's own interests. In human teams, individual sharing behaviors are influenced by morality and reputation; however, their impact in human-robot collaborative teams remains unclear, since individuals may weigh morality and reputation differently when sharing with robot rather than human partners. In this study, three experiments were conducted using the dictator game paradigm to compare the effects and mechanisms of morality and reputation on sharing behaviors in human and human-robot teams. Methods: Experiments 1, 2, and 3 involved 18, 74, and 128 participants, respectively. Results: Experiment 1 validated the differences in human sharing behaviors when the partners were robots versus humans. Experiment 2 verified that moral constraints and reputation constraints affect sharing behaviors in human-robot teams. Experiment 3 further revealed the mechanism underlying these differences: reputation concern mediates the impact of moral constraint on sharing behaviors, and agent type moderates the impact of moral constraint on reputation concern and sharing behaviors. Discussion: These results contribute to a better understanding of the interaction mechanisms of human-robot teams. Future rules for human-robot collaborative teams and the design of interaction environments can consider the potential motivations of human behavior from both morality and reputation perspectives to achieve better work performance.
Affiliation(s)
- Na Chen
- School of Economics and Management, Beijing University of Chemical Technology, Beijing, China.
11
Maj K, Grzybowicz P, Drela WL, Olszanowski M. Touching a Mechanical Body: The Role of Anthropomorphic Framing in Physiological Arousal When Touching a Robot. Sensors (Basel) 2023; 23:5954. [PMID: 37447802] [PMCID: PMC10346885] [DOI: 10.3390/s23135954]
Abstract
The growing prevalence of social robots in various fields necessitates a deeper understanding of touch in Human-Robot Interaction (HRI). This study investigates how human-initiated touch influences physiological responses during interactions with robots, considering factors such as anthropomorphic framing of robot body parts and attributed gender. Two types of anthropomorphic framings are applied: the use of anatomical body part names and assignment of male or female gender to the robot. Higher physiological arousal was observed when touching less accessible body parts than when touching more accessible body parts in both conditions. Results also indicate that using anatomical names intensifies arousal compared to the control condition. Additionally, touching the male robot resulted in higher arousal in all participants, especially when anatomical body part names were used. This study contributes to the understanding of how anthropomorphic framing and gender impact physiological arousal in touch interactions with social robots, offering valuable insights for social robotics development.
Affiliation(s)
- Konrad Maj
- Faculty of Psychology, SWPS University, 03-815 Warsaw, Poland.
12
Schömbs S, Klein J, Roesler E. Feeling with a robot: the role of anthropomorphism by design and the tendency to anthropomorphize in human-robot interaction. Front Robot AI 2023; 10:1149601. [PMID: 37334072] [PMCID: PMC10272852] [DOI: 10.3389/frobt.2023.1149601]
Abstract
The implementation of anthropomorphic features of appearance and framing is widely supposed to increase empathy toward robots. However, recent research has mainly used tasks that are rather atypical of daily human-robot interactions, such as sacrificing or destroying robots. The scope of the current study was to investigate the influence of anthropomorphism by design on empathy and empathic behavior in a more realistic, collaborative scenario. In this online experiment, participants collaborated with either an anthropomorphic or a technical-looking robot and received either an anthropomorphic or a technical description of the respective robot. After task completion, we investigated situational empathy by displaying a choice scenario in which participants needed to decide whether to act empathically toward the robot (sign a petition or a guestbook for the robot) or non-empathically (leave the experiment). Subsequently, the perception of and empathy toward the robot were assessed. The results revealed no significant influence of anthropomorphism on empathy or participants' empathic behavior. However, an exploratory follow-up analysis indicates that the individual tendency to anthropomorphize might be crucial for empathy. This result strongly supports the importance of considering individual differences in human-robot interaction. Based on the exploratory analysis, we propose six items to be further investigated as an empathy questionnaire in HRI.
13
Naito M, Rea DJ, Kanda T. Hey Robot, Tell It to Me Straight: How Different Service Strategies Affect Human and Robot Service Outcomes. Int J Soc Robot 2023; 15:1-14. [PMID: 37359426] [PMCID: PMC10189699] [DOI: 10.1007/s12369-023-01013-0]
Abstract
With robots already entering simple service tasks in shops, it is important to understand how robots should perform customer service to increase customer satisfaction. We investigate two methods of customer service that we theorize are better suited to robots than to human shopkeepers: straight communication and data-driven communication. Along with a more traditional customer service style, we compare these methods performed by a robot to a human performing the same styles in three online studies with over 1,300 people. We find that while traditional customer service styles are best suited to human shopkeepers, robot shopkeepers using straight or data-driven styles increase customer satisfaction, make customers feel more informed, and feel more natural than when a human uses them. Our work highlights the need to investigate robot-specific best practices not only for customer service but for social interaction at large, as simply duplicating typical human-human interaction may not produce the best results for a robot.
14
Sommer K, Slaughter V, Wiles J, Nielsen M. Revisiting the video deficit in technology-saturated environments: Successful imitation from people, screens, and social robots. J Exp Child Psychol 2023; 232:105673. [PMID: 37068443] [DOI: 10.1016/j.jecp.2023.105673]
Abstract
The "video deficit" is a well-documented effect whereby children learn less well from information delivered via a screen than from the same information delivered in person. Research suggests that increasing social contingency may ameliorate this deficit. The current study added social contingency to screen-based information by embodying the screen within a socially interactive robot presented to urban Australian children with frequent exposure to screen-based communication. We found no differences between 22- to 26-month-old children's (N = 80) imitation of screen-based information embedded in a social robot and of in-person humans. Furthermore, we did not replicate the video deficit: children imitated at similar levels regardless of the presentation medium. This failure to replicate supports the findings of a recent meta-analysis of video deficit research, which shows a steady decrease over time in the magnitude of the effect. We postulate that, should the effect truly be dwindling, the video deficit may soon be a historical artifact as children increasingly perceive technology as relevant and meaningful in everyday life. This research finds that observational learning material can be successfully delivered in person, via a screen, or via a screen embedded in a social robot.
Affiliation(s)
- Kristyn Sommer
- School of Applied Psychology, Griffith University, Southport, Queensland 4215, Australia
- Virginia Slaughter
- Early Cognitive Development Centre, School of Psychology, University of Queensland, St Lucia, Queensland 4072, Australia
- Janet Wiles
- ARC Centre of Excellence for the Dynamics of Language, School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, Queensland 4067, Australia
- Mark Nielsen
- Early Cognitive Development Centre, School of Psychology, University of Queensland, St Lucia, Queensland 4072, Australia; Faculty of Humanities, University of Johannesburg, Auckland Park 2092, South Africa
|
15
|
Systematic Review of Affective Computing Techniques for Infant Robot Interaction. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00985-3]
Abstract
Research studies on social robotics and human-robot interaction have yielded insights into the factors that influence people's perceptions of, and behaviors towards, robots. However, adults' perceptions of robots may differ significantly from those of infants; consequently, extending this knowledge to infants' attitudes toward robots is a growing field of research. Indeed, infant-robot interaction (IRI) is emerging as a critical and necessary area of research as robots are increasingly used in social environments, for example in caring for infants with all types of disabilities, in companionship, and in education. Although studies have examined the ability of robots to positively engage infants, little is known about infants' affective state when interacting with a robot. In this systematic review, technologies for infant affective state recognition relevant to IRI applications are presented and surveyed. Adapting techniques currently employed for infant emotion recognition to the IRI field proves to be a complex task, since it requires a timely response while not interfering with the infant's behavior. These aspects have a crucial impact on the selection of emotion recognition techniques and the related metrics to be used for this purpose. This review is therefore intended to shed light on the advantages of, and the current research challenges in, infant affective state recognition approaches in the IRI field, to elucidate a roadmap for their use in forthcoming studies, and to potentially support future development of emotion-aware robots.
|
16
|
Higashino K, Kimoto M, Iio T, Shimohara K, Shiomi M. Is Politeness Better than Impoliteness? Comparisons of Robot's Encouragement Effects Toward Performance, Moods, and Propagation. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00971-9]
Abstract
This study experimentally compared the effects of encouragement delivered with polite, neutral, or impolite attitudes by a robot during a monotonous task from three viewpoints: performance, mood, and propagation. Experiment I investigated the effects of encouragement on performance and mood. Participants performed a monotonous task during which a robot continuously provided polite, neutral, or impolite encouragement. Results showed that polite and impolite encouragement improved performance significantly more than neutral comments, although there was no significant difference between polite and impolite encouragement. In addition, impolite encouragement caused significantly more negative moods than polite encouragement. Experiment II examined whether the robot's encouragement influenced the participants' own encouragement styles: participants took on the role the robot played in Experiment I, selecting polite, neutral, or impolite encouragement while observing a dummy participant's progress on a monotonous task. The results, which showed that the robot's encouragement significantly influenced the participants' encouragement styles, suggest that polite encouragement is more advantageous than impolite encouragement.
|
17
|
Perceptions of Behaviors Associated with ASD in Others: Knowledge of the Diagnosis Increases Empathy and Improves Perceptions of Warmth and Competence. Eur J Investig Health Psychol Educ 2022; 12:1594-1606. [DOI: 10.3390/ejihpe12110112]
Abstract
Individuals with Autism Spectrum Disorder (ASD) often exhibit atypical social behaviors that some may perceive as odd or discomforting. Given that ASD is largely invisible, it may be difficult to understand why a person is displaying these atypical behaviors, leading to less favorable attitudes. The current study examined whether having an explanation for an individual's behaviors associated with ASD could improve perceptions of warmth and competence, as well as the amount of empathy felt towards the individual. Participants (n = 82) were presented with a scenario involving two people, one of whom exhibited behaviors consistent with ASD. Diagnostic information was manipulated such that half of the participants were told that the target had been diagnosed with ASD, while the other half were given no diagnostic information. Afterwards, participants rated the target. Results indicated that having an explanation for the ASD-related behaviors led to higher ratings of warmth and competence and greater feelings of empathy. Furthermore, empathy mediated the relationship between having the diagnostic information and target ratings. Thus, having an explanation for someone's behavior may lead to greater empathy and improve perceptions and understanding. This has important implications for improving education and awareness about behaviors associated with ASD, as well as for deciding whether to disclose one's diagnosis.
|
18
|
Improving evaluations of advanced robots by depicting them in harmful situations. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107565]
|
19
|
Festerling J, Siraj I, Malmberg LE. Exploring children's exposure to voice assistants and their ontological conceptualizations of life and technology. AI & Society 2022:1-28. [PMID: 36276897] [PMCID: PMC9580440] [DOI: 10.1007/s00146-022-01555-3]
Abstract
Digital Voice Assistants (DVAs) have become a ubiquitous technology in today's home and childhood environments. Inspired by Bernstein and Crowley's original study (J Learn Sci 17:225-247, 2008; n = 60, age 4-7 years) on how children's ontological conceptualizations of life and technology were systematically associated with their real-world exposure to robotic entities, the current study explored this association for children in middle childhood (n = 143, age 7-11 years) with different levels of DVA-exposure. We analyzed correlational survey data from 143 parent-child dyads recruited on Amazon Mechanical Turk (MTurk). Children's ontological conceptualization patterns were measured by asking them to conceptualize nine prototypical organically living and technological entities (e.g., humans, cats, smartphones, DVAs) with respect to their biology, intelligence, and psychology. These patterns were then associated with children's DVA-exposure and additional control variables (e.g., technological affinity, demographic/individual characteristics). Compared with biology and psychology, intelligence was a weaker basis on which children differentiated between organically living and technological entities, and this differentiation pattern became more pronounced with technological affinity. There was some evidence that children with higher DVA-exposure differentiated more rigorously between organically living and technological entities on the basis of psychology. To the best of our knowledge, this is the first study exploring children's real-world exposure to DVAs and how it is associated with their conceptual understandings of life and technology. Findings suggest that, although psychological conceptualizations of technology may become more pronounced with DVA-exposure, it is far from clear that such tendencies blur ontological boundaries between life and technology from children's perspective.
Supplementary Information: The online version contains supplementary material available at 10.1007/s00146-022-01555-3.
Affiliation(s)
- Janik Festerling
- Department of Education, University of Oxford, 15 Norham Gardens, Oxford, OX2 6PY, UK
- Iram Siraj
- Department of Education, University of Oxford, 15 Norham Gardens, Oxford, OX2 6PY, UK
- Lars-Erik Malmberg
- Department of Education, University of Oxford, 15 Norham Gardens, Oxford, OX2 6PY, UK
|
20
|
Constantinescu M, Uszkai R, Vică C, Voinea C. Children-Robot Friendship, Moral Agency, and Aristotelian Virtue Development. Front Robot AI 2022; 9:818489. [PMID: 35991848] [PMCID: PMC9384694] [DOI: 10.3389/frobt.2022.818489]
Abstract
Social robots are increasingly developed for the companionship of children. In this article we explore the moral implications of children-robot friendships using the Aristotelian framework of virtue ethics. We adopt a moderate position and argue that, although robots cannot be virtue friends, they can nonetheless enable children to exercise ethical and intellectual virtues. The Aristotelian requirements for true friendship apply only partly to children: unlike adults, children relate to friendship as an educational play of exploration, which is constitutive of the way they acquire and develop virtues. We highlight a relevant difference between how we evaluate adult-robot friendship and children-robot friendship, rooted in the difference in moral agency and moral responsibility that generates the asymmetries in the moral status ascribed to adults versus children. We examine the role played by imaginary companions (ICs) and personified objects (POs) in children's moral development and claim that robots, understood as Personified Robotic Objects (PROs), play a role similar to that of such fictional entities, enabling children to exercise affection, moral imagination, and reasoning, thus contributing to their development as virtuous adults. Nonetheless, we argue that the adequate use of robots for children's moral development is conditioned by several requirements related to design, technology, and moral responsibility.
Affiliation(s)
- Mihaela Constantinescu
- CCEA, Faculty of Philosophy, University of Bucharest, Bucharest, Romania
- *Correspondence: Mihaela Constantinescu
- Radu Uszkai
- Department of Philosophy and Social Sciences, Bucharest University of Economic Studies, Bucharest, Romania
- Constantin Vică
- CCEA, Faculty of Philosophy, University of Bucharest, Bucharest, Romania
- Cristina Voinea
- Department of Philosophy and Social Sciences, Bucharest University of Economic Studies, Bucharest, Romania
|
21
|
Design, Development, and a Pilot Study of a Low-Cost Robot for Child–Robot Interaction in Autism Interventions. Multimodal Technologies and Interaction 2022. [DOI: 10.3390/mti6060043]
Abstract
Socially assistive robots are widely deployed in interventions with children on the autism spectrum, exploiting the benefits of this technology in social behavior intervention plans while reducing autistic behavior. Furthermore, innovations in modern technologies such as machine learning give these robots greater capabilities. Although the results of such deployments are promising, the total cost of these robots makes them unaffordable for some organizations, even as needs grow progressively. In this paper, a low-cost robot for autism interventions is proposed, benefiting from the advantages of machine learning and low-cost hardware. The mechanical design of the robot and the development of the machine learning models are presented. The robot was evaluated by a small group of educators for children with ASD. The results of the various model implementations, together with the design evaluation of the robot, are encouraging and indicate that this technology would be advantageous for deployment in child–robot interaction scenarios.
|
22
|
Innovation of Teaching Tools during Robot Programming Learning to Promote Middle School Students' Critical Thinking. Sustainability 2022. [DOI: 10.3390/su14116625]
Abstract
In the digital age, robotics education has gained much attention for cultivating learners' design thinking, creative thinking, critical thinking, and cooperative abilities. In particular, critical thinking, one of the key competencies in Education for Sustainable Development (ESD), can stimulate imagination and creation, so exploring how to cultivate it in robot programming learning is of great value. This study therefore applied different teaching tools, using the unit "making a manipulator through programming and construction" from a robotics course as the experimental context, to examine the promotion of learners' critical thinking. Before the experiment, a pre-test measured students' critical thinking ability. All students were then divided randomly into two groups: an experimental group taught with a Construction–Criticism–Migration (CCM) instructional design, and a control group taught with a traditional demonstrate–practice instructional design. After the 6-week experiment, the critical thinking measure was administered as a post-test. SPSS was used to conduct independent-samples t tests and one-way ANOVA to explore whether students' critical thinking ability had improved and whether the experimental and control groups differed. The results showed that the experimental group's critical thinking ability improved significantly, whereas no significant pre-post difference was found for the control group, and a significant difference existed between the two groups. This study provides an example of a new instructional design for teaching robot programming and offers valuable suggestions for instructors in middle schools.
|
23
|
Penčić M, Čavić M, Oros D, Vrgović P, Babković K, Orošnjak M, Čavić D. Anthropomorphic Robotic Eyes: Structural Design and Non-Verbal Communication Effectiveness. Sensors (Basel) 2022; 22:3060. [PMID: 35459046] [PMCID: PMC9024502] [DOI: 10.3390/s22083060]
Abstract
This paper presents the structure of a mechanical system with 9 DOFs for driving robot eyes, as well as the system's ability to produce facial expressions. It consists of three subsystems that move the eyeballs, eyelids, and eyebrows independently of the rest of the face. Due to its structure, the eyeball mechanism can reproduce all of the motions human eyes are capable of, an important condition for realizing the binocular function of artificial robot eyes, as well as stereovision. From a kinematic standpoint, the mechanical systems of the eyeballs, eyelids, and eyebrows are highly capable of generating the movements of the human eye. A control system structure is proposed with the goal of realizing the desired motion of the output links of the mechanical systems. The mechanical system is also rated on how well it enables the robot to generate non-verbal emotional content, which is why an experiment was conducted. For this, the face of the human-like robot MARKO was used, covered with a face mask to focus the participants on the eye region. The participants evaluated the efficiency of the robot's non-verbal communication, with certain emotions achieving a high rate of recognition.
Affiliation(s)
- Marko Penčić
- Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, 21000 Novi Sad, Serbia; (M.Č.); (D.O.); (P.V.); (K.B.); (M.O.); (D.Č.)
|
24
|
Children selectively demonstrate their competence to a puppet when others depict it as an agent. Cognitive Development 2022. [DOI: 10.1016/j.cogdev.2022.101186]
|
25
|
Law T, Chita-Tegmark M, Rabb N, Scheutz M. Examining attachment to robots: Benefits, challenges, and alternatives. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3526105]
Abstract
Potential applications of robots in private and public human spaces have prompted the design of so-called "social robots" that can interact with humans in social settings and potentially cause humans to become attached to them. The focus of this paper is an analysis of the possible benefits and challenges arising from such human-robot attachment as reported in the HRI literature, followed by guidelines for the use and design of robots that might elicit attachment bonds. We start by analyzing the potential benefits of humans becoming attached to robots, which might include more natural interaction, greater effectiveness and acceptance of the robot, social companionship, and well-being for the human. Turning to the potential risks associated with human-robot attachment, we discuss the possibly suboptimal use of the robot in the most benign cases, but also the potential formation of unidirectional emotional bonds and the potential for deception and subconscious influence of the robot on the person in more severe cases. The upshot of the analysis is a recommendation to reconceptualize relationships with social robots in an attempt to retain the potential benefits of human-robot attachment while mitigating (to the extent possible) its downsides.
|
26
|
British Children’s and Adults’ Perceptions of Robots. Human Behavior and Emerging Technologies 2022. [DOI: 10.1155/2022/3813820]
Abstract
Robotics and artificial intelligence (AI) systems are quickly becoming a familiar part of everyday life, yet we know very little about how children and adults perceive the abilities of different robots and whether these ascriptions are associated with a willingness to interact with them. In the current study, we asked British children aged 4-13 years and British adults to complete an online experiment. Participants were asked to describe what a robot looks like, give their preference for various types of robots (a social robot, a machine-like robot, and a human-like robot), and answer whether they were willing to engage in different activities with the different robots. Results showed that younger children (4-8 years old) were more willing to engage with robots than older children (9-13 years) and adults. Specifically, younger children were more likely than older children and adults to see robots as kind and to rate the social robot as helpful. This is also the first study to examine preferences for robots engaging in religious activities: British adults prefer humans over robots to pray for them, but such biases may not generally apply to children. These results provide new insight into how children and adults in the United Kingdom accept the presence and function of robots.
|
27
|
Carpendale JIM, Parnell VL, Wallbridge B. Conceptualizations of Knowledge in Structuring Approaches to Moral Development: A Process-Relational Approach. Front Psychol 2021; 12:756654. [PMID: 34975648] [PMCID: PMC8716751] [DOI: 10.3389/fpsyg.2021.756654]
Abstract
Like other aspects of child development, views of the nature and development of morality depend on philosophical assumptions or worldviews presupposed by researchers. We analyze assumptions regarding knowledge linked to two contrasting worldviews: Cartesian-split-mechanistic and process-relational. We examine the implications of these worldviews for approaches to moral development, including relations between morality and social outcomes, and the concepts of information, meaning, interaction and computation. It is crucial to understand how researchers view these interrelated concepts in order to understand approaches to moral development. Within the Cartesian-split-mechanistic worldview, knowledge is viewed as representation and meaning is mechanistic and fixed. Both nativism and empiricism are based in this worldview, differing in whether the source of representations is assumed to be primarily internal or external. Morality is assumed to pre-exist, either in the genome or the culture. We discuss problems with these conceptions and endorse the process-relational paradigm, according to which knowledge is constructed through interaction, and morality begins in activity as a process of coordinating perspectives, rather than the application of fixed rules. The contrast is between beginning with the mind or beginning with social activity in explaining the mind.
|
28
|
Anthropomorphizing Technology: A Conceptual Review of Anthropomorphism Research and How it Relates to Children's Engagements with Digital Voice Assistants. Integr Psychol Behav Sci 2021; 56:709-738. [PMID: 34811705] [PMCID: PMC9334403] [DOI: 10.1007/s12124-021-09668-y]
Abstract
‘Anthropomorphism’ is a popular term in the literature on human-technology engagements, in general, and child-technology engagements, in particular. But what does it really mean to ‘anthropomorphize’ something in today’s world? This conceptual review article, addressed to researchers interested in anthropomorphism and adjacent areas, reviews contemporary anthropomorphism research, and it offers a critical perspective on how anthropomorphism research relates to today’s children who grow up amid increasingly intelligent and omnipresent technologies, particularly digital voice assistants (e.g., Alexa, Google Assistant, Siri). First, the article reviews a comprehensive body of quantitative as well as qualitative anthropomorphism research and considers it within three different research perspectives: descriptive, normative and explanatory. Following a brief excursus on philosophical pragmatism, the article then discusses each research perspective from a pragmatistic viewpoint, with a special emphasis on child-technology and child-voice-assistant engagements, and it also challenges some popular notions in the literature. These notions include descriptive ‘as if’ parallels (e.g., child behaves ‘as if’ Alexa was a friend), or normative assumptions that human-human engagements are generally superior to human-technology engagements. Instead, the article reviews different examples from the literature suggesting the nature of anthropomorphism may change as humans’ experiential understandings of humanness change, and this may particularly apply to today’s children as their social cognition develops in interaction with technological entities which are increasingly characterized by unprecedented combinations of human and non-human qualities.
|
29
|
Girouard‐Hallam LN, Streble HM, Danovitch JH. Children's mental, social, and moral attributions toward a familiar digital voice assistant. Human Behavior and Emerging Technologies 2021. [DOI: 10.1002/hbe2.321]
Affiliation(s)
- Hailey M. Streble
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, USA
- Judith H. Danovitch
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, USA
|
30
|
Quintelier KJP, van Hugten J, Parmar BL, Brokerhof IM. Humanizing Stakeholders by Rethinking Business. Front Psychol 2021; 12:687067. [PMID: 34630203] [PMCID: PMC8493002] [DOI: 10.3389/fpsyg.2021.687067]
Abstract
Can business humanize its stakeholders? And if so, how does this relate to moral consideration for stakeholders? In this paper, we compare two business orientations that are relevant for current business theory and practice: a stakeholder orientation and a profit orientation. We empirically investigate the causal relationships between business orientation, humanization, and moral consideration, reporting the results of six experiments that use different operationalizations of stakeholder and profit orientations, different stakeholders (employees, suppliers, labor unions), and different participant samples. Our findings support the prediction that individual stakeholders observing a stakeholder-oriented firm see the firm's other stakeholders as more human than individual stakeholders observing a profit-oriented firm. This humanization, in turn, increases individual stakeholders' moral consideration for the firm's other stakeholders. Our findings underscore the importance of humanization for stakeholders' moral consideration for each other. This paper contributes to a deeper understanding of the firm as a moral community of stakeholders. Specifically, we move away from a focus on managers and how they can make business more moral, directing attention instead to (other) stakeholders and how business can make these stakeholders more moral.
Affiliation(s)
- Katinka J P Quintelier
- Department of Management & Organization, School of Business and Economics, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Joeri van Hugten
- Department of Management & Organization, School of Business and Economics, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Bidhan L Parmar
- The Darden School of Business, The University of Virginia, Charlottesville, VA, United States
- Inge M Brokerhof
- Faculty of Management Studies, The Open University of The Netherlands, Heerlen, Netherlands
|
31
|
McHugh SR, Callanan MA, Weatherwax K, Jipson JL, Takayama L. Unusual artifacts: Linking parents' STEM background and children's animacy judgments to parent–child play with robots. Human Behavior and Emerging Technologies 2021. [DOI: 10.1002/hbe2.286]
|
32
|
Flanagan T, Rottman J, Howard LH. Constrained Choice: Children's and Adults' Attribution of Choice to a Humanoid Robot. Cogn Sci 2021; 45:e13043. [PMID: 34606132] [DOI: 10.1111/cogs.13043]
Abstract
Young children, like adults, understand that human agents can flexibly choose different actions in different contexts, and they evaluate these agents based on such choices. However, little is known about children's tendencies to attribute the capacity to choose to robots, despite increased contact with robotic agents. In this paper, we compare 5- to 7-year-old children's and adults' attributions of free choice to a robot and to a human child by using a series of tasks measuring agency attribution, action prediction, and choice attribution. In morally neutral scenarios, children ascribed similar levels of free choice to the robot and the human, while adults were more likely to ascribe free choice to the human. For morally relevant scenarios, however, both age groups considered the robot's actions to be more constrained than the human's actions. These findings demonstrate that children and adults hold a nuanced understanding of free choice that is sensitive to both the agent type and constraints within a given scenario.
|
33
|
Aguiar NR. A paradigm for assessing adults' and children's concepts of artificially intelligent virtual characters. Human Behavior and Emerging Technologies 2021. [DOI: 10.1002/hbe2.283]
|
34
|
Geiger AR, Balas B. Robot faces elicit responses intermediate to human faces and objects at face-sensitive ERP components. Sci Rep 2021; 11:17890. [PMID: 34504241] [PMCID: PMC8429544] [DOI: 10.1038/s41598-021-97527-6]
Abstract
Face recognition is supported by selective neural mechanisms that are sensitive to various aspects of facial appearance. These include event-related potential (ERP) components like the P100 and the N170, which exhibit different patterns of selectivity for various aspects of facial appearance. Examining the boundary between faces and non-faces using these responses is one way to develop a more robust understanding of the representation of faces in extrastriate cortex and to determine what critical properties an image must possess to be considered face-like. Robot faces are a particularly interesting stimulus class to examine because they can differ markedly from human faces in shape, surface properties, and the configuration of facial features, yet they are interpreted as social agents in a range of settings. In the current study, we therefore investigated how ERP responses to robot faces may differ from responses to human faces and non-face objects. In two experiments, we examined how the P100 and N170 responded to human faces, robot faces, and non-face objects (clocks). In Experiment 1, we found that robot faces elicit responses from face-sensitive components intermediate between those to non-face objects (clocks) and those to both real human faces and artificial human faces (computer-generated faces and dolls). These results suggest that while human-like inanimate faces (CG faces and dolls) are processed much like real faces, robot faces are dissimilar enough to human faces to be processed differently. In Experiment 2, we found that the face inversion effect was only partly evident in robot faces. We conclude that robot faces are an intermediate stimulus class that offers insight into the perceptual and cognitive factors that affect how social agents are identified and categorized.
Affiliation(s)
- Allie R Geiger: Department of Psychology, North Dakota State University, Fargo, ND, 58102, USA
- Benjamin Balas: Department of Psychology, North Dakota State University, Fargo, ND, 58102, USA
|
35
|
Fong FT, Sommer K, Redshaw J, Kang J, Nielsen M. The man and the machine: Do children learn from and transmit tool-use knowledge acquired from a robot in ways that are comparable to a human model? J Exp Child Psychol 2021; 208:105148. [DOI: 10.1016/j.jecp.2021.105148] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2020] [Revised: 03/03/2021] [Accepted: 03/04/2021] [Indexed: 11/16/2022]
|
36
|
Calvo-Barajas N, Elgarf M, Perugia G, Paiva A, Peters C, Castellano G. Hurry Up, We Need to Find the Key! How Regulatory Focus Design Affects Children's Trust in a Social Robot. Front Robot AI 2021; 8:652035. [PMID: 34307468 PMCID: PMC8297465 DOI: 10.3389/frobt.2021.652035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 06/14/2021] [Indexed: 11/13/2022] Open
Abstract
In educational scenarios involving social robots, understanding how robot behaviors affect children's motivation to achieve their learning goals is of vital importance. It is crucial for the formation of a trust relationship between the child and the robot so that the robot can effectively fulfill its role as a learning companion. In this study, we investigate the effect of a regulatory focus design scenario on the way children interact with a social robot. Regulatory focus theory describes a form of self-regulation that involves specific strategies in the pursuit of goals. It provides insights into how a person achieves a particular goal, either through a strategy focused on "promotion" that aims to achieve positive outcomes or through one focused on "prevention" that aims to avoid negative outcomes. In a user study, 69 children (7-9 years old) played a goal-oriented collaborative game, designed around regulatory focus, with the EMYS robot. We assessed children's perception of likability and competence and their trust in the robot, as well as their willingness to follow the robot's suggestions when pursuing a goal. Results showed that children perceived the prevention-focused robot as more likable than the promotion-focused robot. We observed that a regulatory focus design did not directly affect trust. However, the perception of likability and competence was positively correlated with children's trust but negatively correlated with children's acceptance of the robot's suggestions.
Affiliation(s)
- Natalia Calvo-Barajas: Uppsala Social Robotics Lab, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Maha Elgarf: Embodied Social Agents Lab (ESAL), School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Giulia Perugia: Uppsala Social Robotics Lab, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Ana Paiva: Department of Computer Science and Engineering, Instituto Superior Técnico (IST), University of Lisbon, Lisbon, Portugal
- Christopher Peters: Embodied Social Agents Lab (ESAL), School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Ginevra Castellano: Uppsala Social Robotics Lab, Department of Information Technology, Uppsala University, Uppsala, Sweden
|
37
|
|
38
|
Keijsers M, Bartneck C, Eyssel F. Pay Them No Mind: the Influence of Implicit and Explicit Robot Mind Perception on the Right to be Protected. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00799-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Mind perception is a fundamental part of anthropomorphism and has recently been suggested to be a dual process. The current research studied the influence of implicit and explicit mind perception on a robot's right to be protected from abuse, both in terms of participants condemning abuse that befell the robot and in terms of participants' tendency to humiliate the robot themselves. Results indicated that the acceptability of robot abuse can be manipulated through explicit mind perception, but were inconclusive about the influence of implicit mind perception. Interestingly, explicit attribution of mind to the robot did not make people less likely to mistreat it. This suggests that the relationship between a robot's perceived mind and its right to protection is far from straightforward, with implications for researchers and engineers who want to tackle the issue of robot abuse.
|
39
|
Parent reports of children's parasocial relationships with conversational agents: Trusted voices in children's lives. HUMAN BEHAVIOR AND EMERGING TECHNOLOGIES 2021. [DOI: 10.1002/hbe2.271] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
40
|
Banks J. From Warranty Voids to Uprising Advocacy: Human Action and the Perceived Moral Patiency of Social Robots. Front Robot AI 2021; 8:670503. [PMID: 34124176 PMCID: PMC8194253 DOI: 10.3389/frobt.2021.670503] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Accepted: 05/12/2021] [Indexed: 11/23/2022] Open
Abstract
Moral status can be understood along two dimensions: moral agency [capacities to be and do good (or bad)] and moral patiency (extents to which entities are objects of moral concern), where the latter especially has implications for how humans accept or reject machine agents into human social spheres. As there is currently limited understanding of how people innately understand and imagine the moral patiency of social robots, this study inductively explores key themes in how robots may be subject to humans' (im)moral action across 12 valenced foundations in the moral matrix: care/harm, fairness/unfairness, loyalty/betrayal, authority/subversion, purity/degradation, liberty/oppression. Findings indicate that people can imagine clear dynamics by which anthropomorphic, zoomorphic, and mechanomorphic robots may benefit and suffer at the hands of humans (e.g., affirmations of personhood, compromising bodily integrity, veneration as gods, corruption by physical or information interventions). Patterns across the matrix are interpreted to suggest that moral patiency may be a function of whether people diminish or uphold the ontological boundary between humans and machines, though even moral upholdings bear notes of utilitarianism.
Affiliation(s)
- Jaime Banks: College of Media & Communication, Texas Tech University, Lubbock, TX, United States
|
41
|
Okanda M, Taniguchi K, Wang Y, Itakura S. Preschoolers' and adults' animism tendencies toward a humanoid robot. COMPUTERS IN HUMAN BEHAVIOR 2021. [DOI: 10.1016/j.chb.2021.106688] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
42
|
Lee JG, Lee J, Lee D. Cheerful encouragement or careful listening: The dynamics of robot etiquette at Children's different developmental stages. COMPUTERS IN HUMAN BEHAVIOR 2021. [DOI: 10.1016/j.chb.2021.106697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
43
|
van Straten CL, Peter J, Kühne R, Barco A. The wizard and I: How transparent teleoperation and self-description (do not) affect children’s robot perceptions and child-robot relationship formation. AI & SOCIETY 2021. [DOI: 10.1007/s00146-021-01202-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
It has been well documented that children perceive robots as social, mental, and moral others. Studies on child-robot interaction may encourage this perception of robots, first, by using a Wizard of Oz (i.e., teleoperation) set-up and, second, by having robots engage in self-description. However, much remains unknown about the effects of transparent teleoperation and self-description on children's perception of, and relationship formation with, a robot. To address this research gap initially, we conducted an experimental study with a 2 × 2 (teleoperation: overt/covert; self-description: yes/no) between-subjects design in which 168 children aged 7-10 interacted with a Nao robot once. Transparency about the teleoperation procedure decreased children's perceptions of the robot's autonomy and anthropomorphism. Self-description reduced the degree to which children perceived the robot as being similar to themselves. Transparent teleoperation and self-description affected neither children's perceptions of the robot's animacy and social presence nor their closeness to and trust in the robot.
|
44
|
Can Robots Help Working Parents with Childcare? Optimizing Childcare Functions for Different Parenting Characteristics. Int J Soc Robot 2021; 14:193-211. [PMID: 33841588 PMCID: PMC8020072 DOI: 10.1007/s12369-021-00784-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/23/2021] [Indexed: 11/16/2022]
Abstract
Is it true that parents always prioritize educational effectiveness when selecting childcare services? The current study identified the potential requirements of dual-income parents toward social robots’ diverse childcare functions (e.g., socialization, education, entertainment, and consultation). The results revealed that parental attitudes toward robots were made more positive by all the childcare functions of robots except for their educational features. Furthermore, parents’ expectations of childcare functions varied based on their parenting characteristics. Spectral clustering analysis identified distinctive parenting styles (e.g., family-oriented, work-oriented, noninterventional, and dominant), and multigroup structural equation modeling suggested that the impact of robots’ socialization function was significant in all parent groups, while other childcare functions exerted limited influence according to specific parenting styles. In addition, children’s characteristics were found to alter parents’ preferences for each childcare function. These results offer practical implications for the early adoption of childcare robots through predetermining parents’ acceptability based on their specific parenting characteristics.
|
45
|
Okanda M, Taniguchi K. Is a robot a boy? Japanese children’s and adults’ gender-attribute bias toward robots and its implications for education on gender stereotypes. COGNITIVE DEVELOPMENT 2021. [DOI: 10.1016/j.cogdev.2021.101044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
46
|
Laakasuo M, Palomäki J, Köbis N. Moral Uncanny Valley: A Robot’s Appearance Moderates How its Decisions are Judged. Int J Soc Robot 2021. [DOI: 10.1007/s12369-020-00738-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Artificial intelligence and robotics are rapidly advancing. Humans are increasingly often affected by autonomous machines making choices with moral repercussions. At the same time, classical research in robotics shows that people are averse to robots that appear eerily human, a phenomenon commonly referred to as the uncanny valley effect. Yet, little is known about how machines' appearances influence how humans evaluate their moral choices. Here we integrate the uncanny valley effect into moral psychology. In two experiments we test whether humans evaluate identical moral choices made by robots differently depending on the robots' appearance. Participants evaluated either deontological ("rule based") or utilitarian ("consequence based") moral decisions made by different robots. The results provide a first indication that people evaluate moral choices by robots that resemble humans as less moral compared to the same moral choices made by humans or non-human robots: a moral uncanny valley effect. We discuss the implications of our findings for moral psychology, social robotics, and AI-safety policy.
|
47
|
Stower R, Calvo-Barajas N, Castellano G, Kappas A. A Meta-analysis on Children’s Trust in Social Robots. Int J Soc Robot 2021. [DOI: 10.1007/s12369-020-00736-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Although research on children's trust in social robots is growing in popularity, a systematic understanding of the factors that influence children's trust in robots is lacking. In addition, meta-analyses in child-robot interaction (cHRI) have yet to be widely adopted as a method for synthesising results. We therefore conducted a meta-analysis aimed at identifying factors influencing children's trust in robots. We constructed four meta-analytic models based on 20 identified studies, drawn from an initial pool of 414 papers, as a means of investigating the effect of robot embodiment and behaviour on both social and competency trust. Children's pro-social attitudes towards social robots were also explored. There was tentative evidence to suggest that more human-like attributes lead to less competency trust in robots. In addition, we found a trend towards the type of measure used (subjective or objective) influencing the direction of effects for social trust. The meta-analysis also revealed a tendency towards under-powered designs, as well as variation in the methods and measures used to define trust. Nonetheless, we demonstrate that it is still possible to perform rigorous analyses despite these challenges. We also provide concrete methodological recommendations for future research, such as simplifying experimental designs, conducting a priori power analyses, and clearer statistical reporting.
|
48
|
Michaelis JE, Mutlu B. Reading socially: Transforming the in-home reading experience with a learning-companion robot. Sci Robot 2018; 3(21):eaat5999. [PMID: 33141721 DOI: 10.1126/scirobotics.aat5999] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2018] [Accepted: 07/25/2018] [Indexed: 01/21/2023]
Abstract
Social robots hold great promise as companions and peer learners for children, yet little is known about how they can be best designed for this population, what interaction scenarios can benefit from their use, and how they might fit into learning activities and environments. We aimed to close this gap by designing a learning-companion robot to augment guided reading activity and examined the robot's impact on an in-home reading experience. In this paper, we compared the experiences of early adolescent children aged 10 to 12 years (N = 24) who completed guided reading activities either with a learning-companion robot or as a paper-based activity in a 2-week-long, in-home field study. We found similar reading frequency and duration in both conditions and that both guided reading activities were described as positive experiences that helped to build reading skill and to sustain engagement. Children who read with the learning-companion robot further reported that the activities supported reading comprehension and motivated them to read and indicated a deepening social connection (i.e., companionship or affiliation) with the robot. We conclude that, rather than the activity falling off after a novelty effect, our simple prototype social robot is capable of preserving the benefits of an existing in-home learning activity while transforming the reading experience into a valuable, social one. Our findings contribute to an understanding of how we might capitalize on the capacity of social robots to serve as a transformative learning tool as robots become more widely available to the public.
Affiliation(s)
- Joseph E Michaelis: Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA
- Bilge Mutlu: Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI 53706, USA
|
49
|
Differential Facial Articulacy in Robots and Humans Elicit Different Levels of Responsiveness, Empathy, and Projected Feelings. ROBOTICS 2020. [DOI: 10.3390/robotics9040092] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Life-like humanoid robots are on the rise, aiming at communicative purposes that resemble humanlike conversation. In human social interaction, the facial expression serves important communicative functions. We examined whether a robot's face is similarly important in human-robot communication. Based on emotion research and neuropsychological insights on the parallel processing of emotions, we argue that greater plasticity in the robot's face elicits higher affective responsivity, more closely resembling human-to-human responsiveness than a more static face. We conducted a 3 (facial plasticity: human vs. facially flexible robot vs. facially static robot) × 2 (treatment: affectionate vs. maltreated) between-subjects experiment. Participants (N = 265; mean age = 31.5) were measured for their emotional responsiveness, empathy, and attribution of feelings to the robot. Results showed empathically and emotionally less intensive responsivity toward the robots than toward the human, but following similar patterns. Significantly different intensities of feelings and attributions (e.g., pain upon maltreatment) followed facial articulacy. Theoretical implications for underlying processes in human-robot communication are discussed. We theorize that the precedence of emotion and affect over cognitive reflection, which are processed in parallel, triggers the experience of 'because I feel, I believe it's real,' despite being aware of communicating with a robot. By evoking emotional responsiveness, the cognitive awareness of 'it is just a robot' fades into the background and appears no longer relevant.
|
50
|
Shiomi M, Okumura S, Kimoto M, Iio T, Shimohara K. Two is better than one: Social rewards from two agents enhance offline improvements in motor skills more than single agent. PLoS One 2020; 15:e0240622. [PMID: 33147230 PMCID: PMC7641341 DOI: 10.1371/journal.pone.0240622] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Accepted: 09/29/2020] [Indexed: 11/18/2022] Open
Abstract
Social rewards, such as praise from others, enhance offline improvements in human motor skills. Does praise from artificial beings, e.g., computer-graphics-based agents (displayed agents) and robots (collocated agents), also enhance offline improvements in motor skills as effectively as praise from humans? This paper answers this question with an experiment conducted over two consecutive days. We investigated the effect of the number of agents and their sense of presence on offline improvement in motor skills, because these are essential factors in shaping social effects and people's behaviors in human-agent and human-robot interaction. Our 96 participants performed a finger-tapping task. Those who received praise from two agents showed significantly better offline motor-skill improvement than those who were praised by just one agent and those who received no praise. However, we identified no significant effects related to the sense of presence.
Affiliation(s)
- Masahiro Shiomi: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan
- Soto Okumura: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan; Department of Information Systems Design, Doshisha University, Kyoto, Japan
- Mitsuhiko Kimoto: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan; Department of Information and Computer Science, Keio University, Kanagawa, Japan
- Takamasa Iio: Department of Agent Interaction Design Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan; Social Cognitive Engineering Laboratory, University of Tsukuba, Ibaraki, Japan; JST PRESTO, Japan
|