1
Zhang X, Lu B, Chen C, Yang L, Chen W, Yao D, Hou J, Qiu J, Li F, Xu P. The correlation between upper body grip strength and resting-state EEG network. Med Biol Eng Comput 2023. [PMID: 37338738] [DOI: 10.1007/s11517-023-02865-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/27/2022] [Accepted: 06/07/2023] [Indexed: 06/21/2023]
Abstract
Current research in the field of neuroscience primarily focuses on the analysis of electroencephalogram (EEG) activities associated with movement within the central nervous system. However, there is a dearth of studies investigating the impact of prolonged individual strength training on the resting state of the brain. It is therefore important to examine the correlation between upper body grip strength and resting-state EEG networks. In this study, coherence analysis was used to construct resting-state EEG networks from the available datasets. A multiple linear regression model was established to examine the correlation between individuals' brain network properties and their maximum voluntary contraction (MVC) during gripping tasks, and the model was then used to predict individual MVC. In the beta and gamma frequency bands, resting-state network (RSN) connectivity was significantly correlated with MVC (p < 0.05), particularly for left-hemisphere frontoparietal and fronto-occipital connections. RSN properties were consistently correlated with MVC in both bands, with correlation coefficients greater than 0.60 (p < 0.01). Additionally, predicted MVC correlated positively with actual MVC, with a correlation coefficient of 0.70 and a root mean square error of 5.67 (p < 0.01). These results show that the resting-state EEG network is closely related to upper body grip strength, and that an individual's muscle strength can be indirectly inferred from the resting brain network.
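The pipeline this abstract describes (coherence-based network construction, then multiple linear regression on network properties to predict MVC) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the channel count, band limits, features, and regression weights are all assumptions, and `scipy.signal.coherence` stands in for whatever coherence estimator the study actually used.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(42)

def coherence_network(eeg, fs=250.0, band=(13.0, 30.0)):
    """Pairwise mean magnitude-squared coherence within a frequency band.

    eeg: array of shape (n_channels, n_samples). Returns a symmetric
    (n_channels, n_channels) adjacency matrix -- one way to build the
    coherence-based resting-state network the abstract refers to.
    """
    n_ch = eeg.shape[0]
    adj = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=256)
            sel = (f >= band[0]) & (f <= band[1])
            adj[i, j] = adj[j, i] = cxy[sel].mean()
    return adj

# Toy "recording": 4 channels, 20 s at 250 Hz. Channels 0 and 1 share
# beta-band (13-30 Hz) components, so their beta coherence should exceed
# that of the independent-noise pair (channels 2 and 3).
fs, n_samp = 250.0, 5000
t = np.arange(n_samp) / fs
shared = sum(np.sin(2 * np.pi * f0 * t) for f0 in (14.0, 18.0, 22.0, 26.0))
eeg = rng.standard_normal((4, n_samp))
eeg[0] += 1.5 * shared
eeg[1] += 1.5 * shared
adj = coherence_network(eeg, fs=fs)

# Multiple linear regression of MVC on network features (ordinary least
# squares with an intercept), mirroring the prediction step. The feature
# matrix and weights here are synthetic placeholders, not study values.
n_subj = 40
X = rng.standard_normal((n_subj, 3))            # e.g. 3 network properties
mvc = X @ np.array([4.0, -2.0, 1.5]) + 30.0 + 0.1 * rng.standard_normal(n_subj)
design = np.column_stack([X, np.ones(n_subj)])  # append intercept column
w_hat, *_ = np.linalg.lstsq(design, mvc, rcond=None)
pred = design @ w_hat
r = np.corrcoef(pred, mvc)[0, 1]                # predicted-vs-actual correlation
rmse = np.sqrt(np.mean((pred - mvc) ** 2))      # root mean square error
```

In the actual study, graph-theoretic properties of the coherence network (e.g., connectivity strength of frontoparietal edges) would take the place of the synthetic feature matrix `X`.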
Affiliation(s)
- Xiabing Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Bin Lu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Chunli Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lei Yang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Wanjun Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Research Unit of NeuroInformation, Chinese Academy of Medical Sciences, Chengdu, 611731, China
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Jingming Hou
- Department of Rehabilitation, Southwest Hospital, Army Medical University, Chongqing, 400038, China
- Jing Qiu
- Robotics Research Center, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Research Unit of NeuroInformation, Chinese Academy of Medical Sciences, Chengdu, 611731, China
- Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, Sichuan, China
- School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Research Unit of NeuroInformation, Chinese Academy of Medical Sciences, Chengdu, 611731, China
- Radiation Oncology Key Laboratory of Sichuan Province, Chengdu, 610041, China
2
Tidoni E, Holle H, Scandola M, Schindler I, Hill L, Cross ES. Human but not robotic gaze facilitates action prediction. iScience 2022; 25:104462. [PMID: 35707718] [PMCID: PMC9189121] [DOI: 10.1016/j.isci.2022.104462] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 02/15/2022] [Revised: 05/05/2022] [Accepted: 05/17/2022] [Indexed: 01/08/2023] [Open Access]
Abstract
Do people ascribe intentions to humanoid robots as they would to humans or non-human-like animated objects? In six experiments, we compared people's ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from gaze and directional cues performed by humans, human-like robots, and a non-human-like object. People were faster to infer the mental content of human agents than of robotic agents. Furthermore, although the absence of differences in control conditions rules out the use of non-mentalizing strategies, the human-like appearance of non-human agents may engage mentalizing processes to solve the task. Overall, the results suggest that human-like robotic actions may be processed differently from the behavior of humans and of objects. These findings inform our understanding of the relevance of an object's physical features in triggering mentalizing abilities and its relevance for human–robot interaction.
Highlights:
- People differently ascribe mental content to human-like and non-human-like agents
- A human-like shape may automatically engage mentalizing processes
- Human actions are interpreted faster than non-human actions
3
Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3526112] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 01/09/2023]
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies conducted so far exhibit considerable diversity in how the phenomenon is described and how it is approached theoretically and methodologically. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogenous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states, which appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for its investigation.
4
Kemmerer D. What modulates the Mirror Neuron System during action observation? Multiple factors involving the action, the actor, the observer, the relationship between actor and observer, and the context. Prog Neurobiol 2021; 205:102128. [PMID: 34343630] [DOI: 10.1016/j.pneurobio.2021.102128] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 03/24/2021] [Revised: 06/23/2021] [Accepted: 07/29/2021] [Indexed: 01/03/2023]
Abstract
Seeing an agent perform an action typically triggers a motor simulation of that action in the observer's Mirror Neuron System (MNS). Over the past few years, it has become increasingly clear that during action observation the patterns and strengths of responses in the MNS are modulated by multiple factors. The first aim of this paper is therefore to provide the most comprehensive survey to date of these factors. To that end, 22 distinct factors are described, broken down into the following sets: six involving the action; two involving the actor; nine involving the observer; four involving the relationship between actor and observer; and one involving the context. The second aim is to consider the implications of these findings for four prominent theoretical models of the MNS: the Direct Matching Model; the Predictive Coding Model; the Value-Driven Model; and the Associative Model. These assessments suggest that although each model is supported by a wide range of findings, each one is also challenged by other findings and relatively unaffected by still others. Hence, there is now a pressing need for a richer, more inclusive model that is better able to account for all of the modulatory factors that have been identified so far.
Affiliation(s)
- David Kemmerer
- Department of Psychological Sciences, Department of Speech, Language, and Hearing Sciences, Lyles-Porter Hall, Purdue University, 715 Clinic Drive, United States.
5
Cross ES, Ramsey R. Mind Meets Machine: Towards a Cognitive Science of Human-Machine Interactions. Trends Cogn Sci 2020; 25:200-212. [PMID: 33384213] [DOI: 10.1016/j.tics.2020.11.009] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Received: 06/04/2020] [Revised: 11/26/2020] [Accepted: 11/28/2020] [Indexed: 12/31/2022]
Abstract
As robots advance from the pages and screens of science fiction into our homes, hospitals, and schools, they are poised to take on increasingly social roles. Consequently, the need to understand the mechanisms supporting human-machine interactions is becoming increasingly pressing. We introduce a framework for studying the cognitive and brain mechanisms that support human-machine interactions, leveraging advances made in cognitive neuroscience to link different levels of description with relevant theory and methods. We highlight unique features that make this endeavour particularly challenging (and rewarding) for brain and behavioural scientists. Overall, the framework offers a way to study the cognitive science of human-machine interactions that respects the diversity of social machines, individuals' expectations and experiences, and the structure and function of multiple cognitive and brain systems.
Affiliation(s)
- Emily S Cross
- Department of Cognitive Science, Macquarie University, Sydney, Australia; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK.
- Richard Ramsey
- Department of Psychology, Macquarie University, Sydney, Australia
6
Kupferberg A, Iacoboni M, Flanagin V, Huber M, Kasparbauer A, Baumgartner T, Hasler G, Schmidt F, Borst C, Glasauer S. Fronto-parietal coding of goal-directed actions performed by artificial agents. Hum Brain Mapp 2017; 39:1145-1162. [PMID: 29205671] [DOI: 10.1002/hbm.23905] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Received: 04/07/2017] [Revised: 11/17/2017] [Accepted: 11/22/2017] [Indexed: 11/11/2022] [Open Access]
Abstract
With advances in technology, artificial agents such as humanoid robots will soon become part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas, and the extent to which these areas are activated when observing artificial agents indicates how natural and easy interaction with them is. Previous studies indicated that fronto-parietal activity does not depend on whether the agent is human or artificial. However, it was unknown whether this activity is modulated by the action goal when observing grasping (a self-related action) and pointing (an other-related action) performed by an artificial agent. We therefore designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool versus food items were pointed to or grasped by either agent, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network when observing not only a human but also a robotic agent, for both self-related and social actions. Debriefing after the experiment showed that the actions of human-like artificial agents can be perceived as goal-directed. Humans should therefore be able to interact intuitively with service robots in various domains such as education, healthcare, public service, and entertainment.
Affiliation(s)
- Aleksandra Kupferberg
- Division of Molecular Psychiatry, Translational Research Center, University Hospital of Psychiatry University of Bern, Bern, Switzerland
- Marco Iacoboni
- David Geffen School of Medicine at UCLA, Ahmanson-Lovelace Brain Mapping Center, Semel Institute for Neuroscience and Human Behavior, Brain Research Institute, Los Angeles, California
- Virginia Flanagin
- German Center for Vertigo and Balance Disorders DSGZ, Ludwig-Maximilian University Munich, München, Germany
- Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
- Markus Huber
- Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
- Thomas Baumgartner
- Department of Social Psychology and Social Neuroscience, University of Bern, Bern, Switzerland
- Gregor Hasler
- Division of Molecular Psychiatry, Translational Research Center, University Hospital of Psychiatry University of Bern, Bern, Switzerland
- Florian Schmidt
- Department of Robotics, DLR, Oberpfaffenhofen, Bavaria, Germany
- Christoph Borst
- Department of Robotics, DLR, Oberpfaffenhofen, Bavaria, Germany
- Stefan Glasauer
- German Center for Vertigo and Balance Disorders DSGZ, Ludwig-Maximilian University Munich, München, Germany
- Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany