1. Zhang N, Zhu E. The analysis of international communication value assessment of Chinese mythology themed animated films in belt and road under BPNN algorithm. Sci Rep 2025; 15:16055. PMID: 40341587; PMCID: PMC12062324; DOI: 10.1038/s41598-025-01159-z.
Abstract
Chinese mythology-themed animated films are critical carriers of cultural communication. With the promotion of the "Belt and Road" initiative, how to scientifically assess their international communication value has become a research hotspot. The innovation of this work lies in integrating the Backpropagation Neural Network (BPNN) algorithm with the Bidirectional Long Short-Term Memory (BiLSTM) algorithm, yielding a cultural feature recognition model based on BPNN-BiLSTM fusion. The approach effectively handles the complex nonlinear features in film content and, through the BiLSTM component, captures long-term dependencies and semantic information within text sequences, improving the recognition accuracy of cultural features in Chinese mythological animated films. Experimental results show that the model achieves an accuracy of 94.39% on the test set, with the loss value maintained at around 0.60, demonstrating high performance and accuracy. Compared with traditional evaluation methods, the proposed fusion algorithm improves evaluation efficiency and provides a new technical path for accurately identifying cultural features. The approach thus has theoretical value and practical significance, and can promote the international dissemination of Chinese mythological animated films in the context of the "Belt and Road" initiative.
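For orientation only: the abstract does not disclose the model's internals, but a BPNN-BiLSTM fusion of the kind described is conventionally a BiLSTM text encoder whose final forward and backward states feed a small backpropagation-trained feedforward (BPNN) classifier head. A minimal NumPy sketch of one forward pass, with every dimension, parameter name, and class count hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell update. W: (4H, D), U: (4H, H), b: (4H,).
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])        # input / forget gates
    o, g = sigmoid(z[2*H:3*H]), np.tanh(z[3*H:])    # output gate / candidate
    c = f * c + i * g
    return o * np.tanh(c), c

def bilstm_bpnn_forward(seq, p):
    """Encode a token-embedding sequence with a BiLSTM, classify with an MLP head."""
    H = p["Wf"].shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:                      # forward direction
        h, c = lstm_step(x, h, c, p["Wf"], p["Uf"], p["bf"])
    hf = h
    h, c = np.zeros(H), np.zeros(H)
    for x in reversed(seq):            # backward direction
        h, c = lstm_step(x, h, c, p["Wb"], p["Ub"], p["bb"])
    feat = np.concatenate([hf, h])     # BiLSTM summary of the sequence
    hid = np.tanh(p["W1"] @ feat + p["b1"])   # BPNN (feedforward) hidden layer
    logits = p["W2"] @ hid + p["b2"]
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax class probabilities

def init_params(D, H, H2, C, rng):
    s = lambda *shape: rng.normal(0, 0.1, shape)
    return {"Wf": s(4*H, D), "Uf": s(4*H, H), "bf": np.zeros(4*H),
            "Wb": s(4*H, D), "Ub": s(4*H, H), "bb": np.zeros(4*H),
            "W1": s(H2, 2*H), "b1": np.zeros(H2),
            "W2": s(C, H2), "b2": np.zeros(C)}

rng = np.random.default_rng(0)
params = init_params(D=8, H=16, H2=12, C=3, rng=rng)
seq = [rng.normal(size=8) for _ in range(5)]   # 5 dummy token embeddings
probs = bilstm_bpnn_forward(seq, params)
```

Training such a model would fit all parameter matrices by backpropagation through both the MLP head and the two LSTM directions; the sketch only shows how the two components compose.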
Affiliation(s)
- Nan Zhang: School of Film, Jilin Animation Institute, Changchun 130013, China; School of Theater, Film and Television, Communication University of China, Beijing 100024, China
- Ellen Zhu: Academy of Arts and Design, Tsinghua University, Beijing 100084, China
2. Katirai A. Autism and emotion recognition technologies in the workplace. Autism 2025; 29:554-565. PMID: 39282995; DOI: 10.1177/13623613241279704.
Abstract
The use of emotion recognition technologies in the workplace is expanding. These technologies claim to provide insights into internal emotional states based on external cues like facial expressions. Despite interconnections between autism and the development of emotion recognition technologies reported in prior research, little attention has been paid to the particular issues that arise for autistic individuals when emotion recognition technologies are implemented in consequential settings like the workplace. This article examines recent literature on autism and on emotion recognition technologies to argue that the risks of using emotion recognition technologies in the workplace are heightened for autistic people. Following a brief overview of emotion recognition technologies, this argument is made by focusing on the issues that arise through the development and deployment of the technologies. Issues related to development include fundamental problems with the science behind the technologies, the underrepresentation of autistic individuals in data sets and the problems with increasing this representation, and the annotation of the training data. Issues related to implementation include the invasive nature of emotion recognition technologies, the sensitivity of the data used, and the imposition of neurotypical norms on autistic workers through their use. The article closes with a call for future research on the implications of these emergent technologies for autistic individuals.

Lay abstract: Technologies using artificial intelligence to recognize people's emotional states are increasingly being developed under the name of emotion recognition technologies. Emotion recognition technologies claim to identify people's emotional states based on data, like facial expressions. This is despite research providing counterevidence that emotion recognition technologies are founded on bad science and that it is not possible to correctly identify people's emotions in this way. The use of emotion recognition technologies is widespread, and they can be harmful when they are used in the workplace, especially for autistic workers. Although previous research has shown that the origins of emotion recognition technologies relied on autistic people, there has been little research on the impact of emotion recognition technologies on autistic people when they are used in the workplace. Through a review of recent academic studies, this article looks at the development and implementation processes of emotion recognition technologies to show how autistic people in particular may be disadvantaged or harmed by the development and use of the technologies. The article closes with a call for more research on autistic people's perception of the technologies and their impact, with involvement from diverse participants.
3. Kozak J, Fel S. How sociodemographic factors relate to trust in artificial intelligence among students in Poland and the United Kingdom. Sci Rep 2024; 14:28776. PMID: 39567593; PMCID: PMC11579466; DOI: 10.1038/s41598-024-80305-5.
Abstract
The article aims to determine the sociodemographic factors associated with the level of trust in artificial intelligence (AI), based on cross-sectional research conducted in late 2023 and early 2024 on a sample of 2098 students in Poland (1088) and the United Kingdom (1010). As AI progressively penetrates people's everyday lives, it is important to identify the sociodemographic predictors of trust in this dynamically developing technology. The theoretical framework for the article is the extended Unified Theory of Acceptance and Use of Technology (UTAUT), which highlights the significance of sociodemographic variables as predictors of trust in AI. We performed a multivariate ANOVA and regression analysis, comparing trust in AI between students from Poland and the UK to identify the significant predictors of trust in this technology. The significant predictors were nationality, gender, length of study, place of study, religious practices, and religious development. There is a need for further research into the sociodemographic factors of trust in AI and for expanding the UTAUT to include new variables.
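As an illustration of the kind of analysis described (not the authors' code or data), a dummy-coded regression of trust on sociodemographic predictors can be sketched with NumPy alone; the sample size, coefficients, and variable names below are all simulated assumptions:

```python
import numpy as np

# Simulated survey: a trust-in-AI score modeled from two dummy-coded
# sociodemographic predictors, nationality (0 = Poland, 1 = UK) and gender.
rng = np.random.default_rng(42)
n = 500
nationality = rng.integers(0, 2, n)
gender = rng.integers(0, 2, n)
trust = 3.0 + 0.4 * nationality - 0.2 * gender + rng.normal(0, 0.5, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), nationality, gender])
beta, residuals, rank, _ = np.linalg.lstsq(X, trust, rcond=None)
print(dict(zip(["intercept", "nationality", "gender"], beta.round(2))))
```

A multivariate ANOVA over the same design amounts to comparing this fitted model against nested models that drop each predictor in turn.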
Affiliation(s)
- Jarosław Kozak: Institute of Sociological Sciences, Faculty of Social Sciences, The John Paul II Catholic University of Lublin, al. Raclawickie 14, 20-950 Lublin, Poland
- Stanisław Fel: Institute of Sociological Sciences, Faculty of Social Sciences, The John Paul II Catholic University of Lublin, al. Raclawickie 14, 20-950 Lublin, Poland
4. Gur T, Hameiri B, Maaravi Y. Political ideology shapes support for the use of AI in policy-making. Front Artif Intell 2024; 7:1447171. PMID: 39540200; PMCID: PMC11557559; DOI: 10.3389/frai.2024.1447171.
Abstract
In a world grappling with technological advancements, the concept of Artificial Intelligence (AI) in governance is becoming increasingly realistic. While some may find this possibility incredibly alluring, others may see it as dystopian. Society must account for these varied opinions when implementing new technologies or when regulating and limiting them. This study (N = 703) explored Leftists' (liberals) and Rightists' (conservatives) support for using AI in governance decision-making amid an unprecedented political crisis that swept through Israel shortly after the government proclaimed its intention to initiate reform. Results indicate that Leftists are more favorable toward AI in governance. While legitimacy is tied to support for using AI in governance among both groups, Rightists' acceptance is also tied to perceived norms, whereas Leftists' approval is linked to perceived utility, political efficacy, and warmth. Understanding these ideological differences is crucial, both theoretically and for the practical formulation of policy regarding AI's integration into governance.
Affiliation(s)
- Tamar Gur: Adelson School of Entrepreneurship, Reichman University, Herzliya, Israel
- Boaz Hameiri: The School of Social and Policy Studies, Tel Aviv University, Tel Aviv, Israel
- Yossi Maaravi: Adelson School of Entrepreneurship, Reichman University, Herzliya, Israel
5. Barnes AJ, Zhang Y, Valenzuela A. AI and culture: Culturally dependent responses to AI systems. Curr Opin Psychol 2024; 58:101838. PMID: 39002473; DOI: 10.1016/j.copsyc.2024.101838.
Abstract
This article synthesizes recent research on how cultural identity can determine responses to artificial intelligence. National differences in AI adoption imply that culturally driven psychological differences may offer a more nuanced understanding and inform interventions. Our review suggests that cultural identity shapes how individuals include AI in constructing the self in relation to others, and determines the effect of AI on key decision-making processes. Individualists may be more prone to view AI as external to the self and to interpret AI features as infringing upon their uniqueness, autonomy, and privacy. In contrast, collectivists may be more prone to view AI as an extension of the self and to interpret AI features as facilitating conformity to consensus, responsiveness to their environment, and privacy protection.
Affiliation(s)
- Aaron J Barnes: University of Louisville, 110 W Brandeis Ave., Louisville, KY 40208, USA
- Ana Valenzuela: ESADE-Ramon Llul, Barcelona, Spain; Baruch College, City University of New York, USA
6. Bakir V, Laffer A, McStay A, Miranda D, Urquhart L. On manipulation by emotional AI: UK adults' views and governance implications. Front Sociol 2024; 9:1339834. PMID: 38912311; PMCID: PMC11190365; DOI: 10.3389/fsoc.2024.1339834.
Abstract
With growing commercial, regulatory and scholarly interest in the use of Artificial Intelligence (AI) to profile and interact with human emotion ("emotional AI"), attention is turning to its capacity for manipulating people, that is, for acting on the factors that shape a person's decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known about people's views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public's expectations. Addressing this, we ascertain UK adults' perspectives on the potential of emotional AI technologies for manipulating people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child-oriented "emotoys" (where the toy responds to the child's facial and verbal expressions). In both settings, participants expressed concerns that emotion profiling covertly exploits users' cognitive or affective weaknesses and vulnerabilities; in the social media setting, participants additionally expressed concern that emotion profiling damages people's capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase) conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling the need for civic protections and practical means of ensuring trust in emerging technologies, the research leads us to propose a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.
Affiliation(s)
- Vian Bakir: School of History, Law and Social Sciences, Bangor University, Bangor, United Kingdom
- Alexander Laffer: School of Media and Film, University of Winchester, Winchester, United Kingdom
- Andrew McStay: School of History, Law and Social Sciences, Bangor University, Bangor, United Kingdom
- Diana Miranda: Faculty of Social Sciences, University of Stirling, Scotland, United Kingdom
- Lachlan Urquhart: Edinburgh Law School, University of Edinburgh, Scotland, United Kingdom
7. Guerra-Tamez CR, Kraul Flores K, Serna-Mendiburu GM, Chavelas Robles D, Ibarra Cortés J. Decoding Gen Z: AI's influence on brand trust and purchasing behavior. Front Artif Intell 2024; 7:1323512. PMID: 38500672; PMCID: PMC10944976; DOI: 10.3389/frai.2024.1323512.
Abstract
This study focuses on the role of AI in shaping Generation Z's consumer behaviors across fashion, technology, beauty, and education sectors. Analyzing responses from 224 participants, our findings reveal that AI exposure, attitude toward AI, and AI accuracy perception significantly enhance brand trust, which in turn positively impacts purchasing decisions. Notably, flow experience acts as a mediator between brand trust and purchasing decisions. These insights underscore the critical role of AI in developing brand trust and influencing purchasing choices among Generation Z, offering valuable implications for marketers in an increasingly digital landscape.
Affiliation(s)
- Cristobal Rodolfo Guerra-Tamez: Art and Design Department, Centro Roberto Garza Sada de Arte, Arquitectura y Diseño, Universidad de Monterrey, Nuevo León, Mexico
- Keila Kraul Flores: Department of Marketing and Analysis, Instituto Tecnológico y de Estudios Superiores de Monterrey, Monterrey, Nuevo León, Mexico
- Gabriela Mariah Serna-Mendiburu: Art and Design Department, Centro Roberto Garza Sada de Arte, Arquitectura y Diseño, Universidad de Monterrey, Nuevo León, Mexico
- David Chavelas Robles: Art and Design Department, Centro Roberto Garza Sada de Arte, Arquitectura y Diseño, Universidad de Monterrey, Nuevo León, Mexico
- Jorge Ibarra Cortés: Department of Marketing and Analysis, Instituto Tecnológico y de Estudios Superiores de Monterrey, Monterrey, Nuevo León, Mexico
8. Li Y. Relationship between perceived threat of artificial intelligence and turnover intention in luxury hotels. Heliyon 2023; 9:e18520. PMID: 37529336; PMCID: PMC10388198; DOI: 10.1016/j.heliyon.2023.e18520.
Abstract
When artificial intelligence technology erodes employees' professional knowledge, they tend to feel highly anxious and turnover intention emerges. This study aimed to test the impact of the perceived threat of artificial intelligence on turnover intention through perceived organizational support and the perceived value of artificial intelligence. The method and procedure were as follows: construct a theoretical framework and propose hypotheses; collect questionnaires through voluntary sampling; use a two-step approach to test the model. The findings are threefold. Theoretically, this study proposes a conceptual model of artificial intelligence perception, showing that the combination of technology threat avoidance, organizational support, and perceived value theories applies to this research setting. Methodologically, the relationships between the perceived threat of artificial intelligence, perceived organizational support, the perceived value of artificial intelligence, and turnover intention were studied together for the first time, and the perceived value of artificial intelligence was identified as a new significant mediator between perceived organizational support and turnover intention. Managerially, when facing the threats of artificial intelligence to employees, hotel managers should emphasize organizational support, especially in finance, career, and adjustment. The study has important implications for luxury hotel management: first, hotel employees' perceptions of artificial intelligence are dual; second, luxury hotel managers should treat perceived organizational support as a key variable.
9. Ahmad SF, Han H, Alam MM, Rehmat MK, Irshad M, Arraño-Muñoz M, Ariza-Montes A. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit Soc Sci Commun 2023; 10:311. PMID: 37325188; PMCID: PMC10251321; DOI: 10.1057/s41599-023-01787-8.
Abstract
This study examines the impact of artificial intelligence (AI) on loss in decision-making, laziness, and privacy concerns among university students in Pakistan and China. Like other sectors, education is adopting AI technologies to address modern-day challenges; AI investment is projected to grow to USD 253.82 million from 2021 to 2025. Worryingly, however, researchers and institutions across the globe praise the positive role of AI while ignoring its concerns. This study is based on a quantitative methodology, using PLS-Smart for the data analysis. Primary data were collected from 285 students from different universities in Pakistan and China, with a purposive sampling technique used to draw the sample from the population. The findings show that AI significantly impacts the loss of human decision-making and makes humans lazy, and that it also affects security and privacy: 68.9% of laziness in humans, 68.6% of personal privacy and security issues, and 27.7% of the loss of decision-making are attributable to the impact of artificial intelligence in Pakistani and Chinese society. Human laziness is thus the most affected area. The study argues that significant preventive measures are necessary before implementing AI technology in education; accepting AI without addressing the major human concerns would be like summoning the devils. Concentrating on the justified design, deployment, and use of AI in education is recommended to address the issue.
10. Vuong QH, La VP, Nguyen MH, Jin R, La MK, Le TT. How AI's self-prolongation influences people's perceptions of its autonomous mind: The case of U.S. residents. Behav Sci (Basel) 2023; 13:470. PMID: 37366721; DOI: 10.3390/bs13060470.
Abstract
The expanding integration of artificial intelligence (AI) into various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles in trying to better understand our own minds, and now we must also find ways to make sense of the minds of AI. The question of AI's capacity for independent thinking deserves special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as the desire for survival, to make assessments. Employing information-processing-based Bayesian Mindsponge Framework (BMF) analytics on a dataset of 266 residents of the United States, we found that the more people believe an AI agent seeks continued functioning, the more they believe in that AI agent's capability of having a mind of its own. We also found that this association becomes stronger the more familiar a person is with personally interacting with AI, suggesting a directional pattern of value reinforcement in perceptions of AI. As the information processing of AI becomes even more sophisticated, it will be much harder to set clear boundaries about what it means to have an autonomous mind.
Affiliation(s)
- Quan-Hoang Vuong: Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam
- Viet-Phuong La: Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam; A.I. for Social Data Lab (AISDL), Vuong & Associates, Hanoi 100000, Vietnam
- Minh-Hoang Nguyen: Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam
- Ruining Jin: Civil, Commercial and Economic Law School, China University of Political Science and Law, Beijing 100088, China
- Minh-Khanh La: School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi 100000, Vietnam
- Tam-Tri Le: Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam
11. Melville NP, Robert L, Xiao X. Putting humans back in the loop: An affordance conceptualization of the 4th industrial revolution. Inf Syst J 2022. DOI: 10.1111/isj.12422.
Affiliation(s)
- Nigel P. Melville: Stephen M. Ross School of Business, University of Michigan, Ann Arbor, Michigan, USA
- Lionel Robert: School of Information, University of Michigan, Ann Arbor, Michigan, USA
- Xiao Xiao: Department of Digitalization, Copenhagen Business School, Frederiksberg, Denmark
12. Ho MT. Disillusioned with artificial intelligence: a book review. AI Soc 2022. DOI: 10.1007/s00146-022-01588-8.
13. Peter M, Ho MT. Why we need to be weary of emotional AI. AI Soc 2022. DOI: 10.1007/s00146-022-01576-y.
14. Thinking about the mind-technology problem. AI Soc 2022. DOI: 10.1007/s00146-022-01485-0.