1. Wu X, Qin Y. On the relationship between music students' negative emotions, artificial intelligence readiness, and their engagement. Acta Psychol (Amst) 2025;253:104760. PMID: 39889664. DOI: 10.1016/j.actpsy.2025.104760.
Abstract
This study explored the relationships among negative emotions, engagement, and artificial intelligence (AI) readiness in 323 music students. Data were collected with the Emotion Beliefs Questionnaire (EBQ), the Students' Engagement Questionnaire, and the Artificial Intelligence Readiness Scale and analyzed in SPSS (version 27) and AMOS (version 24) using reliability analysis, correlation, multiple linear regression, and structural equation modeling (SEM). Findings indicate that negative emotions and AI readiness are both related to student engagement. Music's emotional impact can influence how students manage their feelings and engage with AI technologies; for instance, students who are better prepared for AI integration may leverage these tools to regulate their emotions more effectively, which in turn could enhance music performance, though the specific mechanisms connecting these factors need further clarification. Consequently, high AI readiness can foster greater engagement with digital learning platforms, potentially benefiting emotional regulation and academic achievement.
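As an illustration of the statistical workflow named in the abstract (correlation and multiple linear regression; the SEM step is omitted), here is a minimal Python sketch; the file name and column names are hypothetical stand-ins for the three scale scores.

```python
# Hypothetical sketch of the correlation / multiple linear regression steps;
# file and column names are illustrative, not from the study.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("music_student_scales.csv")  # hypothetical scale scores per student

# Pearson correlations among the three constructs
print(df[["negative_emotions", "ai_readiness", "engagement"]].corr())

# Multiple linear regression: engagement predicted by the other two constructs
X = sm.add_constant(df[["negative_emotions", "ai_readiness"]])
print(sm.OLS(df["engagement"], X).fit().summary())
```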
Affiliation(s)
- Xiao Wu
- Namseoul University, Cheonan 31023, Republic of Korea; College of Humanities and Art, Hunan Institute of Traffic Engineering, Hengyang, Hunan 421000, China
- Yu Qin
- College of Music and Dance, Hunan City University, Yiyang, Hunan 413000, China.
2. Nie J, Ahmadi Dehrashid H. Evaluation of student failure in higher education by an innovative strategy of fuzzy system combined optimization algorithms and AI. Heliyon 2024;10:e29182. PMID: 38867939. PMCID: PMC11168195. DOI: 10.1016/j.heliyon.2024.e29182.
Abstract
This research applies two metaheuristic algorithms to enhance the prediction of student performance: the Harris Hawks Optimizer (HHO) and the Earthworm Optimization Algorithm (EWA). Using these methods, a series of proposed adaptive neuro-fuzzy inference system (ANFIS) models was trained. The best-fit model was selected by finding a strong mapping between the input and output layers on the training and testing datasets, drawing on a combination of expert knowledge, experimentation, and validation techniques. The study's primary outcome is a division of the participants into two performance-based groups (failed and non-failed). The experimental data used to build the models comprised fourteen process variables: relocation, gender, age at enrollment, debtor, nationality, educational special needs, current tuition fees, scholarship holder, unemployment, inflation, GDP, application order, day/evening attendance, and admission grade. During model evaluation, a scoring system was created in addition to the mean absolute error (MAE), mean squared error (MSE), and area under the curve (AUC) used to assess the efficacy of the approaches. The HHO-ANFIS proved superior to the EWA-ANFIS, assessing student failure with the highest accuracy: AUC = 0.8004 and 0.7886, MSE = 0.62689 and 0.65598, and MAE = 0.64105 and 0.65746. The same indicators showed that the EWA-ANFIS is less accurate, with MSE values of 0.71543 and 0.71776, MAE values of 0.70819 and 0.71518, and AUC values of 0.7565 and 0.758. The optimization algorithms thus show a strong ability to increase the accuracy and performance of the conventional ANFIS model in predicting students' performance, which can inform the management of the educational system and improve the quality of academic programs.
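To make the reported evaluation concrete, the following hedged sketch computes the same three metrics (MAE, MSE, AUC) for a generic binary failure predictor on synthetic data; the random forest merely stands in for the HHO/EWA-tuned ANFIS models, whose training is not reproduced here.

```python
# Sketch of the MAE/MSE/AUC evaluation on a synthetic failed/non-failed task;
# the classifier is a stand-in for the HHO/EWA-tuned ANFIS models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_absolute_error, mean_squared_error, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 14))                        # 14 process variables, as in the study
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # synthetic failed / non-failed labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
proba = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print("MAE:", mean_absolute_error(y_te, proba))
print("MSE:", mean_squared_error(y_te, proba))
print("AUC:", roc_auc_score(y_te, proba))
```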
Affiliation(s)
- Junting Nie
- Xinyang Vocational and Technical College, Xinyang 464000, Henan Province, China
3. Ahmad A, Li Z, Iqbal S, Aurangzeb M, Tariq I, Flah A, Blazek V, Prokop L. A comprehensive bibliometric survey of micro-expression recognition system based on deep learning. Heliyon 2024;10:e27392. PMID: 38495163. PMCID: PMC10943397. DOI: 10.1016/j.heliyon.2024.e27392.
Abstract
Micro-expressions (MEs) are rapidly occurring expressions that reveal the true emotions a person is trying to hide, cover, or suppress. Because they expose a person's actual feelings, they have a broad spectrum of applications in public safety and clinical diagnosis. This study provides a comprehensive review of the field of ME recognition. Bibliometric and network analysis techniques were used to compile the available literature: a total of 735 publications from the Web of Science (WOS) and Scopus databases, published between December 2012 and December 2022, were retrieved using all relevant keywords. The first round of data screening produced basic information, which was further extracted for citation, coupling, co-authorship, co-occurrence, bibliographic, and co-citation analysis. Additionally, a thematic and descriptive analysis was executed to investigate the content and research techniques of prior work. Year-wise publication counts show that output between 2012 and 2017 was relatively low, but by 2021 a nearly 24-fold increase had brought it to 154 publications. The three most productive journals and conferences were IEEE Transactions on Affective Computing (n = 20 publications), followed by Neurocomputing (n = 17) and Multimedia Tools and Applications (n = 15). Zhao G was the most prolific author with 48 publications, and the most influential country was China (620 publications). Citation analysis showed that the top authors each acquired between 100 and 1225 citations, while analysis by organization indicated that the University of Oulu had the most published papers (n = 51). Deep learning, facial expression recognition, and emotion recognition were among the most frequently used terms. ME research was found to be classified primarily within the discipline of engineering, with the largest contributions from China and Malaysia.
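For readers unfamiliar with the mechanics of co-occurrence analysis, the sketch below builds a small keyword co-occurrence network with networkx; the record list is synthetic, and the ranking is only analogous to the keyword maps such surveys typically report.

```python
# Toy keyword co-occurrence network, illustrating one bibliometric technique
# used in the survey; the records are synthetic examples.
from collections import Counter
from itertools import combinations
import networkx as nx

records = [  # hypothetical author-keyword lists from bibliographic records
    ["deep learning", "micro-expression recognition", "emotion recognition"],
    ["facial expression recognition", "deep learning"],
    ["micro-expression recognition", "facial expression recognition", "deep learning"],
]

pair_counts = Counter()
for keywords in records:
    pair_counts.update(combinations(sorted(set(keywords)), 2))

G = nx.Graph()
for (a, b), w in pair_counts.items():
    G.add_edge(a, b, weight=w)

# Rank terms by weighted degree, i.e., total co-occurrence strength
for term, strength in sorted(G.degree(weight="weight"), key=lambda x: -x[1]):
    print(term, strength)
```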
Affiliation(s)
- Adnan Ahmad
- Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
- Zhao Li
- Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
- Sheeraz Iqbal
- Department of Electrical Engineering, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, AJK, Pakistan
- Muhammad Aurangzeb
- School of Electrical Engineering, Southeast University, Nanjing, 210096, China
- Irfan Tariq
- Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
- Ayman Flah
- College of Engineering, University of Business and Technology (UBT), Jeddah, 21448, Saudi Arabia
- MEU Research Unit, Middle East University, Amman, Jordan
- The Private Higher School of Applied Sciences and Technology of Gabes, University of Gabes, Gabes, Tunisia
- National Engineering School of Gabes, University of Gabes, Gabes, 6029, Tunisia
- Vojtech Blazek
- ENET Centre, VSB—Technical University of Ostrava, Ostrava, Czech Republic
- Lukas Prokop
- ENET Centre, VSB—Technical University of Ostrava, Ostrava, Czech Republic
4. Khan D, Alonazi M, Abdelhaq M, Al Mudawi N, Algarni A, Jalal A, Liu H. Robust human locomotion and localization activity recognition over multisensory. Front Physiol 2024;15:1344887. PMID: 38449788. PMCID: PMC10915014. DOI: 10.3389/fphys.2024.1344887.
Abstract
Human activity recognition (HAR) plays a pivotal role in various domains, including healthcare, sports, robotics, and security. With the growing popularity of wearable devices, particularly inertial measurement units (IMUs) and ambient sensors, researchers and engineers have sought to take advantage of these advances to accurately and efficiently detect and classify human activities. This research paper presents an advanced methodology for human activity and localization recognition, utilizing smartphone IMU, ambient, GPS, and audio sensor data from two public benchmark datasets: the Opportunity dataset and the Extrasensory dataset. The Opportunity dataset was collected from 12 subjects participating in a range of daily activities, and it captures data from various body-worn and object-associated sensors. The Extrasensory dataset features data from 60 participants, including thousands of data samples from smartphone and smartwatch sensors, labeled with a wide array of human activities. Our study incorporates novel feature extraction techniques for signal, GPS, and audio sensor data. Specifically, GPS, audio, and IMU sensors are utilized for localization, while IMU and ambient sensors are employed for locomotion activity recognition. To achieve accurate activity classification, state-of-the-art deep learning techniques have been explored: convolutional neural networks (CNNs) are applied for indoor/outdoor activities, while long short-term memory (LSTM) networks are utilized for locomotion activity recognition. The proposed system has been evaluated using the k-fold cross-validation method, achieving accuracy rates of 97% and 89% for locomotion activity over the Opportunity and Extrasensory datasets, respectively, and 96% for indoor/outdoor activity over the Extrasensory dataset. These results highlight the efficiency of our methodology in accurately detecting various human activities, showing its potential for real-world applications. Moreover, the paper introduces a hybrid system that combines machine learning and deep learning features, enhancing activity recognition performance by leveraging the strengths of both approaches.
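The evaluation protocol (k-fold cross-validation over windowed sensor features) can be sketched as follows; the data are synthetic, and a small multilayer perceptron stands in for the paper's CNN and LSTM models.

```python
# Minimal k-fold evaluation sketch; synthetic features and a simple MLP
# stand in for the CNN/LSTM activity classifiers described above.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 64))      # placeholder windowed sensor features
y = rng.integers(0, 4, size=600)    # placeholder locomotion labels

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print("mean 5-fold accuracy:", round(float(np.mean(scores)), 3))
```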
Affiliation(s)
- Danyal Khan
- Department of Computer Science, Air University, Islamabad, Pakistan
- Mohammed Alonazi
- Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Maha Abdelhaq
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Naif Al Mudawi
- Department of Computer Science, College of Computer Science and Information System, Najran University, Najran, Saudi Arabia
- Asaad Algarni
- Department of Computer Sciences, Faculty of Computing and Information Technology, Northern Border University, Rafha, Saudi Arabia
- Ahmad Jalal
- Department of Computer Science, Air University, Islamabad, Pakistan
- Hui Liu
- Cognitive Systems Lab, University of Bremen, Bremen, Germany
5. Khan D, Al Mudawi N, Abdelhaq M, Alazeb A, Alotaibi SS, Algarni A, Jalal A. A wearable inertial sensor approach for locomotion and localization recognition on physical activity. Sensors (Basel) 2024;24:735. PMID: 38339452. PMCID: PMC10857626. DOI: 10.3390/s24030735.
Abstract
Advancements in sensing technology have expanded the capabilities of both wearable devices and smartphones, which are now commonly equipped with inertial sensors such as accelerometers and gyroscopes. Initially used to advance device features, these sensors now serve a variety of applications. Human activity recognition (HAR) is an active research area with uses in health monitoring, sports, fitness, and medical care. In this research, we designed an advanced system that recognizes different human locomotion and localization activities. Because the raw sensor data contain noise, the first step is noise removal, which employs a Chebyshev type 1 filter; the signal is then segmented using Hamming windows. Features were then extracted for the different sensors, and the recursive feature elimination method was used to select the best features for the system. SMOTE data augmentation was applied to address the imbalanced nature of the Extrasensory dataset. Finally, the augmented and balanced data were passed to a long short-term memory (LSTM) deep learning classifier. The datasets used in this research were Real-World HAR, Real-Life HAR, and Extrasensory. The presented system achieved accuracies of 89% on Real-Life HAR, 85% on Real-World HAR, and 95% on the Extrasensory dataset, outperforming the available state-of-the-art methods.
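The preprocessing chain described here (Chebyshev type 1 filtering, Hamming-windowed segmentation, SMOTE rebalancing) maps naturally onto SciPy and imbalanced-learn; the sketch below shows one plausible realization, with the filter parameters, sampling rate, and data all assumed rather than taken from the paper.

```python
# Hedged sketch of the preprocessing chain: Chebyshev type 1 low-pass filtering,
# Hamming-windowed segmentation, and SMOTE rebalancing. Filter parameters,
# sampling rate, and data are assumptions, not taken from the paper.
import numpy as np
from scipy.signal import cheby1, filtfilt
from scipy.signal.windows import hamming
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

fs = 50.0                                         # assumed sampling rate (Hz)
sig = np.random.default_rng(1).normal(size=2000)  # placeholder raw accelerometer axis

# 1) Noise removal with a Chebyshev type 1 low-pass filter
b, a = cheby1(N=4, rp=0.5, Wn=10.0, btype="low", fs=fs)
clean = filtfilt(b, a, sig)

# 2) Segment into Hamming-windowed frames (2 s windows, 50% overlap)
win, hop = int(2 * fs), int(fs)
frames = np.array([clean[i:i + win] * hamming(win)
                   for i in range(0, len(clean) - win + 1, hop)])

# 3) Toy features per frame, then SMOTE to balance synthetic class labels
feats = np.column_stack([frames.mean(axis=1), frames.std(axis=1)])
labels = (np.arange(len(feats)) % 10 == 0).astype(int)  # deliberately imbalanced
feats_bal, labels_bal = SMOTE(k_neighbors=1, random_state=0).fit_resample(feats, labels)
print(feats.shape, "->", feats_bal.shape)
```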
Affiliation(s)
- Danyal Khan
- Faculty of Computing and AI, Air University, E-9, Islamabad 44000, Pakistan
- Naif Al Mudawi
- Department of Computer Science, College of Computer Science and Information System, Najran University, Najran 55461, Saudi Arabia
- Maha Abdelhaq
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Abdulwahab Alazeb
- Department of Computer Science, College of Computer Science and Information System, Najran University, Najran 55461, Saudi Arabia
- Saud S. Alotaibi
- Information Systems Department, College of Computer and Information Systems, Umm Al-Qura University, Makkah 24382, Saudi Arabia
- Asaad Algarni
- Department of Computer Sciences, Faculty of Computing and Information Technology, Northern Border University, Rafha 91911, Saudi Arabia
- Ahmad Jalal
- Faculty of Computing and AI, Air University, E-9, Islamabad 44000, Pakistan
6. Zhao Y, Kong X, Zheng W, Ahmad S. Emotion generation method in online physical education teaching based on data mining of teacher-student interactions. PeerJ Comput Sci 2024;10:e1814. PMID: 38259880. PMCID: PMC10803077. DOI: 10.7717/peerj-cs.1814.
Abstract
Unlike conventional educational paradigms, online education lacks direct interplay between instructors and learners, particularly in virtual physical education. Existing research seldom examines how emotion is aroused within teacher-student course interaction, and current emotion generation models have limitations that require refinement for specific educational cohorts, disciplines, and instructional contexts. This study proposes an emotion generation model rooted in data mining of teacher-student course interactions to enrich emotional discourse and enhance learning outcomes in online physical education. The model comprises data preprocessing and augmentation techniques, a multimodal dialogue text emotion recognition model, and a topic-expanding emotional dialogue generation model based on joint decoding. The encoder condenses the input sentence into a fixed-length vector; at the final state, the vector produced by the context recurrent neural network is concatenated with the preceding word's vector and used as the decoder's input. A long short-term memory network models emotional fluctuations across multiple rounds of dialogue, fulfilling the task of emotion prediction. Evaluation on the DailyDialog dataset demonstrates the model's superiority over a conventional end-to-end model in terms of loss and perplexity. Achieving an accuracy of 84.4%, the model substantiates that embedding emotional cues within dialogues improves response generation. The proposed model enriches emotional discourse and learning efficacy within online physical education and offers fresh avenues for refining emotion generation models.
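The decoding step described above (the final encoder state reused as a context vector and concatenated with the previous word's embedding) can be sketched in a few lines of PyTorch; all dimensions and the vocabulary size are illustrative assumptions, and the emotion-recognition components are omitted.

```python
# Minimal PyTorch sketch of the decoding step: the context vector is
# concatenated with the previous word's embedding and fed to an LSTM decoder.
# Dimensions and vocabulary are illustrative assumptions.
import torch
import torch.nn as nn

vocab, emb_dim, hid = 1000, 64, 128

embed = nn.Embedding(vocab, emb_dim)
encoder = nn.LSTM(emb_dim, hid, batch_first=True)
decoder = nn.LSTM(emb_dim + hid, hid, batch_first=True)  # input = prev word + context
out_proj = nn.Linear(hid, vocab)

src = torch.randint(0, vocab, (2, 10))        # a batch of two input sentences
_, (h, _) = encoder(embed(src))               # final state = fixed-length context
context = h[-1]                               # (batch, hid)

prev_word = torch.randint(0, vocab, (2, 1))   # previously generated token
dec_in = torch.cat([embed(prev_word), context.unsqueeze(1)], dim=-1)
dec_out, _ = decoder(dec_in)
logits = out_proj(dec_out)                    # next-word distribution
print(logits.shape)                           # torch.Size([2, 1, 1000])
```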
Affiliation(s)
- Wei Zheng
- Langfang 16th Middle School, Langfang, China
7. Zhao Y, Shu X. Speech emotion analysis using convolutional neural network (CNN) and gamma classifier-based error correcting output codes (ECOC). Sci Rep 2023;13:20398. PMID: 37989782. PMCID: PMC10663497. DOI: 10.1038/s41598-023-47118-4.
Abstract
Speech emotion analysis is one of the most basic requirements for the evolution of artificial intelligence (AI) in human-machine interaction. Accurate emotion recognition in speech can be effective in applications such as online support, lie detection systems, and customer feedback analysis, yet existing techniques in this field are not yet sufficiently mature. This paper presents a new method to improve the performance of emotion analysis in speech, comprising the following steps: pre-processing, feature description, feature extraction, and classification. Speech features are initially described using a combination of spectro-temporal modulation (STM) and entropy features. A convolutional neural network (CNN) is then utilized to reduce the dimensionality of these features and extract the features of each signal. Finally, a combination of the gamma classifier (GC) and error-correcting output codes (ECOC) is applied to classify the features and recognize the emotions in speech. The performance of the proposed method was evaluated on two datasets, Berlin and ShEMO. The results show that the proposed method recognizes speech emotions in the Berlin and ShEMO datasets with average accuracies of 93.33% and 85.73%, respectively, at least 6.67% better than the compared methods.
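scikit-learn exposes ECOC directly via OutputCodeClassifier, which the hedged sketch below uses with a logistic-regression base learner standing in for the paper's gamma classifier; the features imitate CNN-reduced speech descriptors but are synthetic.

```python
# Hedged sketch of the ECOC classification stage: OutputCodeClassifier with a
# logistic-regression base learner stands in for the gamma classifier + ECOC
# combination, and the CNN-reduced speech features are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32))    # placeholder CNN-extracted features
y = rng.integers(0, 6, size=400)  # placeholder emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000), code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ecoc.predict(X_te)))
```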
Affiliation(s)
- Yunhao Zhao
- Department of Chinese Language & Literature, The Catholic University of Korea, 43 Jibong-Ro, Gyeonggi-Do, Bucheon-Si, 14662, Republic of Korea.
- Xiaoqing Shu
- Department of Chinese Language & Literature, The Catholic University of Korea, 43 Jibong-Ro, Gyeonggi-Do, Bucheon-Si, 14662, Republic of Korea