1
Mi X, He H, Shen H. A Multi-Agent RL Algorithm for Dynamic Task Offloading in D2D-MEC Network with Energy Harvesting. Sensors (Basel) 2024; 24:2779. PMID: 38732885; PMCID: PMC11086306; DOI: 10.3390/s24092779.
Abstract
Delay-sensitive task offloading in a device-to-device assisted mobile edge computing (D2D-MEC) system with energy harvesting devices is a critical challenge due to the dynamic load level at edge nodes and the variability in harvested energy. In this paper, we propose a joint dynamic task offloading and CPU frequency control scheme for delay-sensitive tasks in a D2D-MEC system, taking into account the intricacies of multi-slot tasks, characterized by diverse processing speeds and data transmission rates. Our methodology involves meticulous modeling of task arrival and service processes using queuing systems, coupled with the strategic utilization of D2D communication to alleviate edge server load and prevent network congestion effectively. Central to our solution is the formulation of average task delay optimization as a challenging nonlinear integer programming problem, requiring intelligent decision making regarding task offloading for each generated task at active mobile devices and CPU frequency adjustments at discrete time slots. To navigate the intricate landscape of the extensive discrete action space, we design an efficient multi-agent DRL learning algorithm named MAOC, which is based on MAPPO, to minimize the average task delay by dynamically determining task-offloading decisions and CPU frequencies. MAOC operates within a centralized training with decentralized execution (CTDE) framework, empowering individual mobile devices to make decisions autonomously based on their unique system states. Experimental results demonstrate its swift convergence and operational efficiency, and it outperforms other baseline algorithms.
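A minimal sketch of the centralized-training / decentralized-execution (CTDE) pattern this abstract describes, with per-device actors and a centralized critic. All dimensions, names, and the linear policies below are illustrative assumptions, not the paper's MAOC implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # mobile devices
STATE_DIM = 4  # per-device observation (e.g. queue length, harvested energy)
N_OFFLOAD = 3  # offload targets: local / D2D peer / edge server
N_FREQ = 4    # discrete CPU-frequency levels

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

class Actor:
    """Decentralized policy: maps a device's own state to a joint
    (offload target, CPU frequency) action."""
    def __init__(self):
        self.w = rng.normal(scale=0.1, size=(STATE_DIM, N_OFFLOAD * N_FREQ))

    def act(self, state):
        probs = softmax(state @ self.w)
        a = rng.choice(N_OFFLOAD * N_FREQ, p=probs)
        return divmod(int(a), N_FREQ)  # -> (offload_target, freq_level)

class CentralCritic:
    """Centralized value function: sees all agents' states, but only during training."""
    def __init__(self):
        self.w = rng.normal(scale=0.1, size=N_AGENTS * STATE_DIM)

    def value(self, joint_state):
        return float(joint_state @ self.w)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

states = rng.random((N_AGENTS, STATE_DIM))
actions = [actor.act(s) for actor, s in zip(actors, states)]  # decentralized execution
baseline = critic.value(states.ravel())                       # centralized training signal
```

At execution time each device only evaluates its own `Actor`; the critic exists purely to stabilize training, which is what makes the scheme deployable on independent mobile devices.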
Affiliation(s)
- Xin Mi
- School of Computer, Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan 528400, China
- Huaiwen He
- School of Computer, Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan 528400, China
- Hong Shen
- Engineering and Technology, Central Queensland University, Brisbane 4000, Australia
2
Niu S, Liu W, Yan S, Liu Q. Message sharing scheme based on edge computing in IoV. Math Biosci Eng 2023; 20:20809-20827. PMID: 38124577; DOI: 10.3934/mbe.2023921.
Abstract
With the rapid development of 5G wireless communication and sensing technology, the Internet of Vehicles (IoV) will establish a widespread network between vehicles and roadside infrastructure. Collected road information is transferred to the cloud server with the assistance of roadside infrastructure, where it is stored and made available to other vehicles as a resource. However, in an open cloud environment, message confidentiality and vehicle identity privacy are severely compromised, and current attribute-based encryption algorithms still burden vehicles with large computational costs. To resolve these issues, we propose a message-sharing scheme for IoV based on edge computing. First, we utilize attribute-based encryption techniques to protect the messages being delivered, and we introduce edge computing, in which the vehicle outsources some encryption and decryption operations to roadside units to reduce its computational load. Second, to guarantee message integrity and the security of vehicle identities, we utilize anonymous identity-based signature technology. At the same time, messages can be verified in batches, which further reduces the time and transmission overhead of verifying large numbers of message signatures. Based on the computational Diffie-Hellman problem, the proposed scheme is demonstrated to be secure under the random oracle model. Finally, the performance analysis results show that our work is more computationally efficient than existing schemes and is better suited to actual vehicular networks.
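The batch-verification idea above can be illustrated with a toy linear "signature" (deliberately not a real identity-based signature scheme, and not cryptographically secure): a random linear combination lets one combined equality check replace n individual checks, and the random weights stop invalid signatures from cancelling each other out.

```python
import random

P = 2_147_483_647   # toy prime modulus -- NOT cryptographically sound
SK = 123_456_789    # toy signing key (verification reuses it here for simplicity)
random.seed(0)

def h(msg):
    """Stand-in hash mapping a message into [1, P-1]."""
    return (hash(msg) % (P - 1)) + 1

def sign(msg):
    """Toy linear 'signature': sig = SK * H(m) mod P."""
    return (SK * h(msg)) % P

def batch_verify(msgs, sigs):
    """One combined check instead of len(msgs) individual checks."""
    ws = [random.randrange(1, P) for _ in msgs]
    lhs = sum(w * s for w, s in zip(ws, sigs)) % P
    rhs = (SK * sum(w * h(m) for w, m in zip(ws, msgs))) % P
    return lhs == rhs
```

With a real identity-based scheme the same random-linear-combination trick is applied to the group-element verification equations; here the required linearity is only mimicked over integers modulo a prime.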
Affiliation(s)
- Shufen Niu
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Wei Liu
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Sen Yan
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Qi Liu
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
3
Wang Z, Na H. Multimedia Technology Based Interactive Translation Learning for Students. ACM Trans Asian Low-Resour Lang Inf Process 2023. DOI: 10.1145/3588569.
Abstract
Multimedia technology has been incorporated into the educational arena to translate traditional educational material into interactive digital form, allowing teachers to design and integrate interactive multimedia learning environments. However, teachers' multimedia proficiency is uneven, and the significance of integrating multimedia into curriculum learning strategies is not entirely apparent. This research introduces multimedia network interpretation teaching using machine learning (MNIT-ML) in multimedia education design to enhance and strengthen the traditional teaching process and to promote modern approaches to transmitting knowledge to students.
The Allocation of Learning Resources (ALR) framework broadens the range of materials used in learning activities, from printed resources to recordings, video content, motion graphics, and other formats. Its purpose is to guide teachers in making sound decisions about how to distribute limited resources such as time, money, and technology in order to improve students' educational achievements.
The ALR framework was chosen because it offers a methodical strategy for making these choices that is grounded in educational research and best practices. Resource allocation is a key factor influencing student outcomes, and efficient allocation can help guarantee that all students have access to the tools they need to succeed.
A number of alternative frameworks could direct studies on the distribution of educational resources. The Technology Acceptance Model (TAM) is a common tool for analyzing the factors that lead to widespread adoption of educational technology, while the Universal Design for Learning (UDL) framework helps educators create lessons and methods that are inclusive of students of all backgrounds and abilities.
The final choice of framework depends on the research questions and the setting in which they are investigated; it is crucial to pick a framework that fits the research issues at hand and provides a systematic approach based on established educational best practices. The digital promoting resources for teaching (DPRT) method creates a realistic environment for students to learn, focusing on enhanced training to build standard language-translation skills. Simulation analysis reports security of 94.6% and performance of 95.9%, along with a privacy evaluation, for an overall reliability ratio of 93.4% for the proposed framework.
4
Li X, Chen S, Wu J, Li J, Wang T, Tang J, Hu T, Wu W. Satellite cloud image segmentation based on lightweight convolutional neural network. PLoS One 2023; 18:e0280408. PMID: 36745635; PMCID: PMC9901801; DOI: 10.1371/journal.pone.0280408.
Abstract
More than 50% of the images captured by optical satellites are covered by clouds, which reduces the available information in the images and seriously affects subsequent applications of satellite imagery. The identification and segmentation of cloud regions has therefore become one of the most important problems in satellite image processing. Due to the complexity and variability of satellite images, especially when the ground is covered with snow, the boundary information of cloud regions is difficult to identify accurately, and fast, accurate segmentation of cloud regions remains a challenge in current research. We propose a lightweight convolutional neural network. First, channel attention is used to emphasize the effective information in the feature maps, improving the network's ability to extract semantic information at each scale. Then, we fuse high- and low-dimensional feature maps to enhance the network's ability to obtain small-scale semantic information. In addition, a feature aggregation module automatically adjusts the input multi-level feature weights to highlight the details of different features. Finally, we design a fully connected conditional random field to address the problem that noise in the input image and local minima during training can propagate to the output layer, resulting in the loss of edge features. Experimental results show that the proposed method achieves overall accuracy and recall of 0.9695 and 0.8218, respectively, delivering higher segmentation accuracy with the shortest time consumption compared with other state-of-the-art methods.
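A squeeze-and-excitation-style channel-attention step, of the general kind this abstract describes, can be sketched in NumPy. The layer sizes and random weights stand in for learned parameters and are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def channel_attention(feat, reduction=2):
    """Channel attention on a (C, H, W) feature map:
    global average pool -> small bottleneck MLP -> sigmoid gates -> rescale."""
    c = feat.shape[0]
    rng = np.random.default_rng(0)                 # stand-in for learned weights
    w1 = rng.normal(scale=0.1, size=(c, c // reduction))
    w2 = rng.normal(scale=0.1, size=(c // reduction, c))
    squeeze = feat.mean(axis=(1, 2))               # (C,) global context per channel
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # per-channel weights in (0, 1)
    return feat * gates[:, None, None]             # emphasize informative channels

feat = np.random.default_rng(1).random((8, 16, 16))
out = channel_attention(feat)
```

Each channel is rescaled by a weight in (0, 1) computed from its own global statistics, which is how attention suppresses uninformative channels before multi-scale fusion.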
Affiliation(s)
- Xi Li
- Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing, China
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- Shilan Chen
- Chongqing Botong Water Conservancy Information Network Co., Ltd., Chongqing, China
- Jin Wu
- Chongqing Botong Water Conservancy Information Network Co., Ltd., Chongqing, China
- Jun Li
- Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing, China
- Ting Wang
- Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing, China
- Junquan Tang
- Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing, China
- Tongyi Hu
- Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing, China
- Wenzhu Wu
- Foundation Department, Chongqing Medical and Pharmaceutical College, Chongqing, China
5
Liao L. Artificial Intelligence-Based English Vocabulary Test Research on Cognitive Web Services Platforms. Int J e-Collab 2023. DOI: 10.4018/ijec.316656.
Abstract
Automated interaction between agents necessitates the ability of these agents to discover and select from a set of similar (or identical) services. As a result, trust is used to assess the quality of different cognitive web services. This paper therefore proposes artificial intelligence-based English vocabulary test research (AI-EVTR) to meet students' requirements, and introduces pre-test behavior analysis to enhance the apps. Before and after the test period, both groups took a pre-test behavior analysis and a post-test log analysis of a "Vocabulary Test in English." The experimental group used an app-assisted English vocabulary questionnaire to share their scores, though they were not highly motivated by the app-assisted, machine-learning-based approach. Statistical approaches based on independent samples were used to analyze the acquired data. The experimental group improved greatly in spelling between the pre-test and the post-test, suggesting that website-based language learning can be a viable option.
Affiliation(s)
- Li Liao
- Jiangxi University of Technology, China
6
Asim M, Abd El-Latif AA. Intelligent computational methods for multi-unmanned aerial vehicle-enabled autonomous mobile edge computing systems. ISA Trans 2023; 132:5-15. PMID: 34933773; DOI: 10.1016/j.isatra.2021.11.021.
Abstract
This paper proposes a multi-unmanned aerial vehicle (UAV)-enabled autonomous mobile edge computing (MEC) system, in which several UAVs are deployed to provide services to user devices (UDs). The aim is to minimize the overall energy consumption of the autonomous system by designing optimal trajectories for multiple UAVs. The problem is too complicated to solve with traditional methods, as one must account for updating the deployment of stop points (SPs), the association of SPs with UDs and UAVs, and the design of optimal UAV trajectories. To tackle this problem, we propose a variable-length trajectory planning algorithm (VLTPA) consisting of three phases. In the first phase, the deployment of SPs is updated using a genetic algorithm (GA) with variable-length individuals. The association between UDs and SPs is then addressed with a closeness rule. Finally, a multi-chromosome GA is proposed to jointly handle the association of SPs with UAVs and their visiting order. The proposed VLTPA is tested in extensive experiments on eight instances ranging from 60 to 200 UDs, which reveal that it outperforms other state-of-the-art algorithms.
Affiliation(s)
- Muhammad Asim
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Ahmed A Abd El-Latif
- Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom 32511, Egypt
7
Chui KT, Gupta BB, Zhao M, Malibari A, Arya V, Alhalabi W, Ruiz MT. Enhancing Electrocardiogram Classification with Multiple Datasets and Distant Transfer Learning. Bioengineering (Basel) 2022; 9:683. PMID: 36421084; PMCID: PMC9687650; DOI: 10.3390/bioengineering9110683.
Abstract
Electrocardiogram classification is crucial for applications such as the medical diagnosis of cardiovascular diseases, the level of heart damage, and stress. A typical challenge of electrocardiogram classification problems is the small size of the datasets, which may limit the performance of classification models, particularly those based on deep-learning algorithms. Transfer learning has demonstrated effectiveness in transferring knowledge from a source model with a similar domain and can enhance the performance of the target model. Nevertheless, restricting consideration to datasets with similar domains limits the selection of source domains. In this paper, electrocardiogram classification is enhanced by distant transfer learning: a generative-adversarial-network-based auxiliary domain with a domain-feature-classifier negative-transfer-avoidance (GANAD-DFCNTA) algorithm is proposed to bridge knowledge transfer from distant source domains to target domains. To evaluate the performance of the proposed algorithm, eight benchmark datasets were chosen: four electrocardiogram datasets and four from the following distant domains: ImageNet, COCO, WordNet, and Sentiment140. The results showed an average accuracy improvement of 3.67-4.89%. The proposed algorithm was also compared with existing works using traditional transfer learning, revealing an average accuracy improvement of 0.303-5.19%. Ablation studies confirmed the effectiveness of the components of GANAD-DFCNTA.
Affiliation(s)
- Kwok Tai Chui
- Department of Electronic Engineering and Computer Science, School of Science and Technology, Hong Kong Metropolitan University, Hong Kong, China
- Brij B. Gupta
- International Center for AI and Cyber Security Research and Innovations, Department of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan
- Lebanese American University, Beirut 1102, Lebanon
- Center for Interdisciplinary Research, University of Petroleum and Energy Studies (UPES), Dehradun 248007, India
- Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mingbo Zhao
- School of Information Science & Technology, Donghua University, Shanghai 200051, China
- Areej Malibari
- Department of Industrial and Systems Engineering, College of Engineering, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Varsha Arya
- Lebanese American University, Beirut 1102, Lebanon
- Insights2Techinfo, India
- Wadee Alhalabi
- Immersive Virtual Reality Research Group, Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Department of Computer Science, Dar Alhekma University, Jeddah 22246, Saudi Arabia
- Miguel Torres Ruiz
- Instituto Politécnico Nacional, CIC, UPALM-Zacatenco, Mexico City 07320, Mexico
8
Gao Y, Ma C, Sheng A. Compound fault diagnosis for cooling dehumidifier based on RBF neural network improved by kernel principal component analysis and adaptive genetic algorithm. Soft Comput 2022. DOI: 10.1007/s00500-022-07509-7.
9
Guan X, Lv T, Lin Z, Huang P, Zeng J. D2D-Assisted Multi-User Cooperative Partial Offloading in MEC Based on Deep Reinforcement Learning. Sensors (Basel) 2022; 22:7004. PMID: 36146350; PMCID: PMC9502189; DOI: 10.3390/s22187004.
Abstract
Mobile edge computing (MEC) and device-to-device (D2D) communication can alleviate the resource constraints of mobile devices and reduce communication latency. In this paper, we construct a D2D-MEC framework and study multi-user cooperative partial offloading and computing resource allocation. We maximize the number of served devices under the application's maximum delay constraints and limited computing resources. In the considered system, each user can offload its tasks to an edge server and a nearby D2D device. We first formulate the optimization problem, which is NP-hard, and then decouple it into two subproblems. The convex optimization method is used to solve the first subproblem, and the second subproblem is formulated as a Markov decision process (MDP). A deep reinforcement learning algorithm based on a deep Q network (DQN) is developed to maximize the number of tasks the system can compute. Extensive simulation results demonstrate the effectiveness and superiority of the proposed scheme.
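The learned offloading decision can be illustrated with a tabular Q-learning stand-in for the abstract's DQN; the toy delay model and exogenous load dynamics below are assumptions for illustration, not the paper's system model:

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES = 5   # discretized edge-server load levels
ACTIONS = 3    # 0: compute locally, 1: offload to D2D peer, 2: offload to edge

def delay(load, action):
    """Hypothetical per-slot delay: offloading to a heavily loaded edge is costly."""
    return [1.0, 0.6, 0.3 + 0.2 * load][action]

Q = np.zeros((N_STATES, ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(5000):
    # epsilon-greedy over delay costs (lower is better)
    a = int(rng.integers(ACTIONS)) if rng.random() < eps else int(Q[state].argmin())
    cost = delay(state, a)
    nxt = int(rng.integers(N_STATES))   # edge load evolves exogenously in this toy
    # TD update toward cost + discounted best next-state cost
    Q[state, a] += alpha * (cost + gamma * Q[nxt].min() - Q[state, a])
    state = nxt

policy = Q.argmin(axis=1)   # per load level: preferred offloading choice
```

The learned policy prefers the edge server while its load is low and switches to the D2D peer once edge delay exceeds the D2D cost; a DQN replaces the table with a network when the state space is too large to enumerate.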
Affiliation(s)
- Xin Guan
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China
- Tiejun Lv
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China
- Zhipeng Lin
- Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics (NUAA), Nanjing 211106, China
- Pingmu Huang
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China
- Jie Zeng
- School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
10
Ensemble feature selection for multi-label text classification: An intelligent order statistics approach. Int J Intell Syst 2022. DOI: 10.1002/int.23044.
11
Asim M, Mashwani WK, Abd El-Latif AA. Energy and task completion time minimization algorithm for UAVs-empowered MEC system. Sustain Comput Inform Syst 2022; 35:100698. DOI: 10.1016/j.suscom.2022.100698.
12
Riyahi M, Rafsanjani MK, Gupta BB, Alhalabi W. Multiobjective whale optimization algorithm-based feature selection for intelligent systems. Int J Intell Syst 2022. DOI: 10.1002/int.22979.
Affiliation(s)
- Milad Riyahi
- Department of Computer Science, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran
- Marjan K. Rafsanjani
- Department of Computer Science, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran
- Brij B. Gupta
- Department of Computer Science and Information Engineering, Asia University, Taichung, Taiwan
- Lebanese American University, Beirut, Lebanon
- Center for Interdisciplinary Research, UPES, Dehradun, India
- Research and Innovation Department, Skyline University College, Sharjah, United Arab Emirates
- Wadee Alhalabi
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, Florida, USA
13
An MRI Scans-Based Alzheimer's Disease Detection via Convolutional Neural Network and Transfer Learning. Diagnostics (Basel) 2022; 12:1531. PMID: 35885437; PMCID: PMC9318866; DOI: 10.3390/diagnostics12071531.
Abstract
Alzheimer's disease (AD) is the most common type (>60%) of dementia and can wreak havoc on the psychological and physiological development of sufferers and their carers, as well as on economic and social development. Owing to the shortage of medical staff, automatic diagnosis of AD has become more important for relieving the workload of medical staff and increasing the accuracy of medical diagnoses. Using common MRI scans as inputs, an AD detection model has been designed using a convolutional neural network (CNN). To enhance the fine-tuning of hyperparameters and, thus, the detection accuracy, transfer learning (TL) is introduced, which brings in domain knowledge from heterogeneous datasets. A generative adversarial network (GAN) is applied to generate additional training data for the minority classes of the benchmark datasets. Performance evaluation and analysis using three benchmark (OASIS-series) datasets revealed the effectiveness of the proposed method, which increases the accuracy of the detection model by 2.85-3.88%, 2.43-2.66%, and 1.8-40.1% in the ablation studies of GAN and TL and in the comparison with existing works, respectively.
14
Nwogbaga NE, Latip R, Affendey LS, Rahiman ARA. Attribute reduction based scheduling algorithm with enhanced hybrid genetic algorithm and particle swarm optimization for optimal device selection. J Cloud Comput 2022. DOI: 10.1186/s13677-022-00288-4.
Abstract
The applications of the Internet of Things (IoT) in different areas, and the resources these applications demand, are on the increase. However, the limitations of IoT devices, such as processing capability, storage, and energy, are challenging. Computational offloading was introduced to ameliorate the limitations of mobile devices, but offloading large amounts of data to a remote node introduces additional transmission delay. Therefore, in this paper we propose a dynamic task-scheduling algorithm based on attribute reduction, with an enhanced hybrid genetic algorithm and particle swarm optimization for optimal device selection. The proposed method uses a rank accuracy estimation model to decide the rank-1 value to be applied in the decomposition; canonical polyadic decomposition-based attribute reduction is then applied to the offloadable task to reduce its data size. An enhanced hybrid genetic algorithm and particle swarm optimization is developed to select the optimal device in either the fog or the cloud. The proposed algorithm improves the response time, delay, number of offloaded tasks, throughput, and energy consumption of IoT requests. The simulation is implemented with iFogSim and the Java programming language. The proposed method can be applied in smart cities, monitoring, health delivery, augmented reality, and gaming, among others.
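A hybrid GA-PSO loop of the general kind the abstract names can be sketched as follows; the stand-in objective, coefficients, and operators are illustrative assumptions in place of the paper's device-selection cost:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, POP, ITERS = 4, 20, 100

def cost(x):
    """Stand-in objective: distance to a hypothetical ideal allocation vector."""
    return float(np.sum((x - 0.3) ** 2))

pos = rng.random((POP, DIM))
vel = np.zeros((POP, DIM))
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    # PSO step: pull particles toward personal and global bests
    r1, r2 = rng.random((2, POP, DIM))
    vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    # GA step: uniform crossover with a shuffled partner, light mutation
    partners = rng.permutation(POP)
    mask = rng.random((POP, DIM)) < 0.5
    pos = np.where(mask, pos, pos[partners])
    pos += rng.normal(scale=0.01, size=(POP, DIM)) * (rng.random((POP, DIM)) < 0.1)
    # elitist bookkeeping: personal and global bests only ever improve
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

best_cost = float(pbest_f.min())
```

The PSO step provides fast exploitation around the incumbent best while the GA crossover and mutation inject diversity, which is the usual motivation for hybridizing the two.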
15
MEC Computation Offloading-Based Learning Strategy in Ultra-Dense Networks. Information 2022. DOI: 10.3390/info13060271.
Abstract
Mobile edge computing (MEC) has the potential to enable intensive applications in 5G networks. By migrating intensive tasks to edge servers, MEC can expand the computing power of wireless networks. Fifth-generation networks must meet service requirements such as wide coverage, high capacity, low latency, and low power consumption. The network architecture of MEC combined with ultra-dense networks (UDNs) will therefore become a typical model in the future. This paper designs a MEC architecture in a UDN, which forms our research background. First, the system model is established in the UDN and the optimization problem is formulated. Second, an action classification (AC) algorithm is utilized to filter the effective actions in Q-learning. The optimal computation-offloading strategy and resource-allocation scheme are then obtained using a deep reinforcement learning-based AC algorithm, known as the DQN-AC algorithm. Finally, simulation experiments show that the proposed DQN-AC algorithm can effectively reduce the system weighted cost compared with the full local computation, full offloading computation, and Q-learning algorithms.
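The action-classification idea, filtering infeasible actions before the greedy step of Q-learning, can be sketched like this; the parity-based feasibility rule and the table sizes are hypothetical placeholders, not the paper's classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 4, 6   # e.g. 6 (server, channel) offloading combinations

def valid_actions(state):
    """Action-classification step: only a subset of offloading actions is
    feasible in each state (the parity rule here is a hypothetical placeholder)."""
    return [a for a in range(N_ACTIONS) if (state + a) % 2 == 0]

Q = np.zeros((N_STATES, N_ACTIONS))   # cost table; lower is better

def select_action(state, eps=0.1):
    """Epsilon-greedy restricted to the filtered set, so exploration is
    never wasted on infeasible offloading choices."""
    acts = valid_actions(state)
    if rng.random() < eps:
        return int(rng.choice(acts))
    masked = np.full(N_ACTIONS, np.inf)   # infeasible actions can never win argmin
    masked[acts] = Q[state, acts]
    return int(masked.argmin())
```

Masking shrinks the effective action space per state, which is what lets the learner converge faster than plain Q-learning over the full discrete action set.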
16
Resource Allocation Strategy of the Educational Resource Base for MEC Multiserver Heuristic Joint Task. Comput Intell Neurosci 2022; 2022:4818767. PMID: 35607471; PMCID: PMC9124079; DOI: 10.1155/2022/4818767.
Abstract
This paper analyzes the application of the MEC multiserver heuristic joint task to resource allocation in an educational resource database. After constructing the educational resource database scenario, a mathematical model is built along the dimensions of the local execution strategy, offloading execution, and the given educational resource allocation, in order to achieve optimal allocation of educational resources through MEC. The results show that the DOOA scheme performs well in terms of computation cost and timeout rate. Compared with other benchmark schemes, the DQN-based offloading scheme performs better, can effectively balance the load, and outperforms the random offloading scheme and the SNR-based offloading scheme in terms of delay and computation cost. The results also track the proportion of total content requests accounted for by the total hits of category-1 users' content requests: the curves dip slightly at the 15,000 and 30,000 time slots and then continue to rise. This shows that the proposed scheme can automatically adjust the caching strategy to adapt to changes in content popularity, demonstrating that the agent can correctly perceive trends in content popularity even when that popularity is unknown, and can improve the caching strategy accordingly to raise the cache hit rate. The allocation of educational resources based on the MEC multiserver heuristic joint task is therefore more reasonable and can achieve the optimal solution.
17
Sedik A, Maleh Y, El Banby GM, Khalaf AA, Abd El-Samie FE, Gupta BB, Psannis K, Abd El-Latif AA. AI-enabled digital forgery analysis and crucial interactions monitoring in smart communities. Technol Forecast Soc Change 2022; 177:121555. DOI: 10.1016/j.techfore.2022.121555.
18
Chiang TA, Che ZH, Huang YL, Tsai CY. Using an Ontology-Based Neural Network and DEA to Discover Deficiencies of Hotel Services. Int J Semant Web Inf Syst 2022. DOI: 10.4018/ijswis.306748.
Abstract
Companies can gain critical real-time insights into customer requirements and service evaluation by mining social media. To assess service performance and remedy service deficiencies in hotels, this research proposes a benchmark-based performance evaluation model for hotel services that enables hotel managers to assess service performance. For non-benchmark service hotels, the identification and improvement model for non-benchmark criteria can recognize and analyze the required performance improvements for those criteria. To understand the causes of service deficiencies, this research mines online posts and creates a hierarchical ontology of hotel service deficiencies. A hierarchical ontology-based neural network is proposed to automatically identify the causes of service deficiencies. Using an online forum as a case study, the approach identifies the causes of service deficiencies with 92.68% accuracy. The analytical results demonstrate the significant effectiveness and practical value of the proposed methodology.
Collapse
Affiliation(s)
- Z. H. Che
- National Taipei University of Technology, Taiwan
- Yi-Ling Huang
- Universal Global Scientific Industrial Co., Ltd., Taiwan
19
Sedik A, Hammad M, Abd El-Samie FE, Gupta BB, Abd El-Latif AA. Efficient deep learning approach for augmented detection of Coronavirus disease. Neural Comput Appl 2022. [PMID: 33487885 DOI: 10.1016/j.compeleceng.2022.108011] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/29/2023]
Abstract
The new Coronavirus disease 2019 (COVID-19) is rapidly affecting the world population with statistics quickly falling out of date. Due to the limited availability of annotated Coronavirus X-ray and CT images, the detection of COVID-19 remains the biggest challenge in diagnosing this disease. This paper provides a promising solution by proposing a COVID-19 detection system based on deep learning. The proposed deep learning modalities are based on convolutional neural network (CNN) and convolutional long short-term memory (ConvLSTM). Two different datasets are adopted for the simulation of the proposed modalities. The first dataset includes a set of CT images, while the second dataset includes a set of X-ray images. Both of these datasets consist of two categories: COVID-19 and normal. In addition, COVID-19 and pneumonia image categories are classified in order to validate the proposed modalities. The proposed deep learning modalities are tested on both X-ray and CT images as well as a combined dataset that includes both types of images. They achieved an accuracy of 100% and an F1 score of 100% in some cases. The simulation results reveal that the proposed deep learning modalities can be considered and adopted for quick COVID-19 screening.
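The exact CNN and ConvLSTM architectures are not given in the abstract. As a minimal sketch of the convolution operation at the core of any CNN-based detector, the function below computes a "valid" 2-D cross-correlation (stride 1, no padding — both assumptions made for illustration, not details from the paper).

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation, the core op of a CNN layer.

    image and kernel are lists of lists of numbers; stride 1, no padding,
    so the output shrinks by (kernel size - 1) in each dimension.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out
```

A real detector stacks many such filters with nonlinearities and pooling; this shows only the sliding-window sum a single filter performs over an X-ray or CT slice.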
Affiliation(s)
- Ahmed Sedik
- Department of the Robotics and Intelligent Machines, Kafrelsheikh University, Kafrelsheikh, Egypt
- Mohamed Hammad
- Information Technology Department, Faculty of Computers and Information, Menoufia University, Shebeen El-Kom, Egypt
- Fathi E Abd El-Samie
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 84428, Saudi Arabia
- Brij B Gupta
- National Institute of Technology, Kurukshetra, India
- Department of Computer Science and Information Engineering, Asia University, Taichung City, Taiwan
- Ahmed A Abd El-Latif
- Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebeen El-Kom, 32511, Egypt
20
Cognitive Load Balancing Approach for 6G MEC Serving IoT Mashups. MATHEMATICS 2021. [DOI: 10.3390/math10010101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The sixth generation (6G) of communication networks represents more of a revolution than an evolution of the previous generations, providing new directions and innovative approaches to face the network challenges of the future. A crucial aspect is to make the best use of available resources to support an entirely new generation of services. From this viewpoint, the Web of Things (WoT), which enables Things to become Web Things that can be chained, used and re-used in IoT mashups, allows interoperability among IoT platforms. At the same time, Multi-access Edge Computing (MEC) brings computing and data storage to the edge of the network, creating so-called distributed and collective edge intelligence. Such intelligence is needed to deal with the huge amount of data to be collected, analyzed and processed from real-world contexts, such as smart cities, which are evolving into dynamic and networked systems of people and things. To better exploit this architecture, it is crucial to break monolithic applications into modular microservices, which can be executed independently. Here, we propose an approach based on complex network theory and two weighted and interdependent multiplex networks to address the Microservices-compliant Load Balancing (McLB) problem in MEC infrastructure. Our findings show that the multiplex network representation adds an extra dimension of analysis, allowing us to capture the complexity of WoT mashup organization and its impact on the organizational aspect of MEC servers. The impact of this extracted knowledge on the cognitive organization of MEC is quantified through heuristics engineered to guarantee load balancing and, consequently, QoS.
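The multiplex-network heuristics themselves are not detailed in the abstract. A minimal stand-in for the load-balancing goal is a greedy least-loaded placement of microservices onto MEC servers, sketched below; the server ids, service names, and CPU demands are illustrative assumptions.

```python
import heapq

def place_microservices(loads, servers):
    """Greedy load balancing: assign each microservice (heaviest first)
    to the currently least-loaded MEC server.

    loads: {microservice_name: cpu_demand}; servers: list of server ids.
    Returns ({server: [microservices]}, {server: total_load}).
    """
    heap = [(0.0, s) for s in servers]      # (current load, server id)
    heapq.heapify(heap)
    placement = {s: [] for s in servers}
    totals = {s: 0.0 for s in servers}
    for name, demand in sorted(loads.items(), key=lambda kv: -kv[1]):
        load, s = heapq.heappop(heap)       # least-loaded server so far
        placement[s].append(name)
        totals[s] = load + demand
        heapq.heappush(heap, (totals[s], s))
    return placement, totals
```

Sorting services by descending demand before greedy assignment is the classic longest-processing-time heuristic, which keeps the final loads close to balanced; the McLB approach in the paper additionally exploits the multiplex-network structure of the mashups.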
21
Machine Learning Techniques in the Energy Consumption of Buildings: A Systematic Literature Review Using Text Mining and Bibliometric Analysis. ENERGIES 2021. [DOI: 10.3390/en14227810] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The high level of energy consumption of buildings is significantly influencing occupant behavior changes towards improved energy efficiency. This paper introduces a systematic literature review with two objectives: to understand the most relevant factors affecting the energy consumption of buildings and to find the best intelligent computing (IC) methods capable of classifying and predicting the energy consumption of different types of buildings. Adopting the PRISMA method, the paper analyzed 822 manuscripts from 2013 to 2020 and focused on 106, based on title and abstract screening and on manuscripts with experiments. A text mining process and a bibliometric map tool (VOSviewer) were adopted to find the most used terms and their relationships in the energy and IC domains. Our approach shows that the terms “consumption,” “residential,” and “electricity” are the most relevant terms in the energy domain, in terms of the ratio of important terms (TITs), whereas “cluster” is the most commonly used term in the IC domain. The paper also shows strong relations between “Residential Energy Consumption” and “Electricity Consumption,” “Heating,” and “Climate.” Finally, we checked and analyzed 41 manuscripts in detail, summarized their major contributions, and identified several research gaps that provide hints for further research.
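The precise TIT computation is not given in the abstract. A plausible sketch, assuming a term's ratio is its share of all tracked-term occurrences across the corpus (the sample abstracts and term list below are made up):

```python
from collections import Counter
import re

def term_ratios(abstracts, terms):
    """Share of each tracked term among all tracked-term occurrences.

    A rough sketch of a ratio-of-important-terms statistic; the exact
    TIT definition used in the review may differ.
    """
    counts = Counter()
    for text in abstracts:
        words = re.findall(r"[a-z]+", text.lower())
        for t in terms:
            counts[t] += words.count(t)
    total = sum(counts.values()) or 1   # avoid division by zero
    return {t: counts[t] / total for t in terms}
```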
22
A Smart Healthcare Recommendation System for Multidisciplinary Diabetes Patients with Data Fusion Based on Deep Ensemble Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:4243700. [PMID: 34567101 PMCID: PMC8463188 DOI: 10.1155/2021/4243700] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 07/09/2021] [Accepted: 09/06/2021] [Indexed: 12/12/2022]
Abstract
Predicting human diseases precisely remains an uphill battle for better and timely treatment. Diabetes is a life-threatening multidisciplinary disease worldwide, attacking different vital parts of the human body and leading to complications such as neuropathy, retinopathy, nephropathy, and ultimately heart disease. A smart healthcare recommendation system predicts and recommends the diabetic disease accurately using optimal machine learning models with a data fusion technique on healthcare datasets. Various machine learning models and methods have been proposed in the recent past to predict diabetes disease, but these systems cannot properly handle the massive multifeature datasets on diabetes disease. This paper proposes a smart healthcare recommendation system for diabetes disease based on deep machine learning and data fusion perspectives. Using data fusion, we can eliminate irrelevant computational burden and increase the proposed system's performance in predicting and recommending this life-threatening disease more accurately. Finally, an ensemble machine learning model is trained for diabetes prediction. This intelligent recommendation system is evaluated on a well-known diabetes dataset, and its performance is compared with the most recent developments from the literature. The proposed system achieved 99.6% accuracy, higher than existing deep machine learning methods. Therefore, the proposed system is better suited to multidisciplinary diabetes disease prediction and recommendation, and its improved diagnostic performance advocates for its employment in automated diagnostic and recommendation systems for diabetic patients.
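The abstract does not specify the ensemble scheme. A minimal sketch assuming simple majority voting over base classifiers (the lambdas below are stand-ins for the paper's trained models, not reproductions of them):

```python
def ensemble_predict(models, sample):
    """Majority vote over base classifiers — one common ensemble scheme.

    models: list of callables mapping a fused feature vector to 0 or 1.
    Returns 1 only if a strict majority of the base learners vote 1.
    """
    votes = [m(sample) for m in models]
    return int(sum(votes) > len(votes) / 2)
```

In practice the base learners would be trained models (e.g., trees or networks) fed the fused healthcare features, and the vote could be weighted by each learner's validation accuracy.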
23
Survey on Intelligence Edge Computing in 6G: Characteristics, Challenges, Potential Use Cases, and Market Drivers. FUTURE INTERNET 2021. [DOI: 10.3390/fi13050118] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Intelligence Edge Computing (IEC) is a key enabler of emerging 5G networks and beyond, and is considered a promising backbone of future services and wireless communication systems in 5G integration. In addition, IEC enables various use cases and applications, including autonomous vehicles, augmented and virtual reality, big data analytics, and other customer-oriented services. Moreover, it is one of the 5G technologies that has most enhanced market drivers in different fields such as customer service, healthcare, education, IoT in agriculture, and energy sustainability. However, 5G technological improvements face many challenges, such as traffic volume, privacy, security, digitization capabilities, and required latency. Therefore, 6G is considered a promising technology for the future. To this end, compared to other surveys, this paper provides a comprehensive survey and an inclusive overview of Intelligence Edge Computing (IEC) technologies in 6G, focusing on the main up-to-date characteristics, challenges, potential use cases, and market drivers. Furthermore, we summarize research efforts on IEC in 5G from 2014 to 2021, highlighting the integration of IEC and 5G technologies. Finally, open research challenges and new future directions for IEC in 6G networks are discussed.