1
Nguyen HD, Van CP, Nguyen TG, Dang DK, Pham TTN, Nguyen QH, Bui QT. Soil salinity prediction using hybrid machine learning and remote sensing in Ben Tre province on Vietnam's Mekong River Delta. Environmental Science and Pollution Research International 2023. [PMID: 37204580] [DOI: 10.1007/s11356-023-27516-x] [Citation(s) in RCA: 0] [Received: 12/19/2022] [Accepted: 05/04/2023] [Indexed: 05/20/2023]
Abstract
Soil salinization is a disaster with significant effects on agricultural activities in many parts of the world, particularly in the context of climate change and sea-level rise, and it has become increasingly severe in the Mekong River Delta of Vietnam. Soil salinity monitoring and assessment are therefore critical for building appropriate strategies to develop agricultural activities. This study aims to develop a low-cost method based on machine learning and remote sensing to map soil salinity in Ben Tre province, located in Vietnam's Mekong River Delta. This objective was achieved using XGBoost (XGR), six optimization algorithms, namely the sparrow search algorithm (SSA), bird swarm algorithm (BSA), moth search algorithm (MSA), Harris hawks optimization (HHO), grasshopper optimization algorithm (GOA), and particle swarm optimization (PSO), and 43 factors extracted from remote sensing images. Three indices, namely the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2), were used to estimate the efficiency of the prediction models. The results show that all six optimization algorithms improved XGR model performance, yielding R2 values above 0.91. Among the proposed models, XGR-HHO was best (R2 = 0.99, RMSE = 0.051), followed by XGR-GOA (R2 = 0.931, RMSE = 0.055), XGR-MSA (R2 = 0.928, RMSE = 0.06), XGR-BSA (R2 = 0.926, RMSE = 0.062), XGR-SSA (R2 = 0.917, RMSE = 0.07), XGR-PSO (R2 = 0.916, RMSE = 0.08), XGR alone (R2 = 0.867, RMSE = 0.1), CatBoost (R2 = 0.78, RMSE = 0.12), and random forest (R2 = 0.75, RMSE = 0.19). All proposed hybrid models surpassed the reference models (CatBoost and random forest). The results also indicate that soils in the eastern areas of Ben Tre province are more saline than those in the western areas.
The results of this study highlight the effectiveness of hybrid machine learning and remote sensing in soil salinity monitoring. The findings provide essential tools to support farmers and policymakers in selecting appropriate crop types in the context of climate change to ensure food security.
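The hybrid scheme described here (a metaheuristic searching XGR's hyperparameter space to minimise validation error) follows a common pattern that can be sketched as below. This is a minimal illustration, not the paper's implementation: the PSO constants, the two tuned parameters, and the `fake_rmse` stand-in objective (which in the real setting would train the XGR model on the remote-sensing factors and return validation RMSE) are all assumptions.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle is a candidate
    hyperparameter vector and the swarm moves toward the best score seen."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [objective(p) for p in pos]     # and their scores
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # pull toward the particle's own best and the swarm's best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: in the paper's setting this would train the XGR model
# with the proposed (learning_rate, max_depth) and return validation RMSE.
def fake_rmse(params):
    lr, depth = params
    return (lr - 0.1) ** 2 + (depth - 6.0) ** 2

random.seed(42)
best, score = pso_minimize(fake_rmse, bounds=[(0.01, 0.5), (2.0, 10.0)])
```

Swapping `fake_rmse` for a real cross-validated training run is all that changes between this toy and the XGR-PSO setting; the other hybrids differ only in how candidate positions are moved.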
Affiliation(s)
- Huu Duy Nguyen
- Faculty of Geography, VNU University of Science, Vietnam National University, Hanoi, Vietnam
- Chien Pham Van
- Thuyloi University, 175 Tay Son, Dong Da, Hanoi, Vietnam
- Tien Giang Nguyen
- Faculty of Hydrology, Meteorology and Oceanography, VNU University of Science, Vietnam National University, 334 Nguyen Trai, Thanh Xuan District, Hanoi, Vietnam
- Dinh Kha Dang
- Faculty of Hydrology, Meteorology and Oceanography, VNU University of Science, Vietnam National University, 334 Nguyen Trai, Thanh Xuan District, Hanoi, Vietnam
- Thi Thuy Nga Pham
- Center for Environmental Fluid Dynamics, VNU University of Science, Vietnam National University, 334 Nguyen Trai, Thanh Xuan District, Hanoi, Vietnam
- Quoc-Huy Nguyen
- Faculty of Geography, VNU University of Science, Vietnam National University, Hanoi, Vietnam
- Quang-Thanh Bui
- Faculty of Geography, VNU University of Science, Vietnam National University, Hanoi, Vietnam
2
Weng P, Xie J, Zou Y. Compressive strength prediction of admixed HPC concrete by hybrid deep learning approaches. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-221714] [Citation(s) in RCA: 0] [Indexed: 02/12/2023]
Abstract
Estimating the compressive strength of high-performance concrete (HPC) experimentally is time-consuming, costly, and labor-intensive. At the same time, the vast volume of concrete consumed in industrial construction requires an optimal mix design, with different ingredient percentages, to reach the highest compressive strength. The present study considered two deep learning approaches for compressive strength prediction. The robustness of the deep models was strengthened by two novel optimization algorithms, a novelty in the research field, which were responsible for optimizing the model structure. A dataset containing cement, silica fume, fly ash, total aggregate amount, coarse aggregate amount, superplasticizer, water, curing time, and HPC compressive strength was used to develop the models. The results indicate that the AMLP-I and GMLP-I models achieved the highest prediction accuracy. The R2 and RMSE of AMLP-I stood at 0.9895 and 1.7341, respectively, indicating that AMLP-I can be presented as a robust model for estimating compressive strength. In general, using optimization algorithms to boost the capabilities of prediction models by tuning their internal characteristics has increased the reliability of artificial intelligence approaches as substitutes for more experimental practices.
Affiliation(s)
- Peng Weng
- Changzhou University Huaide College, Jingjiang, China
- Jingjing Xie
- Changzhou University Huaide College, Jingjiang, China
- Yang Zou
- Shanghai Construction No. 2 (Group) Co., Ltd, Shanghai, China
3
Belhadi A, Djenouri Y, Srivastava G, Lin JCW. Fast and Accurate Framework for Ontology Matching in Web of Things. ACM Transactions on Asian and Low-Resource Language Information Processing 2023. [DOI: 10.1145/3578708] [Citation(s) in RCA: 0] [Indexed: 01/19/2023]
Abstract
The Web of Things (WoT) can help with knowledge discovery and interoperability issues in many Internet of Things (IoT) applications. This paper focuses on semantic modeling of the WoT and proposes a new approach, Decomposition for Ontology Matching (DOM), to discover relevant knowledge by exploring correlations between WoT data using decomposition strategies. The DOM technique adopts several decomposition techniques to organize the highly linked ontologies of WoT data into similar groups. The main idea is to decompose the instances of each ontology into similar groups and then match the instances of corresponding groups instead of all instances of the two ontologies. Three decomposition algorithms have been developed. The first is based on radar scanning, which determines the distribution of distances between each instance and all other instances in order to determine the cluster centroid. The second is based on adaptive grid clustering and focuses on distribution information and the construction of spanning trees. The third is based on split-index clustering, where instances are divided into groups of cells from which noise is removed during the merging process. Several studies were conducted with different ontology databases to illustrate the use of the DOM technique. The results show that DOM outperforms state-of-the-art ontology matching models in computational cost while maintaining the quality of the matching, and that it is capable of handling various large datasets in WoT contexts.
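The core idea of the abstract, decompose the instances into similar groups and then match only within corresponding groups, can be sketched as follows. This is a hypothetical illustration, not the paper's DOM algorithms: instances are reduced to 2-D feature vectors, grouping is plain nearest-centroid assignment rather than radar scanning or grid clustering, and the names `decompose` and `match_within_groups` are invented for the sketch.

```python
import math

def decompose(instances, centroids):
    """Assign each instance (a feature vector) to its nearest centroid,
    producing one group per centroid -- a single decomposition pass."""
    groups = {i: [] for i in range(len(centroids))}
    for inst in instances:
        k = min(range(len(centroids)),
                key=lambda i: math.dist(inst, centroids[i]))
        groups[k].append(inst)
    return groups

def match_within_groups(groups_a, groups_b, tol=0.5):
    """Match only instances that fell into the same group, instead of
    comparing every instance pair across the two ontologies; 'matching'
    here is just a distance threshold."""
    matches = []
    for k in groups_a:
        for a in groups_a[k]:
            for b in groups_b.get(k, []):
                if math.dist(a, b) <= tol:
                    matches.append((a, b))
    return matches
```

The saving is the point: if each ontology's n instances split evenly into g groups, the pairwise comparisons drop from n² to roughly n²/g, which is why decomposition cuts computational cost without discarding close candidate pairs.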
Affiliation(s)
- Gautam Srivastava
- Brandon University, Canada and China Medical University, Taiwan and Lebanese American University, Lebanon
5
Attracting Potential Customers in E-Commerce Environments: A Comparative Study of Metaheuristic Algorithms. Processes (Basel) 2022. [DOI: 10.3390/pr10020369] [Citation(s) in RCA: 1] [Indexed: 02/05/2023] [Open Access]
Abstract
Internet technology has given businesses powerful new ways to attract new customers, track their behaviour, and customise services, products, and advertising. This technology and the trend toward online shopping have resulted in the establishment of numerous websites that sell products daily. In online shopping, products compete to be displayed on the limited pages of a website because placement has a significant impact on sales, and website designers carefully select which products to display on a page in order to influence customers' purchasing decisions. However, concerns regarding appropriate decision making have not been fully addressed. This study therefore conducts a comprehensive comparative analysis of the performance of ten recent metaheuristics: the ant lion optimiser (ALO), dragonfly algorithm (DA), grasshopper optimisation algorithm (GOA), Harris hawks optimisation (HHO), moth-flame optimisation algorithm (MFO), multi-verse optimiser (MVO), sine cosine algorithm (SCA), salp swarm algorithm (SSA), whale optimisation algorithm (WOA), and grey wolf optimiser (GWO). The results show that MFO outperforms the other methods across all problem sizes, with an average normalised objective function of 81%, compared with 77% for ALO; HHO performs worst at 16%. The study's findings add new theoretical and practical insights to the growing body of knowledge about e-commerce environments and have implications for planners, policymakers, and managers, particularly in companies where an unplanned advertisement wastes the budget.
6
Ontology-Based Methodology for Knowledge Acquisition from Groupware. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12031448] [Citation(s) in RCA: 1] [Indexed: 12/10/2022]
Abstract
Groupware systems contain expert knowledge (explicit and tacit), primarily for solving problems, that is collected on the job through virtual teams; such knowledge should be harvested. A system that acquires the on-the-job knowledge of experts from groupware, in view of enriching intelligent agents, has become an important and in-demand technology in the field of knowledge technology, especially in this era of textual-data explosion driven in part by the ever-increasing remote-work culture. Before new knowledge from sentences in groupware can be acquired into an existing ontology, the groupware discussions must be processed to recognise concepts (especially new ones) and to find appropriate mappings between those concepts and the destination ontology. Several mapping procedures exist in the literature, but they were formulated for mapping two or more independent ontologies using concept similarities and require a significant amount of computation. With the goal of lowering the computational complexity, identification difficulties, and complications of inserting (hooking) a concept into an existing ontology, this paper proposes: (1) an ontology-based framework with changeable modules to harvest knowledge from groupware discussions; and (2) a facts enrichment approach (FEA) for identifying new concepts and inserting/hooking them from sentences into an existing ontology, taking into consideration the notions of equality, similarity, and equivalence of concepts. This approach can be implemented on any platform of choice using current or newly constructed modules that can be continually revised, extended, or made more sophisticated. In general, textual data is analysed in view of creating an ontology that can power intelligent agents.
The complete architecture of the framework is provided, and the evaluation results reveal that the proposed methodology performs significantly better than the universally recommended thresholds as well as existing works, showing a notably high improvement in the F1 score, which combines precision and recall. As future work, the study recommends developing algorithms to fully automate the framework and to harvest tacit knowledge from groupware.
7
Off-Site Construction Three-Echelon Supply Chain Management with Stochastic Constraints: A Modelling Approach. Buildings 2022. [DOI: 10.3390/buildings12020119] [Citation(s) in RCA: 14] [Indexed: 11/16/2022]
Abstract
Off-site construction is becoming more popular as more companies recognise the benefits of shifting the construction process away from the construction site and into a controlled manufacturing environment. However, challenges associated with the component supply chain have not been fully addressed. This study therefore proposes a model for three-echelon supply chain management in off-site construction with stochastic constraints. In this paper, multiple off-site factories produce various types of components and ship them to supplier warehouses to meet the needs of the construction sites. Each construction site is served directly by a supplier warehouse, and the service level of each warehouse is assumed to differ based on regional conditions. Because of the unpredictable nature of construction projects, demand at each construction site is stochastic, so each supplier warehouse should stock a certain number of components. The inventory control policy is reviewed periodically and takes the (R, s, S) form. Two objectives are considered: minimising total cost while achieving the desired delivery times for construction sites, and balancing driver workloads during the routeing stage. A grasshopper optimisation algorithm (GOA) and an exact method are used to solve this NP-hard problem. The findings contribute new theoretical and practical insights to a growing body of knowledge about supply chain management strategies in off-site construction, with implications for project planners, suppliers, policymakers, and managers, particularly in companies where an unplanned supply chain exacerbates project delays and cost overruns.
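The (R, s, S) policy mentioned in this abstract has a compact mechanical core, sketched below. The function name and the example stock levels (s = 20, S = 100) are illustrative assumptions, not values from the paper.

```python
def rss_review(inventory_position, s, S):
    """One periodic (R, s, S) review: every R periods the warehouse's
    inventory position is inspected; if it has fallen to or below the
    reorder point s, an order is placed to raise it back up to the
    order-up-to level S; otherwise nothing is ordered this cycle."""
    if inventory_position <= s:
        return S - inventory_position  # order quantity
    return 0                           # no order this review

# A warehouse holding 15 components with s = 20, S = 100 orders up to S,
# while one holding 45 waits until the next review period.
```

The stochastic site demand enters through `inventory_position`, which drops by a random amount between reviews; choosing s and S per warehouse is what links the policy to the regional service levels the model assumes.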
8
Qin P, Hu H, Yang Z. The improved grasshopper optimization algorithm and its applications. Sci Rep 2021; 11:23733. [PMID: 34887483] [PMCID: PMC8660903] [DOI: 10.1038/s41598-021-03049-6] [Citation(s) in RCA: 0] [Received: 10/11/2021] [Accepted: 11/26/2021] [Indexed: 11/09/2022] [Open Access]
Abstract
The grasshopper optimization algorithm (GOA), proposed in 2017, mimics the behavior of grasshopper swarms in nature to solve optimization problems. The basic GOA does not consider the influence of the gravity force on the updated position of each grasshopper, which can slow its convergence. Accordingly, an improved GOA (IGOA) is obtained in this paper through two ways of updating each grasshopper's position. In the first, the gravity force is introduced into the position update of the basic GOA. In the second, a velocity term is introduced, and the new position is obtained as the sum of the current position and the velocity. Each grasshopper then adopts one of the two update rules according to a probability. IGOA is first evaluated on 23 classical benchmark functions and then combined with a BP neural network, whose parameters it optimizes, to establish the prediction model IGOA-BPNN for predicting the closing prices of the Shanghai Stock Exchange Index and the air quality index (AQI) of Taiyuan, Shanxi Province. The experimental results show that IGOA is superior to the compared algorithms in terms of average values, and the IGOA-BPNN model has the smallest prediction errors. The proposed IGOA is therefore an effective and efficient optimization algorithm.
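The probabilistic choice between the two IGOA update rules can be sketched in one function. This is a simplified scalar illustration under stated assumptions: GOA's full social-interaction term (a sum of pairwise attraction/repulsion forces over the swarm) is collapsed into a single `social_term` argument, and the gravity constant and selection probability are illustrative defaults, not the paper's values.

```python
import random

def igoa_update(pos, social_term, velocity, g=9.8e-3, p=0.5):
    """One simplified IGOA position update for a single grasshopper.
    With probability p, apply the gravity-augmented GOA rule (social
    interaction minus a gravity pull); otherwise apply the velocity
    rule, where the new position is the current position plus the
    velocity."""
    if random.random() < p:
        # first rule: basic GOA update with the gravity force added
        return pos + social_term - g
    # second rule: explicit velocity update
    return pos + velocity
```

Over a full run, each grasshopper draws this choice anew at every iteration, so the swarm mixes gravity-corrected moves with momentum-like velocity moves instead of committing to either rule.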
Affiliation(s)
- Peng Qin
- School of Electrical and Control Engineering, North University of China, Taiyuan, 030051, Shanxi, China; Shanxi Key Laboratory of Information Detection and Processing, Taiyuan, 030051, Shanxi, China
- Hongping Hu
- School of Science, North University of China, Taiyuan, 030051, Shanxi, China
- Zhengmin Yang
- School of Science, North University of China, Taiyuan, 030051, Shanxi, China