1
Leiva D, Ramos-Tapia B, Crawford B, Soto R, Cisternas-Caneo F. A Novel Approach to Combinatorial Problems: Binary Growth Optimizer Algorithm. Biomimetics (Basel) 2024; 9:283. [PMID: 38786493 PMCID: PMC11117713 DOI: 10.3390/biomimetics9050283]
Abstract
The set-covering problem aims to find the smallest possible collection of subsets that together cover all the elements of a larger set. Its difficulty increases as the number of elements and subsets grows, making it a complex problem for which traditional integer-programming solutions may become inefficient on real-life instances. Given this complexity, various metaheuristics have been successfully applied to the set-covering problem and related problems. This study introduces, implements, and analyzes a novel metaheuristic inspired by the well-established Growth Optimizer algorithm, which draws on human behavioral patterns and has shown promise on complex optimization problems in continuous domains, where experimental results demonstrate its effectiveness and competitiveness against other strategies. The Growth Optimizer algorithm is modified and adapted to the realm of binary optimization to solve the set-covering problem, resulting in the Binary Growth Optimizer algorithm. The findings illustrate its capability to achieve competitive and efficient solutions in terms of resolution time and result quality.
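For illustration, the two quantities any binary metaheuristic for set covering must evaluate, feasibility and cost of a 0/1 selection vector, can be sketched as follows (a toy Python sketch on a hypothetical instance, not the authors' implementation):

```python
def is_cover(solution, subsets, n_elements):
    """True if the subsets selected by the binary vector cover 0..n_elements-1."""
    covered = set()
    for chosen, subset in zip(solution, subsets):
        if chosen:
            covered |= subset
    return len(covered) == n_elements

def cost(solution, weights):
    """Total weight of the selected subsets (the quantity to minimize)."""
    return sum(w for chosen, w in zip(solution, weights) if chosen)

# Hypothetical toy instance: cover {0, 1, 2, 3} with three candidate subsets.
subsets = [{0, 1}, {2, 3}, {1, 2}]
weights = [1, 1, 1]
print(is_cover([1, 1, 0], subsets, 4), cost([1, 1, 0], weights))  # True 2
```

A binary metaheuristic searches over such 0/1 vectors, minimizing `cost` among feasible (covering) solutions.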
Affiliation(s)
- Broderick Crawford
- Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile; (D.L.); (B.R.-T.); (R.S.); (F.C.-C.)
2
Abu Doush I, Awadallah MA, Al-Betar MA, Alomari OA, Makhadmeh SN, Abasi AK, Alyasseri ZAA. Archive-based coronavirus herd immunity algorithm for optimizing weights in neural networks. Neural Comput Appl 2023; 35:15923-15941. [PMID: 37273914 PMCID: PMC10115390 DOI: 10.1007/s00521-023-08577-y]
Abstract
The success of the supervised learning process for feedforward neural networks, especially the multilayer perceptron (MLP), depends on a suitable configuration of its controlling parameters (i.e., weights and biases). Normally, the gradient descent method is used to find the optimal values of the weights and biases, but it suffers from the local-optimum trap and slow convergence. Therefore, stochastic approximation methods such as metaheuristics are attractive alternatives. The coronavirus herd immunity optimizer (CHIO) is a recent human-based metaheuristic inspired by the herd immunity mechanism as a way to treat the spread of the coronavirus pandemic. In this paper, an external archive strategy is proposed and applied to direct the population toward more promising search regions. The external archive is maintained during the algorithm's evolution and saves the best solutions for later use. This enhanced version of CHIO is called ACHIO. The algorithm is used to train an MLP by finding its optimal controlling parameters, thereby improving classification accuracy. The proposed approach is evaluated on 15 classification datasets with between 2 and 10 classes. The performance of ACHIO is compared against six well-known swarm intelligence algorithms and the original CHIO in terms of classification accuracy. Interestingly, ACHIO produces accurate results that outperform the comparative methods on ten of the fifteen classification datasets, and very competitive results on the others.
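The general recipe of metaheuristic MLP training, encoding all weights and biases as one flat vector and scoring it by classification error, can be sketched in Python (a generic sketch with hypothetical toy data, not the paper's ACHIO code):

```python
import numpy as np

def unpack(vec, n_in, n_hid, n_out):
    """Split a flat parameter vector into MLP weights and biases."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def fitness(vec, X, y, n_hid):
    """Fitness of one search agent: classification error of the MLP
    encoded by `vec` (lower is better)."""
    W1, b1, W2, b2 = unpack(vec, X.shape[1], n_hid, int(y.max()) + 1)
    h = np.tanh(X @ W1 + b1)             # hidden-layer activations
    pred = (h @ W2 + b2).argmax(axis=1)  # predicted class per sample
    return float(np.mean(pred != y))

# Toy usage: 3 features, 4 hidden units, 2 classes.
rng = np.random.RandomState(0)
X, y = rng.randn(10, 3), np.tile([0, 1], 5)
dim = 3 * 4 + 4 + 4 * 2 + 2              # total parameter count
print(fitness(rng.randn(dim), X, y, 4))  # error rate in [0, 1]
```

Any population-based optimizer (CHIO, ACHIO, PSO, ...) can then evolve such vectors to minimize this fitness.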
Affiliation(s)
- Iyad Abu Doush
- College of Engineering and Applied Sciences, American University of Kuwait, Salmiya, Kuwait
- Computer Science Department, Yarmouk University, Irbid, Jordan
- Mohammed A. Awadallah
- Department of Computer Science, Al-Aqsa University, Gaza, Palestine
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, United Arab Emirates
- Mohammed Azmi Al-Betar
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
- Department of Information Technology, Al-Huson University College, Al-Balqa Applied University, Irbid, Jordan
- Sharif Naser Makhadmeh
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
- Ammar Kamal Abasi
- Machine Learning Department, Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, United Arab Emirates
3
A joint multiobjective optimization of feature selection and classifier design for high-dimensional data classification. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.01.069]
4
Xie J, Liu S, Chen J, Jia J. Huber loss based distributed robust learning algorithm for random vector functional-link network. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10362-7]
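For reference, the standard Huber loss named in this title (not necessarily the paper's exact distributed formulation) is:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond, so large
    residuals (outliers) influence the fit less than under squared error."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

print(huber(np.array([0.0, 0.5, 3.0])))  # 0, 0.125, 2.5
```

The linear tail is what makes learning "robust": one bad sample contributes at most a bounded gradient.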
5
Pan JS, Hu P, Snášel V, Chu SC. A survey on binary metaheuristic algorithms and their engineering applications. Artif Intell Rev 2022; 56:6101-6167. [PMID: 36466763 PMCID: PMC9684803 DOI: 10.1007/s10462-022-10328-9]
Abstract
This article presents a comprehensive, state-of-the-art survey of the engineering applications of binary metaheuristic algorithms. The surveyed work is categorized by application scenario and solution encoding, and the algorithms are described in detail to help researchers choose appropriate methods for related applications. Transfer functions are found to be the main binarization mechanism in metaheuristic algorithms, with the sigmoid function the most commonly adopted. The contributions reviewed include different implementations and applications of metaheuristic algorithms, as well as studies of engineering applications with different objective functions, such as single- and multi-objective formulations of feature selection, scheduling, layout, and engineering-structure optimization. The article identifies current difficulties and challenges raised by the review, and argues that novel binary algorithms, transfer functions, benchmark functions, time-consuming problems, and application integration remain to be addressed in future work.
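The sigmoid transfer function mentioned here, the most common route from a continuous metaheuristic to a binary one, can be sketched as follows (a generic S-shape binarization rule, not code from the survey):

```python
import math
import random

def sigmoid_transfer(x):
    """S-shaped transfer function: maps a continuous component of a
    search agent's position to the probability that the binary bit is 1."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=random):
    """Common stochastic binarization rule used by binary metaheuristics."""
    return [1 if rng.random() < sigmoid_transfer(x) else 0 for x in position]

random.seed(0)
# Strongly negative components almost surely map to 0, strongly positive to 1.
print(binarize([-8.0, 0.0, 8.0]))
```

Each continuous update step of the underlying algorithm is followed by such a binarization, so the search still operates on 0/1 solution vectors.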
Affiliation(s)
- Jeng-Shyang Pan
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, Shandong, China
- Department of Information Management, Chaoyang University of Technology, Taichung 413310, Taiwan
- Pei Hu
- Department of Information Management, Chaoyang University of Technology, Taichung 413310, Taiwan
- School of Computer Science and Software Engineering, Nanyang Institute of Technology, Nanyang 473004, Henan, China
- Václav Snášel
- Faculty of Electrical Engineering and Computer Science, VŠB—Technical University of Ostrava, Ostrava 70032, Moravskoslezský kraj, Czech Republic
- Shu-Chuan Chu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, Shandong, China
6
A cooperative genetic algorithm based on extreme learning machine for data classification. Soft Comput 2022. [DOI: 10.1007/s00500-022-07202-9]
7
Manifold-Based Multi-Deep Belief Network for Feature Extraction of Hyperspectral Image. Remote Sensing 2022. [DOI: 10.3390/rs14061484]
Abstract
Deep belief networks (DBNs) have been widely applied in hyperspectral imagery (HSI) processing. However, the original DBN model fails to exploit the prior knowledge in the training samples, which limits the discriminant capability of the extracted features for classification. In this paper, we propose a new deep learning method, termed manifold-based multi-DBN (MMDBN), to obtain deep manifold features of HSI. MMDBN uses a hierarchical initialization method that initializes the network from the local geometric structure hidden in the data. On this basis, a multi-DBN structure is built to learn deep features for each land-cover class, serving as the front-end of the whole model. Then, a discrimination manifold layer is developed to improve the discriminability of the extracted deep features. To discover the manifold structure contained in HSI, an intrinsic graph and a penalty graph are constructed in this layer using the label information of the training samples. After that, the deep manifold features can be obtained for classification. MMDBN not only effectively extracts deep features from each class in HSI, but also maximizes the margins between different manifolds in the low-dimensional embedding space. Experimental results on the Indian Pines, Salinas, and Botswana datasets reach 78.25%, 90.48%, and 97.35%, respectively, indicating that MMDBN achieves better classification performance than several state-of-the-art methods.
8
Bao Q, Zhang S, Guo J, Xu Z, Zhang Z. Modeling of dynamic data-driven approach for the distributed steel rolling heating furnace temperature field. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06917-y]
9
Jiang R, Zhang J, Tang Y, Feng J, Wang C. Self-adaptive DE algorithm without niching parameters for multi-modal optimization problems. Appl Intell 2022. [DOI: 10.1007/s10489-021-03003-z]
10
Liang J, Chen G, Qu B, Yue C, Yu K, Qiao K. Niche-based cooperative co-evolutionary ensemble neural network for classification. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107951]
11
Li H, Zhang L. A Bilevel Learning Model and Algorithm for Self-Organizing Feed-Forward Neural Networks for Pattern Classification. IEEE Trans Neural Netw Learn Syst 2021; 32:4901-4915. [PMID: 33017295 DOI: 10.1109/tnnls.2020.3026114]
Abstract
Conventional artificial neural network (ANN) learning algorithms for classification tasks, whether derivative-based or derivative-free, work by first training the ANN (or training and validating it) and then testing it: a two-stage, one-pass learning mechanism. This mechanism therefore may not guarantee the generalization ability of the trained ANN. In this article, a novel bilevel learning model is constructed for self-organizing feed-forward neural networks (FFNNs), in which the training and testing processes are integrated into a unified framework. In this bilevel model, the upper-level optimization problem addresses the testing error on the testing dataset and the network architecture based on network complexity, whereas the lower-level optimization problem addresses the network weights based on the training error on the training dataset. For this bilevel framework, an interactive learning algorithm is proposed to optimize the architecture and weights of an FFNN with consideration of both training error and testing error. In this interactive learning algorithm, a hybrid binary particle swarm optimization (BPSO), taken as the upper-level optimizer, is used to self-organize the network architecture, whereas the Levenberg-Marquardt (LM) algorithm, as the lower-level optimizer, is utilized to optimize the connection weights. The bilevel learning model and algorithm have been tested on 20 benchmark classification problems. Experimental results demonstrate that the bilevel learning algorithm produces significantly more compact FFNNs with better generalization ability than conventional learning algorithms.
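The bilevel scheme can be sketched in miniature. The code below is a deliberately simplified stand-in: random bit-flip hill climbing replaces BPSO at the upper level, a linear least-squares fit of output weights replaces Levenberg-Marquardt at the lower level, and the data are hypothetical; none of it is the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two noisy classes in 2-D, split into train and test halves.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Xtr, ytr, Xte, yte = X[::2], y[::2], X[1::2], y[1::2]

H_MAX = 16
W_in = rng.normal(size=(2, H_MAX))  # fixed random hidden layer

def lower_level(mask):
    """Lower level: fit output weights for the active hidden units
    (least squares, standing in for Levenberg-Marquardt)."""
    H = np.tanh(Xtr @ W_in[:, mask == 1])
    w, *_ = np.linalg.lstsq(H, ytr, rcond=None)
    return w

def upper_level_score(mask):
    """Upper level: test error plus a complexity penalty on network size."""
    if mask.sum() == 0:
        return np.inf
    w = lower_level(mask)
    pred = (np.tanh(Xte @ W_in[:, mask == 1]) @ w > 0.5).astype(int)
    return np.mean(pred != yte) + 0.01 * mask.sum()

# Upper-level search: random bit flips standing in for BPSO.
mask = rng.integers(0, 2, H_MAX)
if mask.sum() == 0:
    mask[0] = 1
best = upper_level_score(mask)
for _ in range(200):
    cand = mask.copy()
    cand[rng.integers(H_MAX)] ^= 1  # flip one architecture bit
    s = upper_level_score(cand)
    if s < best:
        mask, best = cand, s
print(mask.sum(), round(best, 3))
```

The key design point survives the simplification: the outer loop scores each candidate architecture only after the inner solver has fitted its weights, so architecture and weights are optimized interactively rather than in one pass.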
12
Training Feedforward Neural Network Using Enhanced Black Hole Algorithm: A Case Study on COVID-19 Related ACE2 Gene Expression Classification. Arab J Sci Eng 2021; 46:3807-3828. [PMID: 33520590 PMCID: PMC7823180 DOI: 10.1007/s13369-020-05217-8]
Abstract
The aim of this paper is twofold. First, the black hole algorithm (BHA) is proposed as a new training algorithm for feedforward neural networks (FNNs), since most traditional and metaheuristic algorithms for training FNNs suffer from slow convergence and getting stuck in local optima; BHA provides a reliable alternative to address these drawbacks. Second, complementary learning components and a Levy-flight random walk are introduced into BHA, resulting in a novel optimization algorithm (BHACRW) intended to improve FNN accuracy by finding optimal weights and biases. Four benchmark functions are first used to evaluate BHACRW's performance on numerical optimization problems. Later, the classification performance of the suggested models, using BHA and BHACRW for training the FNN, is evaluated on seven benchmark datasets: iris, wine, blood, liver disorders, seeds, Statlog (Heart), and balance scale. Experimental results demonstrate that BHACRW performs better in terms of mean squared error (MSE) and accuracy of training the FNN than standard BHA and eight well-known metaheuristic algorithms: whale optimization algorithm (WOA), biogeography-based optimizer (BBO), gravitational search algorithm (GSA), genetic algorithm (GA), cuckoo search (CS), multi-verse optimizer (MVO), symbiotic organisms search (SOS), and particle swarm optimization (PSO). Moreover, we examined the classification performance of the suggested approach on angiotensin-converting enzyme 2 (ACE2) gene expression as a coronavirus receptor, which is overexpressed in human rhinovirus-infected nasal tissue. Results demonstrate that BHACRW-FNN achieves the highest accuracy on this dataset compared to the other classifiers.
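A common way to draw the Levy-flight steps mentioned here is Mantegna's algorithm; the sketch below shows that standard construction, not necessarily the paper's exact operator:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-flight step length via Mantegna's algorithm:
    step = u / |v|**(1/beta), with u ~ N(0, sigma_u**2) and v ~ N(0, 1)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(1)
steps = [levy_step() for _ in range(1000)]
# Heavy-tailed behavior: most steps are small, but occasional steps are
# very large, which helps an optimizer escape local optima.
```

This mix of many short moves and rare long jumps is exactly why Levy flights are grafted onto algorithms like BHA.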
13
Resilient back-propagation approach in small-world feed-forward neural network topology based on Newman–Watts algorithm. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05161-6]
14
Wang C, Fang H, He S. Adaptive optimal controller design for a class of LDI-based neural network systems with input time-delays. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.084]
15
BiPhase adaptive learning-based neural network model for cloud datacenter workload forecasting. Soft Comput 2020. [DOI: 10.1007/s00500-020-04808-9]