1. Sun L, Liang J, Liu S, Yong H, Zhang L. Perception-Distortion Balanced Super-Resolution: A Multi-Objective Optimization Perspective. IEEE Transactions on Image Processing 2024; 33:4444-4458. [PMID: 39088501] [DOI: 10.1109/tip.2024.3434426]
Abstract
High perceptual quality and low distortion are two important goals in image restoration tasks such as super-resolution (SR). Most existing SR methods aim to achieve these goals by minimizing the corresponding yet conflicting losses, such as the l1 loss and the adversarial loss. Unfortunately, commonly used gradient-based optimizers, such as Adam, struggle to balance these objectives because the contradictory losses pull the gradient descent directions in opposite ways. In this paper, we formulate the perception-distortion trade-off in SR as a multi-objective optimization problem and develop a new optimizer by integrating the gradient-free evolutionary algorithm (EA) with gradient-based Adam, where EA and Adam focus on the divergence and convergence of the optimization directions, respectively. As a result, a population of optimal models with different perception-distortion preferences is obtained. We then design a fusion network to merge these models into a single stronger one for an effective perception-distortion trade-off. Experiments demonstrate that, with the same backbone network, the perception-distortion balanced SR model trained by our method can achieve better perceptual quality than its competitors while attaining better reconstruction fidelity. Code and models can be found at https://github.com/csslc/EA-Adam.
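The abstract describes the optimizer only at a high level. The following is a minimal, hypothetical sketch (not the authors' EA-Adam code; see their repository for that) of the general idea: keep a population of models with different perception-distortion weightings, converge each with Adam, and periodically apply an evolutionary crossover/mutation step for diversity. All model and loss names are illustrative.

```python
# Toy sketch of alternating gradient (Adam) and evolutionary steps over a
# population of models with different perception-distortion preferences.
# Hypothetical losses and model; NOT the authors' EA-Adam implementation.
import copy
import random
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))

def distortion_loss(pred, target):          # e.g. an l1 fidelity term
    return (pred - target).abs().mean()

def perceptual_loss(pred, target):          # stand-in for an adversarial/perceptual term
    return ((pred.std(dim=1) - target.std(dim=1)) ** 2).mean()

population = [make_model() for _ in range(4)]
# Each member prefers a different trade-off weight between the two objectives.
weights = [i / (len(population) - 1) for i in range(len(population))]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in population]

x = torch.randn(64, 16)
y = torch.randn(64, 16)

for step in range(50):
    # Gradient phase (convergence): Adam on a member-specific scalarisation.
    for model, w, opt in zip(population, weights, optimizers):
        opt.zero_grad()
        pred = model(x)
        loss = (1 - w) * distortion_loss(pred, y) + w * perceptual_loss(pred, y)
        loss.backward()
        opt.step()

    # Evolutionary phase (divergence): occasionally blend two parents' weights.
    if step % 10 == 9:
        a, b = random.sample(range(len(population)), 2)
        child = copy.deepcopy(population[a])
        with torch.no_grad():
            for pc, pb in zip(child.parameters(), population[b].parameters()):
                pc.mul_(0.5).add_(0.5 * pb)           # crossover by averaging
                pc.add_(0.01 * torch.randn_like(pc))  # small mutation
        population[a] = child
        optimizers[a] = torch.optim.Adam(child.parameters(), lr=1e-3)
```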
2. Zhou Y, Hu B, Yuan X, Huang K, Yi Z, Yen GG. Multiobjective Evolutionary Generative Adversarial Network Compression for Image Translation. IEEE Transactions on Evolutionary Computation 2024; 28:798-809. [DOI: 10.1109/tevc.2023.3261135]
Affiliation(s)
- Yao Zhou
- College of Computer Science, Sichuan University, Chengdu, China
- Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Xianglei Yuan
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Kaide Huang
- College of Computer Science, Sichuan University, Chengdu, China
- Zhang Yi
- College of Computer Science, Sichuan University, Chengdu, China
- Gary G. Yen
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
3. Li N, Ma L, Yu G, Xue B, Zhang M, Jin Y. Survey on Evolutionary Deep Learning: Principles, Algorithms, Applications, and Open Issues. ACM Computing Surveys 2024; 56:1-34. [DOI: 10.1145/3603704]
Abstract
Over recent years, deep learning (DL) has developed rapidly in both industry and academia. However, finding the optimal hyperparameters of a DL model often requires high computational cost and considerable human expertise. To mitigate this issue, evolutionary computation (EC), as a powerful heuristic search approach, has shown significant merit in the automated design of DL models, so-called evolutionary deep learning (EDL). This article analyzes EDL from the perspective of automated machine learning (AutoML). Specifically, we first position EDL with respect to DL and EC and regard EDL as an optimization problem. Following the DL pipeline, we systematically introduce EDL methods ranging from data preparation and model generation to model deployment under a new taxonomy (i.e., what and how to evolve/optimize), and focus on how solution representation and search paradigms are designed to handle the optimization problem with EC. Finally, key applications, open issues, and potentially promising lines of future research are suggested. This survey reviews recent developments in EDL and offers insightful guidelines for its further development.
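As a rough orientation for readers unfamiliar with the EDL loop the survey taxonomizes, the sketch below shows the generic evolve-evaluate-select cycle over DL configurations. The search space, fitness stand-in, and operators are illustrative assumptions, not taken from the survey.

```python
# Minimal illustration of the generic EDL loop: encode candidate DL
# configurations, evaluate them, and evolve the population. The fitness
# function is a stand-in for "train the model and measure validation accuracy".
import random

SEARCH_SPACE = {
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
}

def random_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(cfg):
    child = dict(cfg)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(cfg):
    # Placeholder for an expensive train-and-validate run.
    return -abs(cfg["depth"] - 4) - abs(cfg["width"] - 128) / 64 - cfg["lr"]

population = [random_config() for _ in range(8)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]                      # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(4)]

print("best configuration:", max(population, key=fitness))
```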
Affiliation(s)
- Nan Li
- Northeastern University, China
- Guo Yu
- Nanjing Tech University, China
- Bing Xue
- Victoria University of Wellington, New Zealand
4. Li G, Yang P, Qian C, Hong R, Tang K. Stage-Wise Magnitude-Based Pruning for Recurrent Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:1666-1680. [PMID: 35759588] [DOI: 10.1109/tnnls.2022.3184730]
Abstract
Recurrent neural networks (RNNs) have shown powerful performance in tackling various natural language processing (NLP) tasks, resulting in numerous powerful models containing both RNN neurons and feedforward neurons. On the other hand, the deep structure of RNNs has heavily restricted their implementation on mobile devices, where quite a few applications involve NLP tasks. Magnitude-based pruning (MP) is a promising way to address this challenge. However, existing MP methods are mostly designed for feedforward neural networks that do not involve a recurrent structure, and thus have performed less satisfactorily when pruning models containing RNN layers. In this article, a novel stage-wise MP method is proposed that explicitly takes the recurrent structure of the RNN into account and can effectively prune feedforward layers and RNN layers simultaneously. The connections of the network are first grouped into three types according to how they intersect with recurrent neurons. Then, an optimization-based pruning method is applied to compress each group of connections, respectively. Empirical studies show that the proposed method performs significantly better than commonly used RNN pruning methods: up to 96.84% of connections are pruned with little or even no degradation of precision indicators on the testing datasets.
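A minimal sketch of the group-wise idea, under the assumption that an LSTM's parameters can be split into input-to-hidden, recurrent hidden-to-hidden, and feedforward groups and pruned by magnitude with separate sparsity targets per group; the paper's actual grouping and optimization-based pruning are more involved.

```python
# Hedged sketch: magnitude-based pruning applied group-wise to an RNN model.
# The grouping below only approximates the paper's three connection types;
# sparsity targets are illustrative.
import torch
import torch.nn as nn

class TinyTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 10)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)

def prune_by_magnitude(tensor, sparsity):
    """Zero out the smallest-magnitude fraction of entries in-place."""
    k = int(sparsity * tensor.numel())
    if k == 0:
        return
    threshold = tensor.abs().flatten().kthvalue(k).values
    tensor.masked_fill_(tensor.abs() <= threshold, 0.0)

model = TinyTagger()
group_sparsity = {"input": 0.6, "recurrent": 0.4, "feedforward": 0.7}

with torch.no_grad():
    for name, param in model.named_parameters():
        if "weight_ih" in name:        # input-to-hidden connections
            prune_by_magnitude(param, group_sparsity["input"])
        elif "weight_hh" in name:      # recurrent hidden-to-hidden connections
            prune_by_magnitude(param, group_sparsity["recurrent"])
        elif "head.weight" in name:    # feedforward classifier connections
            prune_by_magnitude(param, group_sparsity["feedforward"])

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.2%}")
```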
5. Jiang P, Xue Y, Neri F. Convolutional neural network pruning based on multi-objective feature map selection for image classification. Appl Soft Comput 2023. [DOI: 10.1016/j.asoc.2023.110229]
6. Zhang X, Xie W, Li Y, Lei J, Du Q. Filter Pruning via Learned Representation Median in the Frequency Domain. IEEE Transactions on Cybernetics 2023; 53:3165-3175. [PMID: 34797771] [DOI: 10.1109/tcyb.2021.3124284]
Abstract
In this article, we propose a novel filter pruning method for deep learning networks by calculating the learned representation median (RM) in frequency domain (LRMF). In contrast to the existing filter pruning methods that remove relatively unimportant filters in the spatial domain, our newly proposed approach emphasizes the removal of absolutely unimportant filters in the frequency domain. Through extensive experiments, we observed that the criterion for "relative unimportance" cannot be generalized well and that the discrete cosine transform (DCT) domain can eliminate redundancy and emphasize low-frequency representation, which is consistent with the human visual system. Based on these important observations, our LRMF calculates the learned RM in the frequency domain and removes its corresponding filter, since it is absolutely unimportant at each layer. Thanks to this, the time-consuming fine-tuning process is not required in LRMF. The results show that LRMF outperforms state-of-the-art pruning methods. For example, with ResNet110 on CIFAR-10, it achieves a 52.3% FLOPs reduction with an improvement of 0.04% in Top-1 accuracy. With VGG16 on CIFAR-100, it reduces FLOPs by 35.9% while increasing accuracy by 0.5%. On ImageNet, ResNet18 and ResNet50 are accelerated by 53.3% and 52.7% with only 1.76% and 0.8% accuracy loss, respectively. The code is based on PyTorch and is available at https://github.com/zhangxin-xd/LRMF.
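A small illustration of frequency-domain filter selection in the spirit of the abstract, assuming the "learned representation median" can be approximated by the element-wise median of DCT-transformed filters; the criterion and layer surgery below are simplified stand-ins, not the released LRMF code.

```python
# Hedged sketch: DCT-transform each conv filter, take the element-wise median
# representation across filters, and flag the filters closest to that median
# as redundant. Illustration only, not the authors' exact criterion.
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import dctn

def filters_closest_to_median(conv: nn.Conv2d, num_prune: int):
    w = conv.weight.detach().cpu().numpy()            # (out_ch, in_ch, k, k)
    freq = np.stack([dctn(f, norm="ortho") for f in w])
    flat = freq.reshape(freq.shape[0], -1)
    median = np.median(flat, axis=0)                   # element-wise median representation
    dist = np.linalg.norm(flat - median, axis=1)
    return np.argsort(dist)[:num_prune]                # most "median-like" filters

conv = nn.Conv2d(16, 32, kernel_size=3)
prune_idx = filters_closest_to_median(conv, num_prune=8)
keep_idx = [i for i in range(conv.out_channels) if i not in set(prune_idx.tolist())]

# Build a slimmer layer that keeps only the surviving filters.
slim = nn.Conv2d(16, len(keep_idx), kernel_size=3)
with torch.no_grad():
    slim.weight.copy_(conv.weight[keep_idx])
    slim.bias.copy_(conv.bias[keep_idx])
print("pruned filter indices:", sorted(prune_idx.tolist()))
```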
7. Shang R, Li W, Zhu S, Jiao L, Li Y. Multi-teacher knowledge distillation based on joint Guidance of Probe and Adaptive Corrector. Neural Netw 2023; 164:345-356. [PMID: 37163850] [DOI: 10.1016/j.neunet.2023.04.015]
Abstract
Knowledge distillation (KD) has been widely used in model compression. However, in current multi-teacher KD algorithms, the student can only passively acquire the knowledge of the teachers' middle layers in a single form, and all teachers apply an identical guiding scheme to the student. To solve these problems, this paper proposes a multi-teacher KD method based on the joint Guidance of a Probe and an Adaptive Corrector (GPAC). First, GPAC proposes a teacher selection strategy guided by a Linear Classifier Probe (LCP), which allows the student to select better teachers at the middle layer; teachers are evaluated using the classification accuracy measured by the LCP. Then, GPAC designs an adaptive multi-teacher instruction mechanism that uses instructional weights to emphasize the student's predicted direction and reduce the difficulty of learning from the teachers. At the same time, every teacher can formulate its guiding scheme according to the Kullback-Leibler divergence loss between the student and itself. Finally, GPAC develops a multi-level mechanism for adjusting the spatial attention loss. This mechanism uses a piecewise function that varies with the number of epochs and classifies the student's learning of spatial attention into three levels, so that the teachers' spatial attention can be used efficiently. GPAC and current state-of-the-art distillation methods are tested on the CIFAR-10 and CIFAR-100 datasets. The experimental results demonstrate that the proposed method obtains higher classification accuracy.
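A hedged sketch of an adaptive multi-teacher distillation loss in which each teacher's weight depends on the KL divergence between its predictions and the student's; the weighting rule and temperature are illustrative assumptions, not the exact GPAC formulation.

```python
# Hedged sketch: each teacher's contribution is weighted by how close its
# predictions already are to the student's (via KL divergence), loosely echoing
# the adaptive-corrector idea. Scheme and temperature are illustrative.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=4.0, alpha=0.7):
    ce = F.cross_entropy(student_logits, labels)
    log_p_s = F.log_softmax(student_logits / temperature, dim=1)

    kls = []
    for t_logits in teacher_logits_list:
        p_t = F.softmax(t_logits / temperature, dim=1)
        kls.append(F.kl_div(log_p_s, p_t, reduction="batchmean"))
    kls = torch.stack(kls)

    # Teachers whose outputs are closer to the student get larger weights,
    # making their guidance easier to follow (one plausible reading).
    weights = F.softmax(-kls.detach(), dim=0)
    kd = (weights * kls).sum() * temperature ** 2
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random "teacher" and "student" outputs.
student = torch.randn(8, 10, requires_grad=True)
teachers = [torch.randn(8, 10) for _ in range(3)]
labels = torch.randint(0, 10, (8,))
loss = multi_teacher_kd_loss(student, teachers, labels)
loss.backward()
print(float(loss))
```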
Affiliation(s)
- Ronghua Shang
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China
- Wenzheng Li
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Guangzhou Institute of Technology, Xidian University, Guangzhou, Guangdong, China
- Songling Zhu
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China
- Licheng Jiao
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China
- Yangyang Li
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China
8. Yu G, Jin Y, Olhofer M, Liu Q, Du W. Solution Set Augmentation for Knee Identification in Multiobjective Decision Analysis. IEEE Transactions on Cybernetics 2023; 53:2480-2493. [PMID: 34767520] [DOI: 10.1109/tcyb.2021.3125071]
Abstract
In multiobjective decision making, most knee identification algorithms implicitly assume that the given solutions are well distributed and can provide sufficient information for identifying knee solutions. However, this assumption may fail to hold when the number of objectives is large or when the shape of the Pareto front is complex. To address the above issues, we propose a knee-oriented solution augmentation (KSA) framework that converts the Pareto front into a multimodal auxiliary function whose basins correspond to the knee regions of the Pareto front. The auxiliary function is then approximated using a surrogate and its basins are identified by a peak detection method. Additional solutions are then generated in the detected basins in the objective space and mapped to the decision space with the help of an inverse model. These solutions are evaluated by the original objective functions and added to the given solution set. To assess the quality of the augmented solution set, a measurement is proposed for the verification of knee solutions when the true Pareto front is unknown. The effectiveness of KSA is verified on widely used benchmark problems and successfully applied to a hybrid electric vehicle controller design problem.
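For readers unfamiliar with knee solutions, the snippet below shows a common baseline heuristic (maximum perpendicular distance from the line joining the two extreme solutions of a 2-objective front); KSA itself goes further with an auxiliary function, surrogate, and inverse model, none of which are reproduced here.

```python
# Background sketch only: flag the points of a 2-objective Pareto front that lie
# farthest from the line joining its two extreme solutions.
import numpy as np

def knee_indices(front: np.ndarray, top_k: int = 1):
    """front: (n, 2) array of minimisation objectives, assumed non-dominated."""
    order = np.argsort(front[:, 0])
    pts = front[order]
    a, b = pts[0], pts[-1]                      # extreme solutions
    ab = b - a
    ab = ab / np.linalg.norm(ab)
    rel = pts - a
    # Perpendicular distance of every point to the extreme line.
    dist = np.abs(rel[:, 0] * ab[1] - rel[:, 1] * ab[0])
    return order[np.argsort(-dist)[:top_k]]

# Convex front with a pronounced knee near (0.3, 0.3).
f1 = np.linspace(0.01, 1.0, 50)
front = np.stack([f1, 0.09 / f1], axis=1)
print("knee solution(s):", front[knee_indices(front, top_k=2)])
```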
9. Genetic algorithm based approach to compress and accelerate the trained Convolution Neural Network model. Int J Mach Learn Cyb 2023. [DOI: 10.1007/s13042-022-01768-4]
10. Wu T, Song C, Zeng P. Model pruning based on filter similarity for edge device deployment. Front Neurorobot 2023; 17:1132679. [PMID: 36937554] [PMCID: PMC10017522] [DOI: 10.3389/fnbot.2023.1132679]
Abstract
Filter pruning is widely used for inference acceleration and compatibility with off-the-shelf hardware devices. Some filter pruning methods propose various criteria to approximate the importance of filters and then sort the filters globally or locally to prune the redundant parameters. However, current criterion-based methods have two problems: (1) parameters with smaller criterion values that extract edge features are easily ignored, and (2) there is a strong correlation between different criteria, resulting in similar pruning structures. In this article, we propose a simple but effective pruning method based on filter similarity, which evaluates the similarity between filters instead of the importance of a single filter. The proposed method first calculates the pairwise similarity of the filters in one convolutional layer and then obtains the similarity distribution. Finally, the filters with high similarity to others are deleted or set to zero. In addition, the proposed algorithm does not need a per-layer pruning rate; it only needs the desired FLOPs or parameter reduction to obtain the final compressed model. We also provide iterative pruning strategies for hard pruning and soft pruning to satisfy the trade-off requirements between accuracy and memory in different scenarios. Extensive experiments on representative benchmark datasets across different network architectures demonstrate the effectiveness of the proposed method. For example, on CIFAR-10, the proposed algorithm achieves a 61.1% FLOPs reduction by removing 58.3% of the parameters, with no loss in Top-1 accuracy on ResNet-56, and reduces FLOPs by 53.05% on ResNet-50 with only 0.29% Top-1 accuracy degradation on ILSVRC-2012.
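A minimal sketch of similarity-based filter selection, assuming a greedy rule that drops any filter whose cosine similarity to an already kept filter exceeds a fixed threshold; the paper instead works from the similarity distribution and a FLOPs/parameter budget, so treat this as an illustration only.

```python
# Hedged sketch: pairwise cosine similarity between flattened filters of one
# conv layer; filters too similar to an already-kept filter are soft-pruned.
import torch
import torch.nn as nn
import torch.nn.functional as F

def similar_filter_indices(conv: nn.Conv2d, threshold: float = 0.6):
    w = conv.weight.detach().flatten(1)              # (out_ch, in_ch*k*k)
    w = F.normalize(w, dim=1)
    sim = w @ w.t()                                  # pairwise cosine similarity
    kept, to_prune = [], []
    for i in range(sim.size(0)):
        if kept and sim[i, kept].max() > threshold:  # too close to a kept filter
            to_prune.append(i)
        else:
            kept.append(i)
    return kept, to_prune

conv = nn.Conv2d(8, 32, kernel_size=3)
kept, pruned = similar_filter_indices(conv, threshold=0.6)
print(f"keeping {len(kept)} filters, soft-pruning {len(pruned)}")

# "Soft" pruning: zero the redundant filters so they can recover during training.
if pruned:
    with torch.no_grad():
        conv.weight[pruned] = 0.0
        conv.bias[pruned] = 0.0
```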
Affiliation(s)
- Tingting Wu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Chunhe Song (corresponding author)
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Peng Zeng
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
11. Zhou Y, Yuan X, Zhang X, Liu W, Wu Y, Yen GG, Hu B, Yi Z. Evolutionary Neural Architecture Search for Automatic Esophageal Lesion Identification and Segmentation. IEEE Transactions on Artificial Intelligence 2022; 3:436-450. [DOI: 10.1109/tai.2021.3134600]
Affiliation(s)
- Yao Zhou
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
- Xianglei Yuan
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Xiaozhi Zhang
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
- Wei Liu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Yu Wu
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
- Gary G. Yen
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
- Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
- Zhang Yi
- Center of Intelligent Medicine, College of Computer Science, Sichuan University, Chengdu, China
12. Louati H, Bechikh S, Louati A, Aldaej A, Said LB. Joint design and compression of convolutional neural networks as a Bi-level optimization problem. Neural Comput Appl 2022; 34:15007-15029. [PMID: 35599971] [PMCID: PMC9112272] [DOI: 10.1007/s00521-022-07331-0]
Abstract
Over the last decade, deep neural networks have shown great success in the fields of machine learning and computer vision. Currently, the convolutional neural network (CNN) is one of the most successful networks, having been applied in a wide variety of application domains, including pattern recognition, medical diagnosis and signal processing. Despite CNNs' impressive performance, their architectural design remains a significant challenge for researchers and practitioners, and selecting hyperparameters is extremely important because the search space grows exponentially as the number of layers increases. In fact, all existing classical and evolutionary pruning methods take as input an already pre-trained or designed architecture; none of them take pruning into account during the design process. However, to evaluate the quality and possible compactness of any generated architecture, filter pruning should be applied before the architecture is evaluated on the data set to compute the classification error. For instance, a medium-quality architecture in terms of classification could become a very light and accurate architecture after pruning, and vice versa. Many cases are possible, and the number of possibilities is huge. This motivated us to frame the whole process as a bi-level optimization problem in which (1) architecture generation is done at the upper level (with minimum NB and NNB) while (2) its filter pruning optimization is done at the lower level. Motivated by the success of evolutionary algorithms (EAs) in bi-level optimization, we use the recently proposed co-evolutionary migration-based algorithm (CEMBA) as a search engine to address our bi-level architectural optimization problem. The performance of our technique, called Bi-CNN-D-C (Bi-level convolutional neural network design and compression), is evaluated using the widely used benchmark data sets for image classification CIFAR-10, CIFAR-100 and ImageNet. Our proposed approach is validated by means of a set of comparative experiments with respect to relevant state-of-the-art architectures.
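A toy sketch of the bi-level structure only: an outer loop proposes architectures and an inner loop tunes per-layer pruning rates for each candidate before scoring it. Training and evaluation are replaced by a synthetic score, and simple hill-climbing stands in for CEMBA.

```python
# Hedged sketch of bi-level design-and-compression: upper level searches over
# architectures, lower level optimises a pruning configuration per candidate.
import random

def propose_architecture():
    return {"blocks": random.randint(2, 6),
            "filters": random.choice([32, 64, 96, 128])}

def mutate_pruning(rates):
    i = random.randrange(len(rates))
    rates = list(rates)
    rates[i] = min(0.9, max(0.0, rates[i] + random.uniform(-0.1, 0.1)))
    return rates

def score(arch, prune_rates):
    # Stand-in for "prune, fine-tune briefly, measure accuracy and model size".
    capacity = arch["blocks"] * arch["filters"] * (1 - sum(prune_rates) / len(prune_rates))
    accuracy_proxy = -abs(capacity - 200) / 200
    compactness = -capacity / 1000
    return accuracy_proxy + compactness

def lower_level(arch, iters=30):
    rates = [0.5] * arch["blocks"]
    best = score(arch, rates)
    for _ in range(iters):
        cand = mutate_pruning(rates)
        s = score(arch, cand)
        if s > best:
            rates, best = cand, s
    return rates, best

best_arch, best_rates, best_score = None, None, float("-inf")
for _ in range(20):                       # upper-level search over architectures
    arch = propose_architecture()
    rates, s = lower_level(arch)          # lower-level pruning optimisation
    if s > best_score:
        best_arch, best_rates, best_score = arch, rates, s

print(best_arch, [round(r, 2) for r in best_rates], round(best_score, 3))
```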
Affiliation(s)
- Hassen Louati
- Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- SMART Lab, University of Tunis, ISG, Tunis, Tunisia
- Slim Bechikh
- SMART Lab, University of Tunis, ISG, Tunis, Tunisia
- Ali Louati
- Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- SMART Lab, University of Tunis, ISG, Tunis, Tunisia
- Abdulaziz Aldaej
- Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
13. The Possibility of Combining and Implementing Deep Neural Network Compression Methods. Axioms 2022. [DOI: 10.3390/axioms11050229]
Abstract
In this paper, the possibility of combining deep neural network (DNN) model compression methods to achieve better compression results was considered. To compare the advantages and disadvantages of each method, all methods were applied to a ResNet18 model pretrained on the NCT-CRC-HE-100K dataset, with CRC-VAL-HE-7K used as the validation dataset. In the proposed method, quantization, pruning, weight clustering, quantization-aware training (QAT), cluster-preserving QAT (hereinafter PCQAT), and distillation were performed to compress ResNet18. The final evaluation of the obtained models was carried out on a Raspberry Pi 4 device using the validation dataset. The greatest on-disk compression was achieved by the PCQAT method, which reduced the size of the initial model by as much as 45 times, whereas the greatest model acceleration was achieved via distillation to a MobileNetV2 model. All methods compressed the initial model with only a slight loss in accuracy, or, in the case of QAT and weight clustering, even an increase in accuracy. INT8 quantization and knowledge distillation also led to a significant decrease in model execution time.
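The paper's clustering and PCQAT steps rely on TensorFlow Model Optimization tooling; the sketch below is a simplified PyTorch analogue of the chaining idea (distill, magnitude-prune, then dynamically quantize), with an untrained toy teacher standing in for a real pretrained model.

```python
# Hedged sketch of chaining compression steps in PyTorch: distillation,
# magnitude pruning, and post-training dynamic INT8 quantization.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(256, 64)
labels = torch.randint(0, 10, (256,))

# 1) Knowledge distillation: student mimics the (here untrained) teacher.
for _ in range(20):
    opt.zero_grad()
    with torch.no_grad():
        t = F.softmax(teacher(x) / 2.0, dim=1)
    s = F.log_softmax(student(x) / 2.0, dim=1)
    loss = F.kl_div(s, t, reduction="batchmean") + F.cross_entropy(student(x), labels)
    loss.backward()
    opt.step()

# 2) Magnitude pruning on every Linear layer, then make the masks permanent.
for module in student.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# 3) Post-training dynamic INT8 quantization for deployment.
quantized = torch.quantization.quantize_dynamic(student, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```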
14. Ray T, Singh HK, Rahi KH, Rodemann T, Olhofer M. Towards identification of solutions of interest for multi-objective problems considering both objective and variable space information. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108505]
15. Gaiewski MJ, Drewell RA, Dresch JM. Fitting thermodynamic-based models: Incorporating parameter sensitivity improves the performance of an evolutionary algorithm. Math Biosci 2021; 342:108716. [PMID: 34687735] [DOI: 10.1016/j.mbs.2021.108716]
Abstract
A detailed comprehension of transcriptional regulation is critical to understanding the genetic control of development and disease across many different organisms. To more fully investigate the complex molecular interactions controlling the precise expression of genes, many groups have constructed mathematical models to complement their experimental approaches. A critical step in such studies is choosing the most appropriate parameter estimation algorithm to enable detailed analysis of the parameters that contribute to the models. In this study, we develop a novel set of evolutionary algorithms that use a quasi-random Sobol set to construct the initial population and incorporate parameter sensitivities into the adaptation of mutation rates, using local, global, and hybrid strategies. Comparison of the performance of these new algorithms to a number of current state-of-the-art global parameter estimation algorithms on a range of continuous test functions, as well as synthetic biological data representing models of gene regulatory systems, reveals improved performance of the new algorithms in terms of runtime, error and reproducibility. In addition, by analyzing the ability of these algorithms to fit datasets of varying quality, we provide the experimentalist with a guide to how the algorithms perform across a range of noisy data. These results demonstrate the improved performance of the new set of parameter estimation algorithms and facilitate meaningful integration of model parameters and predictions in our understanding of the molecular mechanisms of gene regulation.
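A compact sketch of the two ingredients highlighted above, under illustrative assumptions: a scrambled Sobol set initializes the population, and mutation step sizes shrink on parameters with larger finite-difference sensitivity around the current best. The objective is a stand-in for the gene-regulatory model fit.

```python
# Hedged sketch: Sobol-initialized EA with sensitivity-scaled mutation steps.
import numpy as np
from scipy.stats import qmc

def objective(x):                         # stand-in for model-vs-data error
    return np.sum((x - 0.3) ** 2) + 0.5 * np.abs(x[0] - 0.7)

dim, lo, hi = 5, 0.0, 1.0
sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
population = qmc.scale(sampler.random_base2(m=4), [lo] * dim, [hi] * dim)  # 16 points

def sensitivities(x, eps=1e-3):
    base = objective(x)
    grads = [abs(objective(x + eps * np.eye(dim)[i]) - base) / eps for i in range(dim)]
    return np.array(grads) + 1e-9

rng = np.random.default_rng(0)
for gen in range(40):
    fitness = np.array([objective(ind) for ind in population])
    best = population[fitness.argmin()]
    sens = sensitivities(best)
    step = 0.1 / (1.0 + sens / sens.mean())        # shrink steps on sensitive axes
    elite = population[np.argsort(fitness)[:8]]
    children = elite + rng.normal(0.0, step, size=elite.shape)
    population = np.clip(np.vstack([elite, children]), lo, hi)

fitness = np.array([objective(ind) for ind in population])
best = population[fitness.argmin()]
print("best error:", objective(best), "at", np.round(best, 3))
```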
Affiliation(s)
- Michael J Gaiewski
- Department of Mathematics and Computer Science, Clark University, Worcester, MA, USA; Department of Mathematics, University of Connecticut, Storrs, CT, USA.
16. Recent Meta-Heuristic Algorithms with a Novel Premature Convergence Method for Determining the Parameters of PV Cells and Modules. Electronics 2021. [DOI: 10.3390/electronics10151846]
Abstract
Currently, the incorporation of solar panels into many applications is a booming trend, which necessitates accurate simulation and analysis of their performance under different operating conditions for further decision making. In this paper, various optimization algorithms are addressed comprehensively through a comparative study, with further discussion of extracting the unknown parameters. Efficient use of the iterations within the optimization process may help meta-heuristic algorithms accelerate convergence and attain better accuracy in the final outcome. To this end, a method named the premature convergence method (PCM) is proposed to boost the convergence of meta-heuristic algorithms with a significant improvement in their accuracy. PCM is based on updating the current position around the best-so-far solution with two step sizes: the first is based on the distance between two individuals selected randomly from the population, to encourage exploration, and the second is based on the distance between the current position and the best-so-far solution, to promote exploitation. In addition, PCM uses a weight variable, also known as a controlling factor, as a trade-off between the two step sizes. The proposed method is integrated with three well-known meta-heuristic algorithms to observe its efficacy in efficiently and effectively estimating the unknown parameters of the single diode model (SDM). In addition, an RTC France Si solar cell and three PV modules, namely Photowatt-PWP201, Ultra 85-P, and STM6-40/36, are investigated with the improved algorithms and selected standard approaches to compare their performance in estimating the unknown parameters of those different types of PV cells and modules. The experimental results point out the efficacy of PCM in accelerating convergence speed with improved final outcomes.
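One plausible reading of the PCM update, sketched on a toy objective that stands in for the single-diode-model fitting error; the exact formulation and the controlling-factor schedule in the paper may differ.

```python
# Hedged sketch of a PCM-style update: move around the best-so-far solution
# with one step based on two random individuals (exploration) and one based on
# the distance to the best solution (exploitation), blended by a weight w.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                 # placeholder for the SDM current-error RMSE
    return np.sum((x - np.array([0.7, 0.1, 0.5, 0.9, 0.3])) ** 2)

pop = rng.uniform(0.0, 1.0, size=(20, 5))
fitness = np.apply_along_axis(objective, 1, pop)

for it in range(200):
    best = pop[fitness.argmin()]
    w = 1.0 - it / 200               # favour exploration early, exploitation late
    for i in range(len(pop)):
        r1, r2 = rng.choice(len(pop), size=2, replace=False)
        step_explore = pop[r1] - pop[r2]
        step_exploit = best - pop[i]
        candidate = np.clip(best + w * step_explore + (1 - w) * step_exploit, 0.0, 1.0)
        f = objective(candidate)
        if f < fitness[i]:           # greedy replacement
            pop[i], fitness[i] = candidate, f

print("best error:", fitness.min().round(6))
```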