1. Huang J, Chen C, Vong CM, Cheung YM. Broad Multitask Learning System With Group Sparse Regularization. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:8265-8278. [PMID: 38949943] [DOI: 10.1109/tnnls.2024.3416191]
Abstract
The broad learning system (BLS), featuring lightweight training, incremental extension, and strong generalization, has been applied successfully in many areas. Despite these advantages, BLS struggles in multitask learning (MTL) scenarios: its ability to unravel multiple complex tasks simultaneously is limited, and existing BLS models cannot adequately capture and leverage the essential information shared across tasks, which reduces their effectiveness and efficiency in MTL settings. To address these limitations, we propose an MTL framework designed explicitly for BLS, named the broad multitask learning system with related-task group sparse regularization (BMtLS-RG). This framework combines a task-related BLS learning mechanism with a group sparse optimization strategy, significantly boosting the generalization ability of BLS in MTL environments. The task-related learning component harnesses task correlations to enable shared learning and efficient parameter optimization, while the group sparse optimization approach suppresses the effects of irrelevant or noisy data, enhancing the robustness and stability of BLS in complex learning scenarios. To address the varied requirements of MTL problems, we present two additional variants: BMtLS-RGf, which shares the parameters of the feature-mapped nodes across tasks, and BMtLS-RGfe, which further adds an enhanced-node layer on top of the shared feature-mapping structure. These adaptations provide solutions tailored to the diverse landscape of MTL problems. We compared BMtLS-RG with state-of-the-art (SOTA) MTL and BLS algorithms in comprehensive experiments on multiple practical MTL and UCI datasets. BMtLS-RG outperformed the SOTA methods in 97.81% of classification tasks and achieved the best performance in 96.00% of regression tasks, demonstrating superior accuracy and robustness. Furthermore, BMtLS-RG exhibited satisfactory training efficiency, running 8.04-42.85 times faster than existing MTL algorithms.
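For readers unfamiliar with group sparse regularization, a standard l2,1-regularized multitask objective of the general kind described here can be sketched as follows; the notation (H_t for the mapped-feature/enhancement outputs of task t, W partitioned into G groups) is ours, and this is a generic illustration rather than the exact BMtLS-RG objective.

```latex
% Generic group-sparse multitask objective (illustration only, not the
% exact BMtLS-RG formulation):
\min_{W_1,\dots,W_T}\;
  \sum_{t=1}^{T} \bigl\| H_t W_t - Y_t \bigr\|_F^{2}
  \;+\; \lambda \sum_{g=1}^{G} \bigl\| W^{(g)} \bigr\|_{2}
% The second term is the l_{2,1} (group-lasso) penalty over weight groups
% W^{(g)}: it drives entire groups to zero jointly, which is how irrelevant
% or noisy feature groups are suppressed across all tasks.
```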
2. Dong J, Wang Y, Xie X, Lai J, Ong YS. Generalizable and Discriminative Representations for Adversarially Robust Few-Shot Learning. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:5480-5493. [PMID: 38536695] [DOI: 10.1109/tnnls.2024.3379172]
Abstract
Few-shot image classification (FSIC) is beneficial for a variety of real-world scenarios, aiming to construct a recognition system with limited training data. In this article, we extend the original FSIC task by incorporating defense against malicious adversarial examples. This can be an arduous challenge because numerous deep learning-based approaches remain susceptible to adversarial examples, even when trained with ample amounts of data. Previous studies on this problem have predominantly concentrated on the meta-learning framework, which involves sampling numerous few-shot tasks during the training stage. In contrast, we propose a straightforward but effective baseline via learning robust and discriminative representations without tedious meta-task sampling, which can further be generalized to unforeseen adversarial FSIC tasks. Specifically, we introduce an adversarial-aware (AA) mechanism that exploits feature-level distinctions between the legitimate and the adversarial domains to provide supplementary supervision. Moreover, we design a novel adversarial reweighting training strategy to ameliorate the imbalance among adversarial examples. To further enhance the adversarial robustness without compromising discriminative features, we propose the cyclic feature purifier during the postprocessing projection, which can reduce the interference of unforeseen adversarial examples. Furthermore, our method can obtain robust feature embeddings that maintain superior transferability, even when facing cross-domain adversarial examples. Extensive experiments and systematic analyses demonstrate that our method achieves state-of-the-art robustness as well as natural performance among adversarially robust FSIC algorithms on three standard benchmarks by a substantial margin.
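For background on the threat model discussed above (and not as a description of the authors' adversarial-aware mechanism), a minimal FGSM-style adversarial example generator in PyTorch could look like the sketch below; the model, labels, and epsilon are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """One-step FGSM attack (generic sketch, not the paper's method):
    perturb x in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()   # signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()     # stay in the valid image range
```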
3. Boudardara F, Boussif A, Meyer PJ, Ghazel M. INNAbstract: An INN-Based Abstraction Method for Large-Scale Neural Network Verification. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:18455-18469. [PMID: 37792651] [DOI: 10.1109/tnnls.2023.3316551]
Abstract
Neural networks (NNs) have witnessed widespread deployment across various domains, including some safety-critical applications. Accordingly, the demand for means of verifying such artificial intelligence techniques is increasingly pressing. The development of evaluation approaches for NNs is currently a hot topic attracting considerable interest, and a number of verification methods have been proposed. Yet a challenging issue for NN verification remains scalability when networks of practical size have to be evaluated. This work presents INNAbstract, an abstraction method that reduces the size of NNs and thereby improves the scalability of NN verification and reachability analysis methods. This is achieved by merging neurons while ensuring that the resulting abstract model overapproximates the original one. INNAbstract supports networks with a wide range of activation functions. In addition, we propose a heuristic for node selection that yields more precise abstract models, in the sense that their outputs are closer to those of the original network. The experimental results illustrate the efficiency of the proposed approach compared to existing abstraction techniques. Furthermore, they demonstrate that INNAbstract helps existing verification tools to be applied to larger networks while supporting various activation functions.
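To illustrate the over-approximation idea behind interval neural networks (INNs), where merged neurons carry interval-valued weights, here is a minimal interval-arithmetic forward pass through a single ReLU layer; this is a generic sketch of interval bound propagation, not the INNAbstract merging procedure itself.

```python
import numpy as np

def interval_relu_layer(x_lo, x_hi, W_lo, W_hi, b):
    """Propagate input bounds [x_lo, x_hi] through a ReLU layer whose weights
    are only known to lie in [W_lo, W_hi] (generic INN-style sketch).
    Shapes: x_*: (n_in,), W_*: (n_out, n_in), b: (n_out,)."""
    # The extreme values of each product w * x come from the four corners.
    corners = np.stack([W_lo * x_lo, W_lo * x_hi, W_hi * x_lo, W_hi * x_hi])
    pre_lo = corners.min(axis=0).sum(axis=1) + b
    pre_hi = corners.max(axis=0).sum(axis=1) + b
    # ReLU is monotone, so the bounds pass through in order.
    return np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)
```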
4. Liang L, Ma H, Zhao L, Xie X, Hua C, Zhang M, Zhang Y. Vehicle Detection Algorithms for Autonomous Driving: A Review. Sensors (Basel, Switzerland) 2024; 24:3088. [PMID: 38793942] [PMCID: PMC11125132] [DOI: 10.3390/s24103088]
Abstract
Autonomous driving, as a pivotal technology in modern transportation, is progressively transforming the way people move. In this domain, vehicle detection is a significant research direction that lies at the intersection of multiple disciplines, including sensor technology and computer vision. In recent years, many excellent vehicle detection methods have been reported, but few studies have focused on summarizing and analyzing these algorithms. This work provides a comprehensive review of existing vehicle detection algorithms and discusses their practical applications in the field of autonomous driving. First, we provide a brief description of the tasks, evaluation metrics, and datasets for vehicle detection. Second, more than 200 classical and recent vehicle detection algorithms are summarized in detail, including those based on machine vision, LiDAR, millimeter-wave radar, and sensor fusion. Finally, the article discusses the strengths and limitations of different algorithms and sensors, and outlines future trends.
Affiliation(s)
- Liang Liang: College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
- Haihua Ma: College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; Key Laboratory of Grain Information Processing and Control of Ministry of Education, Henan University of Technology, Zhengzhou 450001, China
- Le Zhao: College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; Key Laboratory of Grain Information Processing and Control of Ministry of Education, Henan University of Technology, Zhengzhou 450001, China
- Xiaopeng Xie: College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
- Chengxin Hua: College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
- Miao Zhang: College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; Key Laboratory of Grain Information Processing and Control of Ministry of Education, Henan University of Technology, Zhengzhou 450001, China
- Yonghui Zhang: College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
5. Li S, Yang X, Lin X, Zhang Y, Wu J. Real-Time Vehicle Detection from UAV Aerial Images Based on Improved YOLOv5. Sensors (Basel, Switzerland) 2023; 23:5634. [PMID: 37420800] [DOI: 10.3390/s23125634]
Abstract
Aerial vehicle detection has significant applications in aerial surveillance and traffic control. Images captured by UAVs are characterized by many tiny objects and by vehicles obscuring each other, which significantly increases the detection challenge. Research on detecting vehicles in aerial images widely suffers from missed and false detections. We therefore customize a model based on YOLOv5 to make it more suitable for detecting vehicles in aerial images. First, we add an additional prediction head to detect smaller-scale objects. Furthermore, to keep the original features involved in the training process, we introduce a bidirectional feature pyramid network (BiFPN) to fuse feature information from various scales. Lastly, Soft-NMS (soft non-maximum suppression) is employed to filter prediction boxes, alleviating missed detections caused by closely spaced vehicles. Experimental results on the self-built dataset used in this research indicate that, compared with YOLOv5s, the mAP@0.5 and mAP@0.5:0.95 of YOLOv5-VTO increase by 3.7% and 4.7%, respectively, and accuracy and recall are also improved.
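One of the components named above, Gaussian Soft-NMS, can be sketched in a few lines; this is a generic reference implementation with illustrative default parameters, not the authors' exact code or settings.

```python
import numpy as np

def _iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS (generic sketch): instead of discarding boxes that
    overlap the current best box, decay their scores and keep iterating."""
    scores = np.asarray(scores, dtype=float).copy()
    remaining = list(range(len(scores)))
    keep = []
    while remaining:
        best = max(remaining, key=lambda i: scores[i])
        keep.append(best)
        remaining.remove(best)
        for i in remaining:
            scores[i] *= np.exp(-(_iou(boxes[best], boxes[i]) ** 2) / sigma)
        remaining = [i for i in remaining if scores[i] >= score_thresh]
    return keep
```

The Gaussian decay keeps heavily overlapping boxes in play with reduced scores, which is what helps with closely spaced vehicles compared to hard NMS.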
Affiliation(s)
- Shuaicai Li: College of Intelligent Equipment, Shandong University of Science and Technology, Taian 271019, China
- Xiaodong Yang: College of Intelligent Equipment, Shandong University of Science and Technology, Taian 271019, China
- Xiaoxia Lin: College of Intelligent Equipment, Shandong University of Science and Technology, Taian 271019, China
- Yanyi Zhang: College of Intelligent Equipment, Shandong University of Science and Technology, Taian 271019, China
- Jiahui Wu: College of Intelligent Equipment, Shandong University of Science and Technology, Taian 271019, China
6. Bouguettaya A, Zarzour H, Kechida A, Taberkit AM. A survey on deep learning-based identification of plant and crop diseases from UAV-based aerial images. Cluster Computing 2022; 26:1297-1317. [PMID: 35968221] [PMCID: PMC9362359] [DOI: 10.1007/s10586-022-03627-x]
Abstract
Agricultural crop productivity can be reduced by many factors, such as weeds, pests, and diseases. Traditional methods based on ground machinery, handheld devices, and farmers' visual inspection face many limitations in terms of accuracy and the time required to cover large fields. Precision agriculture based on deep learning algorithms and unmanned aerial vehicles (UAVs) currently provides an effective solution for agricultural applications, including plant disease identification and treatment. In the last few years, plant disease monitoring using UAV platforms has become one of the most important agricultural applications and has attracted increasing interest from researchers. Accurate detection and treatment of plant diseases at early stages is crucial to improving agricultural production. To this end, this review analyzes recent advances in the use of deep learning-based computer vision techniques and UAV technologies to identify and treat crop diseases.
Affiliation(s)
- Abdelmalek Bouguettaya: Research Centre in Industrial Technologies (CRTI), P.O. Box 64, Cheraga, 16014 Algiers, Algeria
- Hafed Zarzour: LIM Research, Department of Mathematics and Computer Science, Souk Ahras University, 41000 Souk Ahras, Algeria
- Ahmed Kechida: Research Centre in Industrial Technologies (CRTI), P.O. Box 64, Cheraga, 16014 Algiers, Algeria
- Amine Mohammed Taberkit: Research Centre in Industrial Technologies (CRTI), P.O. Box 64, Cheraga, 16014 Algiers, Algeria
7. Mutual Guidance Meets Supervised Contrastive Learning: Vehicle Detection in Remote Sensing Images. Remote Sensing 2022. [DOI: 10.3390/rs14153689]
Abstract
Vehicle detection is an important but challenging problem in Earth observation due to the intricately small sizes and varied appearances of the objects of interest. In this paper, we use these issues to our advantage by considering them the results of latent image augmentation. In particular, we propose using a supervised contrastive loss in combination with a mutual guidance matching process, which helps learn stronger object representations and tackles the misalignment between localization and classification in object detection. Extensive experiments are performed to understand the combination of the two strategies and to show their benefits for vehicle detection in aerial and satellite images, achieving performance on par with state-of-the-art methods designed for small and very small object detection. As the proposed method is domain-agnostic, it might also be used for visual representation learning in generic computer vision problems.
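For reference, a minimal supervised contrastive (SupCon-style) loss in PyTorch might look as follows; this is a generic sketch assuming per-sample embeddings and integer class labels, not the exact loss or the mutual guidance matching procedure used in the paper.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Generic SupCon-style loss: pull same-label embeddings together and
    push different-label embeddings apart (sketch, not the paper's code).
    z: (N, d) embeddings, labels: (N,) integer class labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                             # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-probability over each anchor's positives.
    loss = -(log_prob * pos_mask).sum(dim=1) / n_pos
    return loss[pos_mask.any(dim=1)].mean()
```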
8. Swarm Intelligence with Deep Transfer Learning Driven Aerial Image Classification Model on UAV Networks. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12136488]
Abstract
Nowadays, unmanned aerial vehicles (UAVs) have gradually attracted the attention of many academicians and researchers. UAVs have been found useful in a variety of applications, such as disaster management, intelligent transportation systems, wildlife monitoring, and surveillance. In UAV aerial image analysis, learning an effective image representation is central to scene classification. Earlier scene classification approaches depend on feature coding models with low-level handcrafted features or on unsupervised feature learning. The emergence of convolutional neural networks (CNNs) has made image classification techniques more effective. However, due to the limited resources on UAVs, it can be difficult to tune hyperparameters and to balance the trade-off between computational complexity and classification performance. This article focuses on the design of a swarm intelligence with deep transfer learning driven aerial image classification (SIDTLD-AIC) model for UAV networks. The presented SIDTLD-AIC model identifies and classifies aerial images into distinct classes. To accomplish this, it uses a RetinaNet-based feature extraction module whose hyperparameters are optimized by the salp swarm algorithm (SSA). In addition, a cascaded long short-term memory (CLSTM) model classifies the aerial images. Finally, the seeker optimization algorithm (SOA) is applied as a hyperparameter optimizer for the CLSTM model, thereby enhancing classification accuracy. To assess the performance of the SIDTLD-AIC model, a wide range of simulations were implemented and the outcomes investigated from many aspects. The comparative study showed that the SIDTLD-AIC model performs better than recent approaches.
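Since the abstract leans on the salp swarm algorithm (SSA) for hyperparameter search, a bare-bones SSA minimizer is sketched below; the objective function, bounds, and population settings are placeholders, and this is a generic textbook-style SSA rather than the exact optimizer used in SIDTLD-AIC.

```python
import numpy as np

def salp_swarm_minimize(objective, lb, ub, n_salps=20, n_iters=50, seed=0):
    """Bare-bones salp swarm algorithm (generic sketch). Minimizes
    `objective` over the box [lb, ub]; for hyperparameter search,
    `objective` would typically wrap a validation loss evaluated at the
    candidate hyperparameters."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    pop = rng.uniform(lb, ub, size=(n_salps, dim))
    fits = np.array([objective(p) for p in pop])
    best_idx = int(fits.argmin())
    best, best_f = pop[best_idx].copy(), float(fits[best_idx])
    for t in range(n_iters):
        c1 = 2.0 * np.exp(-(4.0 * (t + 1) / n_iters) ** 2)  # exploration decay
        for i in range(n_salps):
            if i == 0:
                # Leader: random step around the best solution found so far.
                step = c1 * ((ub - lb) * rng.random(dim) + lb)
                sign = np.where(rng.random(dim) < 0.5, -1.0, 1.0)
                pop[i] = best + sign * step
            else:
                # Followers: move halfway towards the salp in front.
                pop[i] = 0.5 * (pop[i] + pop[i - 1])
            pop[i] = np.clip(pop[i], lb, ub)
            f = objective(pop[i])
            if f < best_f:
                best, best_f = pop[i].copy(), f
    return best, best_f
```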
9. Konen K, Hecking T. Increased Robustness of Object Detection on Aerial Image Datasets using Simulated Imagery. International Journal of Semantic Computing 2022. [DOI: 10.1142/s1793351x22420016]
10. Mittal P, Sharma A, Singh R, Sangaiah AK. On the performance evaluation of object classification models in low altitude aerial data. The Journal of Supercomputing 2022; 78:14548-14570. [PMID: 35399758] [PMCID: PMC8982665] [DOI: 10.1007/s11227-022-04469-5]
Abstract
This paper compares the classification performance of machine learning classifiers with that of a handcrafted deep learning model and various pretrained deep networks. The study performs a comprehensive analysis of object classification techniques implemented on low-altitude UAV datasets using various machine and deep learning models. Object classification is performed with widely deployed machine learning classifiers, such as k-nearest neighbor, decision trees, naïve Bayes, and random forest, as well as with a handcrafted deep model based on convolutional layers and with pretrained deep models. The best result obtained using the random forest classifier on the UAV dataset is 90%. The handcrafted deep model's accuracy score suggests the efficacy of deep models over machine learning-based classifiers on low-altitude aerial images: this model attains 92.48% accuracy, a significant improvement over the machine learning-based classifiers. Thereafter, we analyze several pretrained deep learning models, such as VGG-D, InceptionV3, DenseNet, Inception-ResNetV4, and Xception. The experimental assessment demonstrates nearly 100% accuracy using pretrained VGG16- and VGG19-based deep networks. This paper provides a compilation of machine learning-based classifiers and pretrained deep learning models, together with a comprehensive classification report for the respective performance measures.
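As an illustration of the kind of baseline comparison described above (with placeholder data standing in for the UAV image features; the paper's dataset and exact settings are not reproduced here), a minimal scikit-learn sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix and labels standing in for encoded aerial
# image descriptors; the actual study uses low-altitude UAV imagery.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))
y = rng.integers(0, 4, size=500)

for name, clf in [
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
    ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```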
Affiliation(s)
- Raman Singh: Thapar Institute of Engineering and Technology, Patiala, India
- Arun Kumar Sangaiah: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India; Department of Industrial Engineering and Management, National Yunlin University of Science and Technology, Douliu, Taiwan
11. Detecting Moving Trucks on Roads Using Sentinel-2 Data. Remote Sensing 2022. [DOI: 10.3390/rs14071595]
Abstract
In most countries, freight is predominantly transported by road cargo trucks. We present a new satellite remote sensing method for detecting moving trucks on roads using Sentinel-2 data. The method exploits a temporal sensing offset of the Sentinel-2 multispectral instrument, which causes spatially and spectrally distorted signatures of moving objects. A random forest classifier was trained (overall accuracy: 84%) on visible and near-infrared spectra of 2500 globally labelled targets. Based on the classification, the target objects were extracted using a purpose-built recursive neighbourhood search, and their speed and heading were approximated. Detections were validated using 350 globally labelled target boxes (mean F1 score: 0.74). The lowest F1 score was achieved in Kenya (0.36) and the highest in Poland (0.88). Furthermore, when validated against 26 traffic count stations in Germany on a total of 390 dates, the truck detections correlate spatio-temporally with the station figures (Pearson r: 0.82, RMSE: 43.7), although absolute counts were underestimated on 81% of the dates. The detection performance may differ by season and road condition. Hence, the method is only suitable for approximating the relative abundance of truck traffic rather than providing accurate absolute counts. However, existing road cargo monitoring methods that rely on traffic count stations or very high resolution remote sensing data have limited global availability. By employing globally and freely available Sentinel-2 data, the proposed moving truck detection method could fill this gap, particularly where other information on road cargo traffic is sparse. It is inferior to station counts in accuracy and temporal detail, but superior in spatial coverage.
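Because the method infers motion from the small inter-band sensing delay of the Sentinel-2 instrument, the speed estimate essentially reduces to displacement divided by the band time offset. A minimal sketch of that arithmetic follows; the pixel size and offset defaults here are illustrative parameters, not values taken from the paper.

```python
import math

def approximate_speed_and_heading(row_a, col_a, row_b, col_b,
                                  pixel_size_m=10.0, band_offset_s=1.0):
    """Approximate object speed (km/h) and heading (degrees from north)
    from the positions of the same object seen in two bands sensed
    `band_offset_s` seconds apart. Assumes row indices increase southward;
    both parameter defaults are illustrative placeholders."""
    dx = (col_b - col_a) * pixel_size_m          # east-west displacement, m
    dy = (row_a - row_b) * pixel_size_m          # south-north displacement, m
    distance_m = math.hypot(dx, dy)
    speed_kmh = distance_m / band_offset_s * 3.6
    heading_deg = math.degrees(math.atan2(dx, dy)) % 360.0
    return speed_kmh, heading_deg
```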
12. Operational Rule Extraction and Construction Based on Task Scenario Analysis. Information 2022. [DOI: 10.3390/info13030144]
Abstract
Changes in the information age have created the need for more efficient and effective autonomous decision-making. A method of extracting and constructing naval operations decision-making rules based on scenario analysis is proposed. Template specifications for Event-Condition-Action (ECA) rules are defined, and an SWRL-based consistency checking method for ECA rules is proposed. The logical relationships and state transitions of the naval operational process are analyzed in detail, and the association of objects, events, and behaviors is established. Finally, the operation of the proposed methods is illustrated through an example process, showing that the method can effectively solve the problems of extracting and constructing self-decision-making rules in the naval battlefield decision environment while avoiding reliance on artificial intelligence, which may introduce uncertain factors.
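To make the Event-Condition-Action pattern concrete, here is a minimal illustrative ECA rule representation and dispatch loop in Python; the rule content, field names, and triggering logic are invented for illustration and are not taken from the paper's SWRL templates.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class ECARule:
    """Minimal Event-Condition-Action rule (illustrative only)."""
    event: str                                   # event type that triggers the rule
    condition: Callable[[Dict[str, Any]], bool]  # predicate over the event payload
    action: Callable[[Dict[str, Any]], None]     # what to do when the condition holds

def dispatch(event_type: str, payload: Dict[str, Any], rules: List[ECARule]) -> None:
    """Fire every rule whose event matches and whose condition is satisfied."""
    for rule in rules:
        if rule.event == event_type and rule.condition(payload):
            rule.action(payload)

# Hypothetical usage: raise an alert when a detected contact is close.
rules = [
    ECARule(
        event="contact_detected",
        condition=lambda p: p.get("range_km", float("inf")) < 10,
        action=lambda p: print(f"ALERT: contact {p['id']} within 10 km"),
    )
]
dispatch("contact_detected", {"id": "C-01", "range_km": 7.5}, rules)
```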
13. Bouguettaya A, Zarzour H, Kechida A, Taberkit AM. Deep learning techniques to classify agricultural crops through UAV imagery: a review. Neural Computing and Applications 2022; 34:9511-9536. [PMID: 35281624] [PMCID: PMC8898032] [DOI: 10.1007/s00521-022-07104-9]
Abstract
During the last few years, unmanned aerial vehicle (UAV) technologies have been widely used to improve agricultural productivity while reducing drudgery, inspection time, and crop management costs. Moreover, they are able to cover large areas in a matter of a few minutes. Owing to impressive technological advances, UAV-based remote sensing technologies are increasingly used to collect valuable data that can support many precision agriculture applications, including crop/plant classification. Processing these data accurately requires powerful tools and algorithms, such as deep learning approaches. Recently, the convolutional neural network (CNN) has emerged as a powerful tool for image processing tasks, achieving remarkable results and becoming the state-of-the-art technique for vision applications. In the present study, we review recent CNN-based methods applied to UAV-based remote sensing image analysis for crop/plant classification, to help researchers and farmers decide which algorithms to use according to their crops and the available hardware. Fusing different UAV-based data sources with deep learning approaches has emerged as a powerful way to classify different crop types accurately. Readers of the present review will learn about the most challenging issues facing researchers in classifying different crop types from UAV imagery, as well as potential solutions for improving the performance of deep learning-based algorithms.
Affiliation(s)
- Abdelmalek Bouguettaya: Research Centre in Industrial Technologies (CRTI), P.O. Box 64, 16014 Cheraga, Algiers, Algeria
- Hafed Zarzour: Department of Mathematics and Computer Science, Souk Ahras University, 41000 Souk Ahras, Algeria
- Ahmed Kechida: Research Centre in Industrial Technologies (CRTI), P.O. Box 64, 16014 Cheraga, Algiers, Algeria
- Amine Mohammed Taberkit: Research Centre in Industrial Technologies (CRTI), P.O. Box 64, 16014 Cheraga, Algiers, Algeria