1. Enhanced PRIM recognition using PRI sound and deep learning techniques. PLoS One 2024; 19:e0298373. PMID: 38691542; PMCID: PMC11062556; DOI: 10.1371/journal.pone.0298373. Received: 10/05/2023; Accepted: 01/24/2024.
Abstract
Pulse repetition interval modulation (PRIM) is integral to radar identification in modern electronic support measure (ESM) and electronic intelligence (ELINT) systems. Various distortions, including missing pulses, spurious pulses, unintended jitters, and noise from radar antenna scans, often hinder the accurate recognition of PRIM. This research introduces a novel three-stage approach for PRIM recognition, emphasizing the innovative use of PRI sound. A transfer learning-aided deep convolutional neural network (DCNN) is initially used for feature extraction. This is followed by an extreme learning machine (ELM) for real-time PRIM classification. Finally, a gray wolf optimizer (GWO) refines the network's robustness. To evaluate the proposed method, we developed a real experimental dataset consisting of the sounds of six common PRI patterns. We utilized eight pre-trained DCNN architectures for evaluation, with VGG16 and ResNet50V2 notably achieving recognition accuracies of 97.53% and 96.92%. Integrating the ELM and GWO further raised these accuracies to 98.80% and 97.58%, respectively. This research advances radar identification by offering an enhanced method for PRIM recognition, emphasizing the potential of PRI sound to address real-world distortions in ESM and ELINT systems.
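The ELM used in the paper's second stage admits a compact sketch: the hidden-layer weights are random and fixed, and only the output weights are solved in closed form, which is what makes ELM fast enough for real-time classification. This is a hedged illustration, not the paper's implementation: the feature vectors below are random stand-ins for DCNN (e.g. VGG16) embeddings of PRI sound, and the two-class setup is an assumption (the paper recognizes six PRI patterns).

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y_onehot, n_hidden=64, rng=rng):
    # Random input weights and biases are fixed; only the output
    # weights (beta) are fitted, via a closed-form least-squares solve.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)               # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot  # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy data: two well-separated "PRI pattern" classes in feature space.
X0 = rng.normal(0.0, 0.3, size=(50, 8))
X1 = rng.normal(2.0, 0.3, size=(50, 8))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
Y = np.eye(2)[y]            # one-hot targets

W, b, beta = elm_train(X, Y)
acc = (elm_predict(X, W, b, beta) == y).mean()
```

Because training reduces to one pseudo-inverse, retraining on new pulse data is cheap, which matches the real-time motivation given in the abstract.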
2. Hyperspectral imaging combined with GA-SVM for maize variety identification. Food Sci Nutr 2024; 12:3177-3187. PMID: 38726456; PMCID: PMC11077206; DOI: 10.1002/fsn3.3984. Received: 10/29/2023; Revised: 01/05/2024; Accepted: 01/10/2024.
Abstract
The demand for maize variety identification has increased dramatically because seed mixing and the passing off of inferior varieties as high-quality ones continue to occur, making efficient and accurate identification an urgent problem. A hyperspectral image acquisition system was used to acquire images of maize seeds. Regions of interest (ROI) covering the embryo, 10 × 10 pixels in size, were extracted, and the average spectral information in the range of 949.43-1709.49 nm was retained for subsequent study in order to eliminate random noise at both ends of the spectrum. The Savitzky-Golay (SG) smoothing algorithm and multiple scattering correction (MSC) were used to pretreat the full-band spectra. Feature wavelengths were screened by the successive projections algorithm (SPA) and competitive adaptive reweighted sampling (CARS) individually, and by the two combinations CARS-SPA and CARS + SPA. Support vector machines (SVMs), along with models optimized by a genetic algorithm (GA) and particle swarm optimization (PSO), were established using the full bands (FB) and the feature bands as model inputs. The results showed that the MSC-(CARS-SPA)-GA-SVM model performed best, with a test-set accuracy of 93.00%, 8 feature variables, and a running time of 24.45 s. MSC pretreatment can effectively eliminate the scattering effect in spectral data, and the feature wavelengths extracted by CARS-SPA can represent the information of all wavelengths. The study proved that hyperspectral imaging combined with GA-SVM can identify maize varieties, providing a theoretical basis for maize variety classification and authenticity identification.
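The GA-SVM idea in this abstract — evolving SVM hyperparameters against cross-validated accuracy — can be sketched compactly. This is a hedged illustration on synthetic data, not the paper's pipeline: the real model operated on MSC-pretreated CARS-SPA feature bands, whereas here a toy classification set and a minimal truncation-selection GA over (C, gamma) stand in.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-in for spectral features of maize seeds.
X, y = make_classification(n_samples=120, n_features=8, n_informative=5,
                           random_state=1)

def fitness(genome):
    # Genome encodes log10(C) and log10(gamma); fitness is CV accuracy.
    C, gamma = 10 ** genome[0], 10 ** genome[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Tiny generational GA: truncation selection plus Gaussian mutation.
pop = rng.uniform(-2, 2, size=(10, 2))
for _ in range(5):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]        # keep the best 4
    children = parents[rng.integers(0, 4, size=6)] + rng.normal(0, 0.3, (6, 2))
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
best_acc = fitness(best)
```

A real GA-SVM would typically add crossover and a larger population; the truncation-plus-mutation loop above keeps the sketch short while preserving the select-and-perturb character of the search.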
3. EResNet-SVM: an overfitting-relieved deep learning model for recognition of plant diseases and pests. J Sci Food Agric 2024. PMID: 38483173; DOI: 10.1002/jsfa.13462. Received: 07/26/2023; Revised: 02/01/2024; Accepted: 03/14/2024.
Abstract
BACKGROUND Accurate recognition and early warning of plant diseases and pests are prerequisites for their intelligent prevention and control. Because of the phenotypic similarity of affected plants once diseases and pests occur, as well as interference from the external environment, traditional deep learning models often face overfitting in the phenotype recognition of plant diseases and pests, which leads not only to slow network convergence but also to low recognition accuracy. RESULTS Motivated by these problems, the present study proposes a deep learning model, EResNet-support vector machine (SVM), to alleviate overfitting in the recognition and classification of plant diseases and pests. First, the feature extraction capability of the model is improved by adding feature extraction layers to the convolutional neural network. Second, order-reduced modules are embedded and a sparsely activated function is introduced to reduce model complexity and alleviate overfitting. Finally, a classifier fusing an SVM with the fully connected layers is introduced to transform the original non-linear classification problem into a linear classification problem in a high-dimensional space, further alleviating overfitting and improving the recognition accuracy for plant diseases and pests. Ablation experiments further demonstrate that the fused structure can effectively alleviate overfitting and improve recognition accuracy. Recognition experiments on typical plant diseases and pests show that the proposed EResNet-SVM model has a test accuracy of 99.30% for eight conditions (seven plant diseases and one normal), which is 5.90% higher than the original ResNet18. Compared with the classic AlexNet, GoogLeNet, Xception, SqueezeNet and DenseNet201 models, the accuracy of the EResNet-SVM model is higher by 5.10%, 7.00%, 8.10%, 6.20% and 1.90%, respectively. The test accuracy of the EResNet-SVM model for six insect pests is 100%, which is 3.90% higher than that of the original ResNet18 model. CONCLUSION This research provides not only useful references for alleviating the overfitting problem in deep learning, but also theoretical and technical support for the intelligent detection and control of plant diseases and pests. © 2024 Society of Chemical Industry.
4. SCGNet: efficient sparsely connected group convolution network for wheat grains classification. Front Plant Sci 2023; 14:1304962. PMID: 38186591; PMCID: PMC10766779; DOI: 10.3389/fpls.2023.1304962. Received: 09/30/2023; Accepted: 12/01/2023.
Abstract
Introduction Efficient and accurate varietal classification of wheat grains is crucial for maintaining varietal purity and reducing susceptibility to pests and diseases, thereby enhancing crop yield. Traditional manual and machine learning methods for wheat grain identification often suffer from inefficiency and large model sizes. In this study, we propose a novel classification and recognition model called SCGNet, designed for rapid and efficient wheat grain classification. Methods Specifically, our proposed model incorporates several modules that enhance information exchange and feature multiplexing between group convolutions. This mechanism enables the network to gather feature information from each subgroup of the previous layer, facilitating effective utilization of upper-layer features. Additionally, we introduce sparsity in the channel connections between groups to further reduce computational complexity without compromising accuracy. Furthermore, we design a novel classification output layer based on 3-D convolution, replacing the maximum pooling layer and fully connected layer of conventional convolutional neural networks (CNNs). This modification makes classification output generation more efficient. Results We conduct extensive experiments using a curated wheat grain dataset, demonstrating the superior performance of our proposed method. Our approach achieves an impressive accuracy of 99.56%, precision of 99.59%, recall of 99.55%, and an F1-score of 99.57%. Discussion Notably, our method also has the lowest number of floating-point operations (FLOPs) and parameters, making it a highly efficient solution for wheat grain classification.
5. Coal Flow Foreign Body Classification Based on ESCBAM and Multi-Channel Feature Fusion. Sensors (Basel) 2023; 23:6831. PMID: 37571614; PMCID: PMC10422397; DOI: 10.3390/s23156831. Received: 06/08/2023; Revised: 07/29/2023; Accepted: 07/30/2023.
Abstract
Foreign bodies often cause belt scratching and tearing, coal stacking, and plugging during the transportation of coal via belt conveyors. To overcome the problems of large parameter counts, heavy computational complexity, low classification accuracy, and poor processing speed in current classification networks, a novel network based on ESCBAM and multi-channel feature fusion is proposed in this paper. Firstly, to improve feature utilization and the network's ability to learn detailed information, a multi-channel feature fusion strategy was designed to fully integrate the independent feature information of each channel. Then, to reduce computational cost while maintaining excellent feature extraction capability, an information fusion network was constructed, adopting depthwise separable convolution and an improved residual structure as the basic feature extraction unit. Finally, to enhance the network's understanding of image context and improve its feature representation, a novel ESCBAM attention mechanism with strong generalization and portability was constructed by integrating spatial and channel features. The experimental results demonstrate that the proposed method has the advantages of fewer parameters, low computational complexity, high accuracy, and fast processing speed, and can effectively classify foreign bodies on the belt conveyor.
6. Identification of Soybean Mutant Lines Based on Dual-Branch CNN Model Fusion Framework Utilizing Images from Different Organs. Plants (Basel) 2023; 12:2315. PMID: 37375940; DOI: 10.3390/plants12122315. Received: 05/16/2023; Revised: 06/08/2023; Accepted: 06/13/2023.
Abstract
The accurate identification and classification of soybean mutant lines is essential for developing new plant varieties through mutation breeding. However, most existing studies have focused on the classification of soybean varieties, and distinguishing mutant lines solely by their seeds can be challenging due to their high genetic similarity. Therefore, in this paper, we designed a dual-branch convolutional neural network (CNN), composed of two identical single CNNs, that fuses the image features of pods and seeds to solve the soybean mutant line classification problem. Four single CNNs (AlexNet, GoogLeNet, ResNet18, and ResNet50) were used to extract features, and the output features were fused and fed into the classifier. The results demonstrate that dual-branch CNNs outperform single CNNs, with the dual-ResNet50 fusion framework achieving a 90.22 ± 0.19% classification rate. We also identified the most similar mutant lines and the genetic relationships between certain soybean lines using a clustering tree and the t-distributed stochastic neighbor embedding algorithm. Our study represents one of the first efforts to combine multiple organs for the identification of soybean mutant lines. The findings provide a new path for selecting candidate lines in soybean mutation breeding and advance soybean mutant line recognition technology.
7. Peanut leaf disease identification with deep learning algorithms. Mol Breed 2023; 43:25. PMID: 37313521; PMCID: PMC10248705; DOI: 10.1007/s11032-023-01370-8. Accepted: 03/11/2023.
Abstract
Peanut is an essential food and oilseed crop. One of the most critical factors contributing to low yield and impaired growth of peanut plants is leaf disease attack, which directly reduces the yield and quality of the crop. Existing approaches have shortcomings such as strong subjectivity and insufficient generalization ability, so we proposed a new deep learning model for peanut leaf disease identification. The proposed model combines an improved Xception, a parts-activated feature fusion module, and two attention-augmented branches. We obtained an accuracy of 99.69%, which was 9.67%-23.34% higher than those of Inception-V4, ResNet 34, and MobileNet-V3. In addition, supplementary experiments were performed to confirm the generality of the proposed model: applied to cucumber, apple, rice, corn, and wheat leaf disease identification, it yielded an average accuracy of 99.61%. The experimental results demonstrate that the proposed model can identify different crop leaf diseases, proving its feasibility and generalization, and it holds promise for the detection of other crop diseases. Supplementary Information The online version contains supplementary material available at 10.1007/s11032-023-01370-8.
8. Establishment and Application of a Multiplex PCR Assay for Detection of Sclerotium rolfsii, Lasiodiplodia theobromae, and Fusarium oxysporum in Peanut. Mol Biotechnol 2023. PMID: 36607498; DOI: 10.1007/s12033-022-00647-1. Received: 09/11/2022; Accepted: 12/16/2022.
Abstract
Southern blight, stem rot, and root rot are serious soil-borne fungal diseases of peanut caused by Sclerotium rolfsii, Lasiodiplodia theobromae, and Fusarium oxysporum, respectively. These diseases are difficult to diagnose in the early stage of infection, so the optimal treatment window is often missed. Establishing a rapid detection system is therefore of great significance for the early prevention of peanut soil-borne fungal diseases. Here, we developed a multiplex PCR system that detects the fungal pathogens of peanut southern blight, stem rot, and root rot simultaneously. The species-specific primer pairs amplified products of characteristic sizes: 1005 bp (F. oxysporum), 238 bp (L. theobromae), and 638 bp (S. rolfsii). The detection limit for both the single and multiplex PCR primer sets was 1 ng of template DNA under in vitro conditions, and amplification of non-target fungal species yielded no non-specific products. Validation showed that the multiplex PCR could effectively detect single and mixed infections in field samples. Overall, this study proved that the mPCR assay is a rapid, reliable, and simple tool for the simultaneous detection of three important peanut soil-borne diseases, facilitating prompt treatment and prevention of peanut root diseases.
9. AI-Assisted Fusion of Scanning Electrochemical Microscopy Images Using Novel Soft Probe. ACS Meas Sci Au 2022; 2:576-583. PMID: 36785775; PMCID: PMC9885998; DOI: 10.1021/acsmeasuresciau.2c00032. Received: 05/30/2022; Revised: 08/01/2022; Accepted: 08/02/2022.
Abstract
Scanning electrochemical microscopy (SECM) is a scanning probe technique that has attracted considerable attention because of its ability to interrogate surface morphology or electrochemical reactivity. However, the quality of SECM images generally depends on the size of the electrode and many uncontrollable factors, and handling fragile glass ultramicroelectrodes and interpreting blurred images can frustrate researchers. To overcome these challenges, we developed novel soft gold probes and established an AI-assisted methodology for image fusion. A gold microelectrode probe with high softness was developed to scan fragile samples, and the distribution of EGFR (a protein biomarker) in oral cancer was investigated. We then fused the optical microscopy and SECM images in Matlab to enhance image quality. However, thousands of fused images were generated as the fusion parameters were varied, which is tedious to evaluate manually, so a deep learning model was built to select the best-fused images according to their contrast and clarity. The quality of the SECM images was thus improved by combining the novel soft probe with the image fusion technique. In the future, scanning probes with AI-assisted fused SECM image processing may allow images to be interpreted more precisely and contribute to the early detection of cancers.
10. Citrus disease detection using convolution neural network generated features and Softmax classifier on hyperspectral image data. Front Plant Sci 2022; 13:1043712. PMID: 36570926; PMCID: PMC9768035; DOI: 10.3389/fpls.2022.1043712. Received: 09/13/2022; Accepted: 11/18/2022.
Abstract
Identification and segregation of citrus fruit with diseases and peel blemishes are required to preserve market value. Previously developed machine vision approaches could only distinguish cankerous from non-cankerous citrus, while this research focused on detecting eight different peel conditions on citrus fruit using hyperspectral imaging (HSI) and an AI-based classification algorithm. The objectives of this paper were: (i) selecting the five most discriminating bands among 92 using PCA, (ii) training and testing a custom convolutional neural network (CNN) model for classification with the selected bands, and (iii) comparing the CNN's performance using the five PCA-selected bands versus five randomly selected bands. A hyperspectral imaging system from earlier work was used to acquire reflectance images in the spectral region from 450 to 930 nm (92 spectral bands). Ruby Red grapefruits that were normal, cankerous, or affected by five other common peel conditions, including greasy spot, insect damage, melanose, scab, and wind scar, were tested. A novel CNN based on the VGG-16 architecture was developed for feature extraction, with a Softmax layer for classification. The PCA-selected bands were 666.15, 697.54, 702.77, 849.24 and 917.25 nm, which resulted in an average accuracy, sensitivity, and specificity of 99.84%, 99.84% and 99.98%, respectively. Ten trials of five randomly selected bands resulted in only slightly lower performance, with accuracy, sensitivity, and specificity of 98.87%, 98.43% and 99.88%, respectively. These results demonstrate that an AI-based algorithm can successfully classify eight different peel conditions. The findings reported herein can be used as a precursor to develop a machine vision-based, real-time peel condition classification system for citrus processing.
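The band-selection step described above — ranking spectral bands by their PCA loadings and keeping the top few — can be illustrated with a small numpy sketch. The synthetic "reflectance" matrix, the choice of two leading components, and the informative band indices are all assumptions for demonstration; the paper works on real 92-band citrus images.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands = 500, 92
# Synthetic stand-in for a flattened hyperspectral cube (pixels x bands).
X = rng.normal(size=(n_pixels, n_bands))
# Inject a shared high-variance component into three "discriminating" bands.
X[:, [10, 40, 70]] += rng.normal(size=(n_pixels, 1)) * 5

# Principal components via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Score each band by the magnitude of its loadings on the leading PCs,
# then keep the five highest-scoring bands.
loadings = Vt[:2]
band_scores = np.abs(loadings).sum(axis=0)
top5 = np.argsort(band_scores)[-5:]
```

With real reflectance data the scores would be computed after the same kind of preprocessing the paper applies, and the selected indices would map back to physical wavelengths (e.g. 666.15 nm).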
11. A nomogram based on radiomics signature and deep-learning signature for preoperative prediction of axillary lymph node metastasis in breast cancer. Front Oncol 2022; 12:940655. PMID: 36338691; PMCID: PMC9633001; DOI: 10.3389/fonc.2022.940655. Received: 05/10/2022; Accepted: 10/07/2022.
Abstract
PURPOSE To develop a nomogram based on a radiomics signature and a deep-learning signature for predicting axillary lymph node (ALN) metastasis in breast cancer. METHODS A total of 151 patients were assigned to a training cohort (n = 106) and a test cohort (n = 45). Radiomics features were extracted from DCE-MRI images, and deep-learning features were extracted with the VGG-16 algorithm. Seven machine learning models were built using the selected features to evaluate the predictive value of radiomics or deep-learning features for ALN metastasis in breast cancer. A nomogram was then constructed based on a multivariate logistic regression model incorporating the radiomics signature, the deep-learning signature, and clinical risk factors. RESULTS Five radiomics features and two deep-learning features were selected for machine learning model construction. In the test cohort, the AUC was above 0.80 for most of the radiomics models except DecisionTree and ExtraTrees. In addition, the K-nearest neighbor (KNN), XGBoost, and LightGBM models using deep-learning features had AUCs above 0.80 in the test cohort. The nomogram, which incorporated the radiomics signature, deep-learning signature, and MRI-reported LN status, showed good calibration and performance, with an AUC of 0.90 (0.85-0.96) in the training cohort and 0.90 (0.80-0.99) in the test cohort. The DCA showed that the nomogram could offer more net benefit than the radiomics or deep-learning signature alone. CONCLUSIONS Both radiomics and deep-learning features are predictive of ALN metastasis in breast cancer, and a nomogram incorporating both signatures achieves better prediction performance than either signature alone.
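The nomogram's core — a multivariate logistic model combining a radiomics score, a deep-learning score, and a binary MRI-reported LN status — can be sketched as follows. All inputs and coefficients here are synthetic stand-ins, not the paper's data; only the model structure mirrors the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 150
# Hypothetical per-patient scores (assumed, for illustration only).
radiomics = rng.normal(size=n)       # radiomics signature score
deep = rng.normal(size=n)            # deep-learning signature score
mri_ln = rng.integers(0, 2, size=n)  # MRI-reported LN status (0/1)

# Simulate metastasis labels from an assumed true logistic relationship.
logit = 1.2 * radiomics + 1.0 * deep + 0.8 * mri_ln - 0.4
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Multivariate logistic regression over the three predictors, as in a nomogram.
X = np.column_stack([radiomics, deep, mri_ln])
model = LogisticRegression().fit(X, y)
nomogram_acc = model.score(X, y)
```

In a clinical nomogram, each fitted coefficient would then be rendered as a point scale so the combined probability can be read off graphically.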
12. GADF-VGG16 based fault diagnosis method for HVDC transmission lines. PLoS One 2022; 17:e0274613. DOI: 10.1371/journal.pone.0274613. Received: 04/07/2022; Accepted: 09/01/2022.
Abstract
Transmission lines are the part of a transmission system most prone to faults, so high-precision fault diagnosis is very important for quick troubleshooting. Current intelligent fault diagnosis methods have problems such as difficulty in extracting fault features accurately, low fault recognition accuracy, and poor fault tolerance. To solve these problems, this paper proposes an intelligent fault diagnosis method for high-voltage direct current (HVDC) transmission lines based on the Gramian angular difference field (GADF) and an improved convolutional neural network (VGG16). The method first performs variational mode decomposition (VMD) on the original fault voltage signal, then uses the correlation coefficient method to select the appropriate intrinsic mode function (IMF) component and converts it into a two-dimensional image using the GADF. Finally, the improved VGG16 network extracts and classifies fault features adaptively to realize fault diagnosis. To improve the performance of the VGG16 fault diagnosis model, batch normalization, dense connection, and global average pooling techniques are introduced. Comparative experiments show that the proposed model can identify fault features well and has high fault diagnosis accuracy. In addition, the method is unaffected by fault type, transition resistance, and fault distance, has good anti-interference ability and strong fault tolerance, and has great potential in practical applications.
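The GADF step at the heart of this pipeline has a short closed form: rescale the signal to [-1, 1], map each sample to a polar angle, and take pairwise angle differences. The sketch below implements that transform in plain numpy on a synthetic signal standing in for a VMD-selected IMF; the VMD and VGG16 stages of the paper are omitted.

```python
import numpy as np

def gadf(x):
    # Rescale to [-1, 1] so the polar-angle encoding is well defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1, 1))
    # GADF[i, j] = sin(phi_i - phi_j): an antisymmetric 2-D "image"
    # that preserves the temporal ordering of the 1-D signal.
    return np.sin(phi[:, None] - phi[None, :])

t = np.linspace(0, 1, 64)
signal = np.sin(2 * np.pi * 5 * t)  # stand-in for a fault-voltage IMF
image = gadf(signal)                # 64 x 64 input for a CNN classifier
```

The resulting matrix is zero on the diagonal and antisymmetric, which is why the difference field (rather than the summation field, GASF) encodes the direction of change between any two time points.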
13. Identification of Oil Tea (Camellia oleifera C.Abel) Cultivars Using EfficientNet-B4 CNN Model with Attention Mechanism. Forests 2021. DOI: 10.3390/f13010001.
Abstract
Cultivar identification is a basic task in oil tea (Camellia oleifera C.Abel) breeding, quality analysis, and industrial structure adjustment. However, because the differences in texture, shape, and color among oil tea cultivars are usually inconspicuous and subtle, identifying oil tea cultivars can be a significant challenge. The main goal of this study is to propose an automatic and accurate method for identifying oil tea cultivars. A new deep learning model, called EfficientNet-B4-CBAM, is built for this purpose. First, 4725 images containing four cultivars were collected to build an oil tea cultivar identification dataset. EfficientNet-B4 was selected as the base model, and the Convolutional Block Attention Module (CBAM) was integrated into it to build EfficientNet-B4-CBAM, thereby improving the model's focus on fruit areas and its ability to express fruit-area information. Finally, the cultivar identification capability of EfficientNet-B4-CBAM was tested on the testing dataset and compared with InceptionV3, VGG16, ResNet50, EfficientNet-B4, and EfficientNet-B4-SE. The experiments showed that EfficientNet-B4-CBAM achieves an overall accuracy of 97.02% and a kappa coefficient of 0.96, higher than the other methods in the comparative experiments. In addition, gradient-weighted class activation mapping visualization showed that EfficientNet-B4-CBAM pays more attention to the fruit areas that play a key role in cultivar identification. This study provides effective strategies and a theoretical basis for applying deep learning to oil tea cultivar identification, and technical support for the automatic identification and non-destructive testing of oil tea cultivars.