1
Shi Y, Yu Y, Zhang J, Yin C, Chen Y, Men H. Origin traceability of agricultural products: A lightweight collaborative neural network for spectral information processing. Food Res Int 2025;208:116131. PMID: 40263820. DOI: 10.1016/j.foodres.2025.116131.
Abstract
The natural conditions of a region, including its climate, soil, and water quality, significantly influence the nutrient composition and quality of agricultural products. Identifying the origin of agricultural products can prevent adulteration, imitation, and other fraudulent practices, ensuring food quality and safety. This work proposes a Lightweight Collaborative Neural Network (LC-Net) integrated with a hyperspectral system to recognize peanuts and rice from seven different origins. Its Collaborative Spectral Feature Extraction Module (CSFEM) strengthens the expression of spectral features and improves detection performance by extracting both local and global deep spectral features. LC-Net achieves 99.33% accuracy, 98.98% precision, and 99.28% recall for peanuts, and 99.76% accuracy, 99.63% precision, and 99.73% recall for rice. Combined with spectral analysis, this AI-based method provides a reliable technique for ensuring the quality and safety of agricultural products.
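The paper's LC-Net code is not reproduced here; as a rough, hypothetical sketch of the local/global collaboration the abstract describes, the PyTorch block below pairs a 1-D convolutional branch (local spectral features) with a self-attention branch (global spectral context) and fuses them by addition. The class name, layer choices, and sizes are all assumptions, not the authors' CSFEM.

```python
import torch
import torch.nn as nn

class CollaborativeSpectralBlock(nn.Module):
    """Illustrative local/global spectral feature block (not the authors' CSFEM).

    Local branch: 1-D convolutions over neighbouring wavelengths.
    Global branch: multi-head self-attention across the whole spectrum.
    The two branches are fused by simple addition; all sizes are assumptions.
    """
    def __init__(self, channels: int = 32, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, bands)
        local = self.local(x)                        # local deep spectral features
        seq = x.transpose(1, 2)                      # (batch, bands, channels) for attention
        global_, _ = self.attn(seq, seq, seq)        # global deep spectral features
        global_ = self.norm(global_).transpose(1, 2)
        return local + global_                       # collaborative fusion


if __name__ == "__main__":
    spectra = torch.randn(8, 32, 256)                    # 8 samples, 32 channels, 256 bands
    print(CollaborativeSpectralBlock()(spectra).shape)   # torch.Size([8, 32, 256])
```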
Affiliation(s)
- Yan Shi
- School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China; Advanced Sensor Research Institution, Northeast Electric Power University, Jilin 132012, China; Bionic Sensing and Pattern Recognition Team, Northeast Electric Power University, Jilin 132012, China.
- Yang Yu
- School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China; Advanced Sensor Research Institution, Northeast Electric Power University, Jilin 132012, China; Bionic Sensing and Pattern Recognition Team, Northeast Electric Power University, Jilin 132012, China.
- Jinyue Zhang
- School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China; Bionic Sensing and Pattern Recognition Team, Northeast Electric Power University, Jilin 132012, China.
- Chongbo Yin
- School of Bioengineering, Chongqing University, Chongqing 400044, China.
- Yizhou Chen
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh 15213, United States of America.
- Hong Men
- School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China; Advanced Sensor Research Institution, Northeast Electric Power University, Jilin 132012, China.
2
Wu H, Qu G, Xiao Z, Chunyu F. Enhancing left ventricular segmentation in echocardiography with a modified mixed attention mechanism in SegFormer architecture. Heliyon 2024;10:e34845. PMID: 39170227. PMCID: PMC11336270. DOI: 10.1016/j.heliyon.2024.e34845.
Abstract
Echocardiography is a key tool for the diagnosis of cardiac diseases, and accurate left ventricular (LV) segmentation in echocardiographic videos is crucial for the assessment of cardiac function. However, semantic segmentation of video must account for the temporal correlation between frames, which makes the task very challenging. This article introduces a method that incorporates a modified mixed attention mechanism into the SegFormer architecture, enabling it to capture the temporal correlation present in video data. For each frame in the sequence, the image is passed through the encoder to obtain the current-time feature map. This map, together with the historical-time feature map, is fed into a time-sensitive convolutional block attention module with a mixed attention mechanism (TCBAM). The TCBAM output serves both as the historical-time feature map for the next frame and as the fused combination of current and historical features for the current frame. The fused feature map is then passed to the Multilayer Perceptron (MLP) and subsequent layers to generate the final segmentation. Extensive experiments were conducted on the Hamad Medical Corporation, Tampere University, and Qatar University (HMC-QU); Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS); and Sunnybrook Cardiac Data (SCD) datasets. The method achieves a Dice coefficient of 97.92% on the SCD dataset and an F1 score of 0.9263 on the CAMUS dataset, outperforming all compared models. This research provides a promising solution to the temporal modeling challenge in video semantic segmentation with transformer-based models and points out a promising direction for future research in this field.
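The exact TCBAM design is not given in this abstract; the sketch below is a hypothetical PyTorch rendering of the described data flow, in which a current-frame feature map and a historical feature map are fused by CBAM-style channel and spatial attention, and the fused map is carried forward as the history for the next frame. Names and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class TemporalCBAM(nn.Module):
    """Illustrative time-aware CBAM-style block (not the paper's exact TCBAM).

    The current-frame feature map is concatenated with a historical feature map,
    refined by channel and spatial attention, and the refined map is returned both
    as the fused feature for this frame and as the history for the next frame.
    All layer sizes are assumptions.
    """
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.squeeze = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, current, history=None):
        if history is None:                        # first frame: history = current
            history = current
        x = self.squeeze(torch.cat([current, history], dim=1))
        # channel attention (avg- and max-pooled descriptors through a shared MLP)
        b, c, _, _ = x.shape
        ca = torch.sigmoid(
            self.channel_mlp(x.mean(dim=(2, 3))) + self.channel_mlp(x.amax(dim=(2, 3)))
        ).view(b, c, 1, 1)
        x = x * ca
        # spatial attention (channel-wise mean and max maps through a 7x7 conv)
        sa = torch.sigmoid(
            self.spatial_conv(torch.cat([x.mean(dim=1, keepdim=True),
                                         x.amax(dim=1, keepdim=True)], dim=1))
        )
        fused = x * sa
        return fused, fused.detach()               # fused map, next-frame history


if __name__ == "__main__":
    tcbam = TemporalCBAM(channels=64)
    hist = None
    for _ in range(3):                             # toy video of three frames
        frame_feat = torch.randn(1, 64, 28, 28)
        fused, hist = tcbam(frame_feat, hist)
    print(fused.shape)                             # torch.Size([1, 64, 28, 28])
```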
Affiliation(s)
- Hanqiong Wu
- Internal Medicine, The First Hospital of Jinzhou Medical University, Jinzhou 121001, China.
- Gangrong Qu
- Cardiovascular Medicine, Chongqing General Hospital of the Armed Police Force, Chongqing 400061, China.
- Zhifeng Xiao
- China Nanhu Academy of Electronics and Information Technology, Jiaxing 314050, China.
- Fan Chunyu
- Department of Cardiovascular Medicine, The People's Hospital of Liaoning Province, Shenyang 110067, China.
3
Zhang Z, Han C, Wang X, Li H, Li J, Zeng J, Sun S, Wu W. Large field-of-view pine wilt disease tree detection based on improved YOLO v4 model with UAV images. Front Plant Sci 2024;15:1381367. PMID: 38966144. PMCID: PMC11222607. DOI: 10.3389/fpls.2024.1381367.
Abstract
Introduction: Pine wilt disease spreads rapidly and kills large numbers of pine trees, so exploring prevention and control measures suited to its different stages is of great significance. Methods: To address rapid detection of pine wilt over a large field of view, we used a drone to collect multiple sets of diseased-tree samples at different times of the year, which made the deep-learning model more generalizable. We improved the YOLO v4 (You Only Look Once version 4) network for detecting pine wilt disease and used a channel attention mechanism module to improve the learning ability of the network. Results: The ablation experiments showed that adding the SENet attention module together with a self-designed, feature-pyramid-based feature enhancement module gave the best improvement; the mAP of the improved model was 79.91%. Discussion: Comparing the improved YOLO v4 model with SSD, Faster RCNN, YOLO v3, and YOLO v5 showed that its mAP was significantly higher than that of the other four models, providing an efficient solution for intelligent diagnosis of pine wood nematode disease. The improved YOLO v4 model enables precise location and identification of pine wilt trees under changing light conditions, and deploying it on a UAV enables large-scale detection that helps address the challenges of rapid detection and prevention of pine wilt disease.
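The SENet module mentioned in the ablation study is the standard squeeze-and-excitation channel attention block; a minimal PyTorch version is sketched below for reference. How the authors wire it into their modified YOLO v4 neck, and their self-designed feature enhancement module, are not shown here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel attention, as referenced in the abstract.

    This is only the generic SENet building block, not the authors' full modified network.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # squeeze + excite
        return x * weights                                       # channel-wise re-weighting


if __name__ == "__main__":
    feat = torch.randn(2, 256, 52, 52)         # e.g. one detection-head feature map
    print(SEBlock(256)(feat).shape)            # torch.Size([2, 256, 52, 52])
```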
Affiliation(s)
- Zhenbang Zhang
- College of Engineering, South China Agricultural University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Utilization and Conservation of Food and Medicinal Resources in Northern Region, Shaoguan University, Shaoguan, China
- College of Intelligent Engineering, Shaoguan University, Shaoguan, China
- Chongyang Han
- College of Engineering, South China Agricultural University, Guangzhou, China
- Xinrong Wang
- College of Plant Protection, South China Agricultural University, Guangzhou, China
- Haoxin Li
- College of Engineering, South China Agricultural University, Guangzhou, China
- Jie Li
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Jinbin Zeng
- College of Engineering, South China Agricultural University, Guangzhou, China
- Si Sun
- College of Forestry and Landscape Architecture, South China Agricultural University, Guangzhou, China
- Weibin Wu
- College of Engineering, South China Agricultural University, Guangzhou, China
4
Tan Y, Su W, Zhao L, Lai Q, Wang C, Jiang J, Wang Y, Li P. Navigation path extraction for inter-row robots in Panax notoginseng shade house based on Im-YOLOv5s. Front Plant Sci 2023;14:1246717. PMID: 37915513. PMCID: PMC10616975. DOI: 10.3389/fpls.2023.1246717.
Abstract
Introduction: The accurate extraction of navigation paths is crucial for the automated navigation of agricultural robots. Navigation line extraction in complex environments such as a Panax notoginseng shade house is challenging because of factors such as the similar colors of the fork rows and the soil and the shadows cast by shade nets. Methods: In this paper, we propose a new method for navigation line extraction based on deep learning and least squares (DL-LS) algorithms. We improve the YOLOv5s algorithm by introducing MobileNetv3 and ECANet. The trained model detects the seven-fork roots in the effective area between rows, and a root-point substitution method determines the coordinates of the localization base points of the seven-fork root points. The seven-fork row lines on both sides of the planting ridge are then fitted using the least squares method. Results: The experimental results indicate that Im-YOLOv5s achieves higher detection performance than other detection models, reaching a mAP (mean Average Precision) of 94.9%. Compared with YOLOv5s, Im-YOLOv5s improves the average accuracy and frame rate by 1.9% and 27.7%, respectively, while the weight size is reduced by 47.9%. The results also show that DL-LS accurately extracts the seven-fork row lines, with a maximum deviation of the navigation baseline in the row direction of 1.64°, meeting the requirements of robot navigation line extraction. Discussion: Compared with existing models, this model detects the seven-fork roots in images more effectively and has lower computational complexity. The proposed method provides a basis for the intelligent mechanization of Panax notoginseng planting.
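The least-squares step described above can be illustrated with a short, hypothetical sketch: detected root base points on each side of the row are fit to a line with NumPy's polyfit, and a mid-line heading is derived from the two fits. Function names, the x-on-y parameterization, and the toy coordinates are assumptions, not the paper's DL-LS implementation.

```python
import numpy as np

def fit_row_line(root_points):
    """Fit a row line through detected root base points by least squares.

    `root_points` is an (N, 2) array of (x, y) image coordinates, e.g. the centres
    of the seven-fork root boxes returned by the detector.  A first-degree polyfit
    is the simplest least-squares line; the paper may post-process further.
    """
    pts = np.asarray(root_points, dtype=float)
    # fit x = a*y + b so near-vertical rows in image space stay well conditioned
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return a, b

def navigation_heading(left_line, right_line):
    """Mid-line heading (degrees from vertical) between two fitted row lines."""
    a_mid = (left_line[0] + right_line[0]) / 2.0
    return np.degrees(np.arctan(a_mid))

if __name__ == "__main__":
    left = fit_row_line([(100, 50), (104, 150), (109, 250), (113, 350)])
    right = fit_row_line([(420, 50), (416, 150), (411, 250), (407, 350)])
    print(round(navigation_heading(left, right), 2))   # ~0.0 deg for a centred path
```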
Affiliation(s)
- Yu Tan
- Faculty of Modern Agricultural Engineering, Kunming University of Technology, Kunming, China
- Wei Su
- Faculty of Modern Agricultural Engineering, Kunming University of Technology, Kunming, China
- Lijun Zhao
- College of Intelligent and Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Qinghui Lai
- School of Energy and Environmental Science, Yunnan Normal University, Kunming, China
- Chenglin Wang
- Faculty of Modern Agricultural Engineering, Kunming University of Technology, Kunming, China
- Jin Jiang
- Faculty of Modern Agricultural Engineering, Kunming University of Technology, Kunming, China
- Yongjie Wang
- Faculty of Modern Agricultural Engineering, Kunming University of Technology, Kunming, China
- Peihang Li
- Faculty of Modern Agricultural Engineering, Kunming University of Technology, Kunming, China
5
Wang C, Chen L, Zhang Y, Zhang L, Tan T. A Novel Cross-Sensor Transfer Diagnosis Method with Local Attention Mechanism: Applied in a Reciprocating Pump. Sensors (Basel) 2023;23:7432. PMID: 37687888. PMCID: PMC10490796. DOI: 10.3390/s23177432.
Abstract
Data-driven mechanical fault diagnosis has developed successfully in recent years, and the case in which training and testing data come from the same distribution is well solved. However, for some large machines with complex mechanical structures, such as reciprocating pumps, it is often not possible to obtain data from specific sensor locations. When the sensor position changes, the feature distribution of the signal data also changes and the fault diagnosis problem becomes more complicated. In this paper, a cross-sensor transfer diagnosis method is proposed that shares information collected by sensors at different locations on the machine to achieve a more accurate and comprehensive fault diagnosis. To enhance the model's ability to perceive the critical parts of the fault signal, a local attention mechanism is embedded into the proposed method. Finally, the method is validated on experimentally acquired vibration signals from reciprocating pumps, demonstrating excellent fault diagnosis accuracy and sensor generalization capability and confirming that practical industrial faults are transferable among different sensors.
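The paper's local attention design is not specified in the abstract; the sketch below is one hypothetical way to realize window-local attention over a 1-D vibration feature sequence in PyTorch, re-weighting each fixed-length window by a softmax over scores computed inside that window. All names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class LocalSignalAttention(nn.Module):
    """Illustrative local attention over 1-D vibration-signal features.

    The feature sequence is split into fixed-length windows and each window is
    re-weighted by attention computed only within that window, so salient local
    fault signatures are emphasised.  Window length and sizes are assumptions,
    not the paper's configuration.
    """
    def __init__(self, channels: int, window: int = 16):
        super().__init__()
        self.window = window
        self.score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length), length divisible by window for simplicity
        b, c, length = x.shape
        w = self.window
        scores = self.score(x).view(b, 1, length // w, w)      # per-step scores
        weights = torch.softmax(scores, dim=-1)                # softmax inside each window
        x = x.view(b, c, length // w, w) * weights             # locally re-weighted features
        return x.view(b, c, length)


if __name__ == "__main__":
    feats = torch.randn(4, 32, 128)                # 4 signals, 32 channels, 128 steps
    print(LocalSignalAttention(32)(feats).shape)   # torch.Size([4, 32, 128])
```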
Affiliation(s)
- Chen Wang
- School of Nuclear Science and Technology, Naval University of Engineering, Wuhan 430033, China
- Ling Chen
- School of Nuclear Science and Technology, Naval University of Engineering, Wuhan 430033, China
- Yongfa Zhang
- School of Nuclear Science and Technology, Naval University of Engineering, Wuhan 430033, China
- Liming Zhang
- School of Nuclear Science and Technology, Naval University of Engineering, Wuhan 430033, China
- Chongqing Pump Industry Co., Ltd., Chongqing 400033, China
- Chongqing Machine Tool Co., Ltd., Chongqing 401336, China
- Tian Tan
- School of Nuclear Science and Technology, Naval University of Engineering, Wuhan 430033, China
6
An enhanced SSD with feature cross-reinforcement for small-object detection. Appl Intell 2023. DOI: 10.1007/s10489-023-04544-1.
7
Tang D, Jin W, Liu D, Che J, Yang Y. Siam Deep Feature KCF Method and Experimental Study for Pedestrian Tracking. Sensors (Basel) 2023;23:482. PMID: 36617099. PMCID: PMC9824739. DOI: 10.3390/s23010482.
Abstract
The tracking of a particular pedestrian is an important issue in computer vision for guaranteeing societal safety. Because of the limited computing performance of unmanned aerial vehicle (UAV) systems, the Correlation Filter (CF) algorithm has been widely used for tracking. However, it has a fixed template size and cannot effectively handle occlusion. Thus, a tracking-by-detection framework was designed in this research. A lightweight YOLOv3-based (You Only Look Once version 3) model with Efficient Channel Attention (ECA) was integrated into the CF algorithm to provide deep features. In addition, a lightweight Siamese CNN with Cross Stage Partial (CSP) connections provided feature representations learned from massive face images, ensuring reliable target-similarity measurement in data association. The result is a Deep Feature Kernelized Correlation Filters method coupled with Siamese-CSP (Siam-DFKCF), which increases tracking robustness. The experimental results show that the anti-occlusion and re-tracking performance of the proposed method is improved, with the tracking accuracy metrics Distance Precision (DP) and Overlap Precision (OP) reaching 0.934 and 0.909, respectively, on our test data.
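The full Siam-DFKCF pipeline is not reproduced here; as background for the KCF core it builds on, the sketch below implements a minimal linear-kernel correlation filter on a single feature channel in NumPy (the original KCF uses a Gaussian kernel, and the paper supplies multi-channel deep features instead of raw pixels). Function names and parameters are illustrative.

```python
import numpy as np

def gaussian_response(height, width, sigma=2.0):
    """Desired correlation output: a 2-D Gaussian whose peak is rolled to (0, 0)."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist2 = (ys - height // 2) ** 2 + (xs - width // 2) ** 2
    return np.roll(np.exp(-dist2 / (2 * sigma ** 2)),
                   (-(height // 2), -(width // 2)), axis=(0, 1))

def kcf_train(feat, y, lam=1e-4):
    """Dual-space filter for a linear kernel: alpha_hat = y_hat / (k_hat + lam)."""
    x_hat = np.fft.fft2(feat)
    k_hat = x_hat * np.conj(x_hat) / feat.size
    return np.fft.fft2(y) / (k_hat + lam)

def kcf_detect(alpha_hat, feat_model, feat_search):
    """Correlation response map; its arg-max gives the target displacement."""
    k_hat = np.fft.fft2(feat_search) * np.conj(np.fft.fft2(feat_model)) / feat_model.size
    return np.real(np.fft.ifft2(alpha_hat * k_hat))

if __name__ == "__main__":
    feat = np.random.rand(64, 64)                 # stand-in for one deep-feature channel
    alpha_hat = kcf_train(feat, gaussian_response(64, 64))
    resp = kcf_detect(alpha_hat, feat, np.roll(feat, (3, 5), axis=(0, 1)))
    print(np.unravel_index(resp.argmax(), resp.shape))   # ~ (3, 5): the applied shift
```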
Affiliation(s)
- Di Tang
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Weijie Jin
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Dawei Liu
- China Aerodynamics Research and Development Center, High Speed Aerodynamic Institute, Mianyang 621000, China
- Jingqi Che
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Yin Yang
- China Aerodynamics Research and Development Center, High Speed Aerodynamic Institute, Mianyang 621000, China
8
Li Z, Xie D, Liu L, Wang H, Chen L. Inter-row information recognition of maize in the middle and late stages via LiDAR supplementary vision. Front Plant Sci 2022;13:1024360. PMID: 36874920. PMCID: PMC9983608. DOI: 10.3389/fpls.2022.1024360.
Abstract
In the middle and late stages of maize, light is limited and non-maize obstacles are present, so a plant protection robot using the traditional visual navigation method will miss some of the navigation information. Therefore, this paper proposes a method that uses LiDAR (laser imaging, detection and ranging) point cloud data to supplement machine vision data for recognizing inter-row information in the middle and late stages of maize. First, we improved the YOLOv5 (You Only Look Once, version 5) algorithm for the characteristics of the actual inter-row environment at these stages by introducing MobileNetv2 and ECANet. Compared with YOLOv5, the improved YOLOv5 (Im-YOLOv5) increased the frame rate by 17.91% and reduced the weight size by 55.56% while the average accuracy dropped by only 0.35%, improving detection performance and shortening model inference time. Second, we identified obstacles between the rows (such as stones and clods) from the LiDAR point cloud data to obtain auxiliary navigation information. Third, the auxiliary navigation information was used to supplement the visual information, which improved the recognition accuracy of inter-row navigation information in the middle and late stages of maize and provided a basis for the stable and efficient operation of inter-row plant protection robots in these stages. Experimental results from a data acquisition robot equipped with a camera and a LiDAR sensor demonstrate the efficacy and strong performance of the proposed method.
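As a hypothetical illustration of the supplementation step, the sketch below merges obstacles detected by the camera pipeline with obstacles inferred from ground-projected LiDAR returns inside the inter-row corridor, using a simple occupancy-grid count. All thresholds, coordinate frames, and function names are assumptions, not the paper's method.

```python
import numpy as np

def supplement_with_lidar(vision_obstacles, lidar_points, row_half_width=0.35,
                          grid=0.2, min_points=5):
    """Add LiDAR-detected obstacles to the vision result for one inter-row corridor.

    `vision_obstacles`: list of (x, y) ground-plane positions from the camera pipeline.
    `lidar_points`: (N, 2) ground-projected LiDAR returns in the robot frame
    (x forward, y lateral).  Points inside the corridor are binned into grid cells;
    any cell with at least `min_points` returns is treated as an obstacle the camera
    may have missed.  All thresholds are illustrative.
    """
    pts = np.asarray(lidar_points, dtype=float)
    in_corridor = pts[np.abs(pts[:, 1]) < row_half_width]
    obstacles = {tuple(p) for p in vision_obstacles}
    if len(in_corridor):
        cells, counts = np.unique(np.floor(in_corridor / grid).astype(int),
                                  axis=0, return_counts=True)
        for cell in cells[counts >= min_points]:
            obstacles.add(tuple((cell + 0.5) * grid))   # cell centre as obstacle position
    return sorted(obstacles)

if __name__ == "__main__":
    cam = [(1.2, 0.1)]                                  # one obstacle seen by the camera
    stone = np.random.normal(loc=(2.0, -0.1), scale=0.03, size=(20, 2))
    print(supplement_with_lidar(cam, stone))            # camera obstacle + LiDAR-found stone
```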
Affiliation(s)
- Zhiqiang Li
- College of Engineering, Anhui Agricultural University, Hefei, China
- Anhui Intelligent Agricultural Machinery Equipment Engineering Laboratory, Hefei, China
- Dongbo Xie
- College of Engineering, Anhui Agricultural University, Hefei, China
- Anhui Intelligent Agricultural Machinery Equipment Engineering Laboratory, Hefei, China
- Lichao Liu
- College of Engineering, Anhui Agricultural University, Hefei, China
- Anhui Intelligent Agricultural Machinery Equipment Engineering Laboratory, Hefei, China
- Hai Wang
- College of Engineering, Anhui Agricultural University, Hefei, China
- Discipline of Engineering and Energy, Murdoch University, Perth, WA, Australia
- Liqing Chen
- College of Engineering, Anhui Agricultural University, Hefei, China
- Anhui Intelligent Agricultural Machinery Equipment Engineering Laboratory, Hefei, China
9
Hou S, Xiao S, Dong W, Qu J. Multi-Level Features Fusion via Cross-Layer Guided Attention for Hyperspectral Pansharpening. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.07.071.