1
Xiong B, Li D, Zhang Q, Desneux N, Luo C, Hu Z. Image detection model construction of Apolygus lucorum and Empoasca spp. based on improved YOLOv5. Pest Management Science 2024; 80:2577-2586. PMID: 38243837. DOI: 10.1002/ps.7964. Received 07/25/2023; revised 10/21/2023; accepted 01/04/2024.
Abstract
BACKGROUND The polyphagous mirid bug Apolygus lucorum (Meyer-Dür) and the green leafhopper Empoasca spp. Walsh are small, widely distributed insects and important pests of many economically important crops, especially kiwi. Conventional monitoring methods are expensive, laborious and error-prone, and current deep learning methods recognize these pests poorly. This study proposes a new deep-learning-based YOLOv5s_HSSE model to automatically detect and count them on sticky card traps. RESULTS A database of 1502 images was built, all collected from kiwi orchards at multiple locations and times. We trained the YOLOv5s model to detect and count the two pests, then replaced its activation function with Hard-Swish, introduced the SIoU loss function, and added a squeeze-and-excitation attention mechanism, yielding the new YOLOv5s_HSSE model. On the test dataset this model reached a mean average precision of 95.9%, a recall of 93.9% and 155 frames per second, exceeding other single-stage deep-learning models such as SSD, YOLOv3 and YOLOv4. CONCLUSION The proposed YOLOv5s_HSSE model can identify and count A. lucorum and Empoasca spp., providing a new, efficient and accurate monitoring method. Pest detection will benefit from the broader application of deep learning. © 2024 Society of Chemical Industry.
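The squeeze-and-excitation (SE) attention added to YOLOv5s in this entry can be pictured as three steps: squeeze (global average pooling), excitation (two small fully connected layers), and channel-wise rescaling. The following is a minimal numpy sketch of those steps, not the authors' implementation; the weight shapes and reduction ratio are assumptions for the toy example.

```python
import numpy as np

def se_attention(x, w1, w2):
    """Squeeze-and-excitation channel attention on a (C, H, W) feature map.

    Squeeze: global average pooling over the spatial dimensions -> (C,).
    Excitation: FC + ReLU down to C // r, then FC + sigmoid back to C.
    Scale: reweight each channel of x by its learned importance in (0, 1).
    """
    z = x.mean(axis=(1, 2))                     # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)                 # FC + ReLU: (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # FC + sigmoid: (C,)
    return x * s[:, None, None]                 # channel-wise rescale

# Toy feature map: 8 channels, 4x4 spatial, reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1          # 8 -> 8 / 4 = 2
w2 = rng.standard_normal((8, 2)) * 0.1          # 2 -> 8
y = se_attention(x, w1, w2)
```

Because the sigmoid gate lies strictly between 0 and 1, the block can only attenuate channels, never amplify them, which is what lets the network emphasize informative channels relative to the rest.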
Affiliation(s)
- Bo Xiong
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Delu Li
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Qi Zhang
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Chen Luo
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Zuqing Hu
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
2
Wang J, Renninger HJ, Ma Q, Jin S. Measuring stomatal and guard cell metrics for plant physiology and growth using StoManager1. Plant Physiology 2024; 195:378-394. PMID: 38298139. DOI: 10.1093/plphys/kiae049. Received 11/30/2023; revised 01/08/2024; accepted 01/09/2024.
Abstract
Automated guard cell detection and measurement are vital for understanding plant physiological performance and ecological functioning in global water and carbon cycles. Most current methods for measuring guard cells and stomata are laborious, time-consuming, prone to bias, and limited in scale. We developed StoManager1, a high-throughput tool that combines geometric and mathematical algorithms with convolutional neural networks to automatically detect, count, and measure over 30 guard cell and stomatal metrics, including guard cell and stomatal area, length, width, the ratio of stomatal aperture area to guard cell area, orientation, stomatal evenness, divergence, and aggregation index. Combined with leaf functional traits, some of these StoManager1-measured metrics explained 90% and 82% of the variance in tree biomass and intrinsic water use efficiency (iWUE) in hardwoods, making them substantial factors in leaf physiology and tree growth. StoManager1 demonstrated exceptional precision and recall (mAP@0.5 over 0.96), effectively capturing diverse stomatal properties across more than 100 species. StoManager1 automates the measurement of leaf stomata and guard cells, enabling broader exploration of stomatal control in plant growth and adaptation to environmental stress and climate change. This has implications for global gross primary productivity (GPP) modeling and estimation, as integrating stomatal metrics can enhance predictions of plant growth and resource usage worldwide. Easily accessible open-source code and standalone Windows executable applications are available on GitHub (https://github.com/JiaxinWang123/StoManager1) and Zenodo (https://doi.org/10.5281/zenodo.7686022).
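The abstract lists an "aggregation index" among the spatial metrics but does not define it. One standard formulation for 2D point patterns is the Clark-Evans nearest-neighbour index, sketched below as a hypothetical illustration; it may differ from what StoManager1 actually computes.

```python
import numpy as np

def clark_evans_index(points, area):
    """Clark-Evans aggregation index R for a 2D point pattern.

    R = observed mean nearest-neighbour distance / expected distance
    under complete spatial randomness (0.5 / sqrt(density)).
    R ~ 1: random; R < 1: clustered; R > 1: evenly spaced.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore self-distances
    observed = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / area)
    return observed / expected

# A perfectly regular 5x5 grid of "stomata" on a 100x100 image patch:
# spacing 20 px, so every nearest-neighbour distance is exactly 20.
xs, ys = np.meshgrid(np.arange(10, 100, 20), np.arange(10, 100, 20))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
r_grid = clark_evans_index(grid, area=100 * 100)
```

For this regular grid the expected random-pattern distance is 10 px against an observed 20 px, so R = 2, i.e. strongly even spacing, which is the kind of "evenness" signal such a metric captures.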
Affiliation(s)
- Jiaxin Wang
- Department of Forestry, Mississippi State University, Mississippi State, MS 39762, USA
- Heidi J Renninger
- Department of Forestry, Mississippi State University, Mississippi State, MS 39762, USA
- Qin Ma
- School of Geography, Nanjing Normal University, Nanjing 210023, China
- Shichao Jin
- Plant Phenomics Research Centre, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Centre for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
3
Ahmed N, Zhang B, Deng L, Bozdar B, Li J, Chachar S, Chachar Z, Jahan I, Talpur A, Gishkori MS, Hayat F, Tu P. Advancing horizons in vegetable cultivation: a journey from age-old practices to high-tech greenhouse cultivation - a review. Frontiers in Plant Science 2024; 15:1357153. PMID: 38685958. PMCID: PMC11057267. DOI: 10.3389/fpls.2024.1357153. Received 12/17/2023; accepted 03/20/2024.
Abstract
Vegetable cultivation stands as a pivotal element in the agricultural transformation, illustrating a complex interplay between technological advancements, evolving environmental perspectives, and the growing global demand for food. This comprehensive review surveys the broad spectrum of developments in modern vegetable cultivation practices. Rooted in historical traditions, our exploration commences with conventional cultivation methods and traces the progression toward contemporary practices, emphasizing the critical shifts that have refined techniques and outcomes. A significant focus is placed on the evolution of seed selection and quality assessment methods, underlining the growing importance of seed treatments in enhancing both germination and plant growth. Transitioning from seeds to the soil, we investigate the transformative journey from traditional soil-based cultivation to the adoption of soilless cultures and the utilization of sustainable substrates like biochar and coir. The review also examines modern environmental controls, highlighting the use of advanced greenhouse technologies and artificial intelligence in optimizing plant growth conditions. We underscore the increasing sophistication of water management strategies, from advanced irrigation systems to intelligent moisture sensing. Additionally, this paper discusses the intricate aspects of precision fertilization, integrated pest management, and the expanding influence of plant growth regulators in vegetable cultivation. A special segment is dedicated to technological innovations, such as the integration of drones, robots, and state-of-the-art digital monitoring systems, in the cultivation process. While acknowledging these advancements, the review also realistically addresses the challenges and economic considerations involved in adopting cutting-edge technologies.
In summary, this review not only provides a comprehensive guide to the current state of vegetable cultivation but also serves as a forward-looking reference, emphasizing the critical role of continuous research and the anticipation of future developments in this field.
Affiliation(s)
- Nazir Ahmed
- College of Horticulture and Landscape Architecture, Zhongkai University of Agriculture and Engineering, Guangzhou, Guangdong, China
- Baige Zhang
- Key Laboratory for New Technology Research of Vegetables, Vegetable Research Institute, Guangdong Academy of Agricultural Science, Guangzhou, China
- Lansheng Deng
- College of Natural Resources and Environment, South China Agricultural University, Guangzhou, China
- Bilquees Bozdar
- Faculty of Crop Production, Sindh Agriculture University, Tandojam, Pakistan
- Juan Li
- College of Horticulture and Landscape Architecture, Zhongkai University of Agriculture and Engineering, Guangzhou, Guangdong, China
- Sadaruddin Chachar
- College of Horticulture and Landscape Architecture, Zhongkai University of Agriculture and Engineering, Guangzhou, Guangdong, China
- Zaid Chachar
- College of Agriculture and Biology, Zhongkai University of Agriculture and Engineering, Guangzhou, Guangdong, China
- Itrat Jahan
- Faculty of Crop Production, Sindh Agriculture University, Tandojam, Pakistan
- Afifa Talpur
- Faculty of Crop Production, Sindh Agriculture University, Tandojam, Pakistan
- Faisal Hayat
- College of Horticulture and Landscape Architecture, Zhongkai University of Agriculture and Engineering, Guangzhou, Guangdong, China
- Panfeng Tu
- College of Horticulture and Landscape Architecture, Zhongkai University of Agriculture and Engineering, Guangzhou, Guangdong, China
4
Kang R, Huang J, Zhou X, Ren N, Sun S. Toward Real Scenery: A Lightweight Tomato Growth Inspection Algorithm for Leaf Disease Detection and Fruit Counting. Plant Phenomics 2024; 6:0174. PMID: 38629080. PMCID: PMC11018486. DOI: 10.34133/plantphenomics.0174. Received 11/28/2023; accepted 03/19/2024.
Abstract
The deployment of intelligent surveillance systems to monitor tomato plant growth poses substantial challenges due to the dynamic nature of disease patterns and the complexity of environmental conditions such as background and lighting. In this study, an integrated cascade framework that synergizes detectors and trackers was introduced for the simultaneous identification of tomato leaf diseases and fruit counting. We used an autonomous robot equipped with a smartphone camera to collect images of leaf diseases and fruits in greenhouses. We then developed an improved deep learning network, YOLO-TGI, incorporating Ghost and CBAM modules, which was trained and tested alongside premier lightweight detection models such as YOLOX and NanoDet for evaluating leaf health conditions. To cascade with the various base detectors, we integrated state-of-the-art trackers such as ByteTrack, Motpy, and FairMOT to enable fruit counting in video streams. Experimental results indicated that the combination of YOLO-TGI and ByteTrack achieved the most robust performance. In particular, YOLO-TGI-N emerged as the model with the smallest computational demands, registering the lowest FLOPs at 2.05 G and checkpoint weights of 3.7 M, while still maintaining a mAP of 0.72 for leaf disease detection. For fruit counting, the combination of YOLO-TGI-S and ByteTrack achieved the best R2 of 0.93 and the lowest RMSE of 9.17, with an inference speed double that of the YOLOX series and 2.5 times faster than the NanoDet series. The developed network framework is a potential solution for researchers, facilitating the deployment of similar surveillance models for a broad spectrum of fruit and vegetable crops.
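Counting fruits from a detector-plus-tracker cascade reduces to counting the distinct tracks opened over a video: a detection that matches no existing track starts a new fruit identity. The sketch below deliberately substitutes a naive greedy IoU association for ByteTrack, so it only illustrates the counting principle, not the paper's tracker.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_fruits(frames, iou_thresh=0.3):
    """frames: list of per-frame detection lists. Each detection either
    extends the best-overlapping open track or opens a new one; the final
    fruit count is the number of tracks ever opened."""
    tracks = []                      # last known box of every track ever opened
    for dets in frames:
        assigned = set()             # tracks already matched in this frame
        for det in dets:
            best, best_iou = None, iou_thresh
            for t, box in enumerate(tracks):
                if t not in assigned and iou(det, box) > best_iou:
                    best, best_iou = t, iou(det, box)
            if best is None:
                tracks.append(det)   # a new fruit enters the scene
            else:
                assigned.add(best)
                tracks[best] = det   # same fruit, slightly moved
    return len(tracks)

# Two frames: the same two fruits persist (small shift), one new fruit appears.
frame1 = [(0, 0, 10, 10), (50, 50, 60, 60)]
frame2 = [(2, 2, 12, 12), (50, 50, 60, 60), (100, 100, 110, 110)]
total = count_fruits([frame1, frame2])
```

Real trackers add motion prediction and confidence-tiered matching (the core idea behind ByteTrack) so that briefly occluded fruits are not double-counted; the greedy version above would open a spurious track whenever overlap drops below the threshold.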
Affiliation(s)
- Rui Kang
- Institute of Agricultural Information, Jiangsu Academy of Agricultural Sciences, Nanjing 210044, China
- Bioresource Engineering Department, McGill University, Montreal, QC H9X 3V9, Canada
- Jiaxin Huang
- Institute of Agricultural Information, Jiangsu Academy of Agricultural Sciences, Nanjing 210044, China
- Xuehai Zhou
- Bioresource Engineering Department, McGill University, Montreal, QC H9X 3V9, Canada
- Ni Ren
- Institute of Agricultural Information, Jiangsu Academy of Agricultural Sciences, Nanjing 210044, China
- Shangpeng Sun
- Bioresource Engineering Department, McGill University, Montreal, QC H9X 3V9, Canada
5
Qu H, Zheng C, Ji H, Huang R, Wei D, Annis S, Drummond F. A deep multi-task learning approach to identifying mummy berry infection sites, the disease stage, and severity. Frontiers in Plant Science 2024; 15:1340884. PMID: 38606063. PMCID: PMC11007028. DOI: 10.3389/fpls.2024.1340884. Received 11/19/2023; accepted 02/26/2024.
Abstract
Introduction Mummy berry is a serious disease that can cause up to 70 percent yield loss in lowbush blueberries. Practical mummy berry disease detection, stage classification and severity estimation remain great challenges for computer vision-based approaches, because images taken in lowbush blueberry fields are usually a mixture of different plant parts (leaves, buds, flowers and fruits) against a very complex background. Typical problems hindering this effort include data scarcity due to the high cost of manual labelling; tiny, low-contrast disease features that are interfered with and occluded by healthy plant parts; and over-complicated deep neural networks that make deployment of a predictive system difficult. Methods Using real, raw blueberry field images, this research proposed a deep multi-task learning (MTL) approach to simultaneously accomplish three disease detection tasks: identification of infection sites, classification of disease stage, and severity estimation. Further incorporating novel superimposed attention mechanism modules and grouped convolutions into the deep neural network enabled disease feature extraction from both channel and spatial perspectives, achieving better detection performance in open and complex environments while having a lower computational cost and faster convergence rate. Results Experimental results demonstrated that our approach achieved higher detection efficiency than state-of-the-art deep learning models in terms of detection accuracy, with three main advantages: 1) field images mixed with various types of lowbush blueberry plant organs under a complex background can be used for disease detection; 2) parameter sharing among the different tasks greatly reduced the required number of training samples and saved 60% of the training time compared with training the three tasks separately; and 3) only one-sixth of the network parameter size (23.98M vs. 138.36M) and one-fifteenth of the computational cost (1.13G vs. 15.48G FLOPs) were needed compared with the most popular convolutional neural network, VGG16. Discussion These features make our solution very promising for future mobile deployment, such as a drone-carried task unit for real-time field surveillance. As an automatic approach to fast disease diagnosis, it can be a useful technical tool providing growers real-time disease information that can prevent further disease transmission and more severe effects on yield due to fruit mummification.
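The parameter sharing that the authors credit for the smaller training set and shorter training time can be pictured as one shared feature extractor feeding three small task heads, so the expensive features are computed and learned once. The numpy sketch below is purely schematic: the layer sizes and head structure are invented for illustration and bear no relation to the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Shared encoder: one weight matrix whose parameters serve all three tasks,
# mirroring the parameter sharing described in the abstract.
W_shared = rng.standard_normal((16, 64)) * 0.1

# Task-specific heads (sizes are illustrative): infection-site detection,
# disease-stage classification, and scalar severity regression.
W_site = rng.standard_normal((4, 16)) * 0.1
W_stage = rng.standard_normal((3, 16)) * 0.1
W_sev = rng.standard_normal((1, 16)) * 0.1

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def forward(x):
    """One shared feature pass, then three cheap task-specific outputs."""
    h = np.maximum(W_shared @ x, 0.0)            # shared features, computed once
    site_p = softmax(W_site @ h)                 # P(infection site type)
    stage_p = softmax(W_stage @ h)               # P(disease stage)
    severity = float((W_sev @ h)[0])             # severity estimate
    return site_p, stage_p, severity

site_p, stage_p, severity = forward(rng.standard_normal(64))
```

Training jointly means one gradient pass updates `W_shared` with signal from all three losses, which is why the shared formulation needs fewer labelled samples than three independent networks.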
Affiliation(s)
- Hongchun Qu
- Institute of Ecological Safety and College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- College of Computer Science, Chongqing University of Posts and Telecommunications, Chongqing, China
- Chaofang Zheng
- Institute of Ecological Safety and College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- Hao Ji
- Institute of Ecological Safety and College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- College of Computer Science, Chongqing University of Posts and Telecommunications, Chongqing, China
- Rui Huang
- College of Computer Science, Chongqing University of Posts and Telecommunications, Chongqing, China
- Dianwen Wei
- Institute of Natural Resources and Ecology, Heilongjiang Academy of Sciences, Harbin, China
- Seanna Annis
- School of Biology and Ecology, University of Maine, Orono, ME, United States
- Cooperative Extension, University of Maine, Orono, ME, United States
- Francis Drummond
- School of Biology and Ecology, University of Maine, Orono, ME, United States
- Cooperative Extension, University of Maine, Orono, ME, United States
6
Khan A, Malebary SJ, Dang LM, Binzagr F, Song HK, Moon H. AI-Enabled Crop Management Framework for Pest Detection Using Visual Sensor Data. Plants (Basel) 2024; 13:653. PMID: 38475499. DOI: 10.3390/plants13050653. Received 01/23/2024; revised 02/23/2024; accepted 02/23/2024.
Abstract
Our research addresses the challenge of crop diseases and pest infestations in agriculture by utilizing unmanned aerial vehicle (UAV) technology for improved crop monitoring and enhanced detection and classification of agricultural pests. Traditional approaches often require arduous manual feature extraction or computationally demanding deep learning (DL) techniques. To address this, we introduce an optimized model tailored specifically to UAV-based applications. Our alterations to the YOLOv5s model, which include advanced attention modules, expanded cross-stage partial network (CSP) modules, and refined multiscale feature extraction mechanisms, enable precise pest detection and classification. Inspired by the efficiency and versatility of UAVs, our study strives to advance pest management in sustainable agriculture while also detecting and preventing crop diseases. We conducted rigorous testing on a medium-scale dataset covering five agricultural pests, namely ants, grasshoppers, palm weevils, shield bugs, and wasps. Our comprehensive experimental analysis shows superior performance compared with various YOLOv5 model versions: the proposed model obtained an average precision of 96.0%, an average recall of 93.0%, and a mean average precision (mAP) of 95.0%. Furthermore, the inherent capabilities of UAVs, combined with the YOLOv5s model tested here, could offer a reliable solution for real-time pest detection, demonstrating significant potential to optimize and improve agricultural production within a drone-centric ecosystem.
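The precision, recall, and mAP figures quoted in abstracts like this one all derive from the precision-recall curve of confidence-ranked detections. A minimal sketch of per-class average precision with all-point interpolation follows; mAP is simply the mean of this value over classes.

```python
import numpy as np

def average_precision(scores, labels):
    """All-point-interpolated AP for one class.

    scores: detection confidences; labels: 1 if the detection matched a
    ground-truth object (true positive), 0 otherwise. Detections are
    sorted by confidence, the threshold is swept, and precision is
    integrated over recall under a monotone precision envelope.
    """
    order = np.argsort(scores)[::-1]
    tp = np.asarray(labels, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / tp.sum()
    precision = tp_cum / (tp_cum + fp_cum)
    # Envelope: make precision monotonically non-increasing, then integrate.
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * precision))

# Four detections, ranked by confidence: TP, TP, FP, TP.
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1])
```

In detection benchmarks a prediction counts as a true positive only when its IoU with an unmatched ground-truth box exceeds a threshold (0.5 for the common mAP@0.5); that matching step is omitted here for brevity.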
Affiliation(s)
- Asma Khan
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Sharaf J Malebary
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia
- L Minh Dang
- Department of Information and Communication Engineering and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
- Faisal Binzagr
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia
- Hyoung-Kyu Song
- Department of Information and Communication Engineering and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
- Hyeonjoon Moon
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
7
Liu Y, Yu Q, Geng S. Real-time and lightweight detection of grape diseases based on Fusion Transformer YOLO. Frontiers in Plant Science 2024; 15:1269423. PMID: 38463562. PMCID: PMC10920279. DOI: 10.3389/fpls.2024.1269423. Received 07/30/2023; accepted 02/07/2024.
Abstract
Introduction Grapes are prone to various diseases throughout their growth cycle, and failure to control these diseases promptly can reduce production or even cause complete crop failure. Effective disease control is therefore essential for maximizing grape yield, and accurate disease identification plays a crucial role in this process. In this paper, we propose a real-time, lightweight detection model called Fusion Transformer YOLO (FTR-YOLO) for the detection of four grape diseases. The primary source of the dataset is RGB images acquired from plantations in North China. Methods First, we introduce a lightweight, high-performance VoVNet that employs ghost convolutions and a learnable downsampling layer. This backbone is further improved by integrating effective squeeze-and-excitation blocks and residual connections into the OSA module. These enhancements contribute to improved detection accuracy while keeping the network lightweight. Second, an improved dual-flow PAN+FPN structure with a real-time Transformer is adopted in the neck component, incorporating a 2D position embedding and a single-scale Transformer encoder into the last feature map. This modification enables real-time performance and improved accuracy in detecting small targets. Finally, we adopt a decoupled head based on an improved task-aligned predictor in the head component, which balances accuracy and speed. Results Experimental results demonstrate that FTR-YOLO achieves high performance across evaluation metrics, with a mean average precision (mAP) of 90.67%, 44 frames per second (FPS), and a parameter size of 24.5M. Conclusion FTR-YOLO provides a real-time, lightweight solution for the detection of grape diseases and can effectively assist farmers in detecting them.
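The "2D position embedding" added to the last feature map can take several forms; a common choice, shown here as an assumption rather than this paper's exact scheme, is a fixed sinusoidal embedding in which half the channels encode the row index and half the column index, so the permutation-invariant Transformer encoder can tell spatial locations apart.

```python
import numpy as np

def pos_embed_2d(h, w, dim):
    """Sinusoidal 2D position embedding of shape (h * w, dim).

    The first dim/2 channels encode the row (y) index and the last dim/2
    encode the column (x) index, each as interleaved sin/cos at
    geometrically spaced frequencies (the scheme used for 1D positions
    in the original Transformer, applied per axis).
    """
    assert dim % 4 == 0, "dim must split evenly into y/x sin/cos parts"
    d = dim // 2
    freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))      # (d/2,)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    def enc(pos):
        ang = pos.ravel()[:, None] * freq[None, :]        # (h*w, d/2)
        return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

    return np.concatenate([enc(ys), enc(xs)], axis=1)     # (h*w, dim)

# Embedding for a 4x5 feature map with 16 channels, added to the
# flattened features before the Transformer encoder.
pe = pos_embed_2d(4, 5, 16)
```

Because the embedding is fixed rather than learned, it adds no parameters, which fits the lightweight design goal described in the abstract.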
Affiliation(s)
- Yifan Liu
- College of Information Technology Engineering, Tianjin University of Technology and Education, Tianjin, China
- Qiudong Yu
- College of Information Technology Engineering, Tianjin University of Technology and Education, Tianjin, China
- Shuze Geng
- College of Information Technology Engineering, Tianjin University of Technology and Education, Tianjin, China
8
Li R, He Y, Li Y, Qin W, Abbas A, Ji R, Li S, Wu Y, Sun X, Yang J. Identification of cotton pest and disease based on CFNet-VoV-GCSP-LSKNet-YOLOv8s: a new era of precision agriculture. Frontiers in Plant Science 2024; 15:1348402. PMID: 38444536. PMCID: PMC10913016. DOI: 10.3389/fpls.2024.1348402. Received 12/02/2023; accepted 01/24/2024.
Abstract
Introduction This study addresses the challenge of detecting cotton leaf pests and diseases under natural conditions, where traditional methods struggle, highlighting the need for improved identification techniques. Methods The proposed method involves a new model named CFNet-VoV-GCSP-LSKNet-YOLOv8s, an enhancement of YOLOv8s with several key modifications: (1) a CFNet module, which replaces all C2F modules in the backbone network to improve multi-scale object feature fusion; (2) a VoV-GCSP module, which replaces the C2F modules in the YOLOv8s head, balancing model accuracy with reduced computational load; (3) the LSKNet attention mechanism, integrated into the small-object layers of both the backbone and head to enhance the detection of small objects; and (4) the XIoU loss function, introduced to improve the model's convergence. Results The proposed method achieves a precision (P) of 89.9%, a recall (R) of 90.7%, and a mean average precision (mAP@0.5) of 93.7%, with a memory footprint of 23.3 MB and a detection time of 8.01 ms. Compared with other models, including YOLOv5s, YOLOX, YOLOv7, Faster R-CNN, YOLOv8n, YOLOv7-tiny, CenterNet, EfficientDet, and YOLOv8s, it shows an average accuracy improvement ranging from 1.2% to 21.8%. Discussion The study demonstrates that the CFNet-VoV-GCSP-LSKNet-YOLOv8s model can effectively identify cotton pests and diseases in complex environments. This method provides a valuable technical resource for the identification and control of cotton pests and diseases, indicating significant improvements over existing methods.
Affiliation(s)
- Rujia Li
- School of Big Data, Yunnan Agricultural University, Kunming, China
- Yiting He
- School of Big Data, Yunnan Agricultural University, Kunming, China
- Yadong Li
- School of Big Data, Yunnan Agricultural University, Kunming, China
- Weibo Qin
- College of Plant Protection, Jilin Agricultural University, Changchun, China
- Arzlan Abbas
- College of Plant Protection, Jilin Agricultural University, Changchun, China
- Rongbiao Ji
- School of Big Data, Yunnan Agricultural University, Kunming, China
- Shuang Li
- School of Big Data, Yunnan Agricultural University, Kunming, China
- Yehui Wu
- School of Big Data, Yunnan Agricultural University, Kunming, China
- Xiaohai Sun
- Jilin Haicheng Technology Co., Ltd., Changchun, China
- Jianping Yang
- School of Big Data, Yunnan Agricultural University, Kunming, China
9
Khan A, Hassan T, Shafay M, Fahmy I, Werghi N, Mudigansalage S, Hussain I. Tomato maturity recognition with convolutional transformers. Scientific Reports 2023; 13:22885. PMID: 38129680. PMCID: PMC10739758. DOI: 10.1038/s41598-023-50129-w. Received 08/01/2023; accepted 12/15/2023. Open access.
Abstract
Tomatoes are a major crop worldwide, and accurately classifying their maturity is important for many agricultural applications, such as harvesting, grading, and quality control. In this paper, the authors propose a novel method for tomato maturity classification using a convolutional transformer, a hybrid architecture that combines the strengths of convolutional neural networks (CNNs) and transformers. This study also introduces a new tomato dataset named KUTomaData, explicitly designed to train deep-learning models for tomato segmentation and classification. KUTomaData is a compilation of images sourced from a greenhouse in the UAE, with approximately 700 images available for training and testing. The dataset covers various lighting conditions and viewing perspectives and employs different mobile camera sensors, distinguishing it from existing datasets. The contributions of this paper are threefold: first, a novel method for tomato maturity classification using a modular convolutional transformer; second, a new tomato image dataset containing images of tomatoes at different maturity levels; and third, evidence that the convolutional transformer outperforms state-of-the-art methods for tomato maturity classification. The effectiveness of the proposed framework in handling cluttered and occluded tomato instances was evaluated using two additional public datasets, Laboro Tomato and Rob2Pheno Annotated Tomato, as benchmarks. The evaluation results across these three datasets demonstrate the exceptional performance of the proposed framework, surpassing the state-of-the-art by 58.14%, 65.42%, and 66.39% in terms of mean average precision scores for KUTomaData, Laboro Tomato, and Rob2Pheno Annotated Tomato, respectively. This work can potentially improve the efficiency and accuracy of tomato harvesting, grading, and quality control processes.
Affiliation(s)
- Asim Khan
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, UAE
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Taimur Hassan
- Department of Electrical, Computer and Biomedical Engineering, Abu Dhabi University, Abu Dhabi, UAE
- Muhammad Shafay
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Israa Fahmy
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Naoufel Werghi
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Seneviratne Mudigansalage
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, UAE
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
- Irfan Hussain
- Department of Mechanical Engineering, Khalifa University, Abu Dhabi, UAE
- Khalifa University Center for Robotics and Autonomous Systems (KUCARS), Khalifa University, Abu Dhabi, UAE
10
Zhang T, Wang D. Classification of crop disease-pest questions based on BERT-BiGRU-CapsNet with attention pooling. Frontiers in Plant Science 2023; 14:1300580. PMID: 38143585. PMCID: PMC10740160. DOI: 10.3389/fpls.2023.1300580. Received 09/23/2023; accepted 11/22/2023.
Abstract
Crop disease-pest question classification is an essential part of a pest-knowledge intelligent question answering system. A crop disease-pest question classification method, BERT-BiGRU-CapsNet with attention pooling (BBGCAP), is proposed on the basis of bidirectional encoder representations from transformers (BERT), a bidirectional gated recurrent unit (BiGRU), and a capsule network (CapsNet). In BBGCAP, the unstructured text data are vectorized with BERT, BiGRU extracts the deep features of the text, attention pooling assigns corresponding weights to the extracted deep information, and CapsNet routes the result to the correct category. BBGCAP is a synthetic model integrating the advantages of BERT, BiGRU, CapsNet, and attention pooling. Experimental results on the cucumber disease-pest question database show that the proposed method is superior to methods based on traditional template matching, support vector machines (SVM), and CNN-LSTM (convolutional neural network with long short-term memory), with precision, recall, and F1 all above 92.15%. This method provides technical support for intelligent question answering systems for crop diseases and pests.
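The attention-pooling step described here, scoring each BiGRU output against a learned query vector and taking the softmax-weighted sum, compresses a variable-length sequence into one fixed vector while letting informative words dominate. The sketch below is illustrative only; the query vector and dimensions are invented, not taken from the paper.

```python
import numpy as np

def attention_pool(hidden, query):
    """Attention pooling over a sequence of hidden states.

    hidden: (T, D) timestep representations (e.g. BiGRU outputs).
    query:  (D,) learned vector that scores each timestep's relevance.
    Returns the (T,) softmax weights and the (D,) weighted-sum summary.
    """
    scores = hidden @ query                     # one relevance score per step
    e = np.exp(scores - scores.max())           # numerically stable softmax
    weights = e / e.sum()                       # (T,), sums to 1
    return weights, weights @ hidden            # pooled summary: (D,)

rng = np.random.default_rng(7)
hidden = rng.standard_normal((6, 8))            # e.g. 6 timesteps, dim 8
query = rng.standard_normal(8)
w, pooled = attention_pool(hidden, query)
```

Unlike mean or max pooling, the weights are input-dependent, so a single disease-name token in a question can receive most of the mass; the pooled vector is then what a downstream classifier (here, the capsule network) consumes.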
Affiliation(s)
- Ting Zhang
- College of Computing, Xijing University, Xi’an, China
11
Popescu D, Dinca A, Ichim L, Angelescu N. New trends in detection of harmful insects and pests in modern agriculture using artificial neural networks. A review. FRONTIERS IN PLANT SCIENCE 2023; 14:1268167. [PMID: 38023916 PMCID: PMC10652400 DOI: 10.3389/fpls.2023.1268167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Accepted: 10/11/2023] [Indexed: 12/01/2023]
Abstract
Modern and precision agriculture is constantly evolving, and the use of technology has become a critical factor in improving crop yields and protecting plants from harmful insects and pests. The use of neural networks is emerging as a new trend in modern agriculture that enables machines to learn and recognize patterns in data. In recent years, researchers and industry experts have been exploring the use of neural networks for detecting harmful insects and pests in crops, allowing farmers to act and mitigate damage. This paper provides an overview of new trends in modern agriculture for harmful insect and pest detection using neural networks. Using a systematic review, the benefits and challenges of this technology are highlighted, as well as the various techniques researchers are adopting to improve its effectiveness. Specifically, the review focuses on the use of ensembles of neural networks, pest databases, modern software, and innovative modified architectures for pest detection. The review is based on the analysis of research papers published between 2015 and 2022, with the new trends analyzed over 2020-2022. The study concludes by emphasizing the significance of ongoing research and development of neural-network-based pest detection systems to maintain sustainable and efficient agricultural production.
Affiliation(s)
- Dan Popescu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
- Alexandru Dinca
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
- Loretta Ichim
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
- Nicoleta Angelescu
- Faculty of Electrical Engineering, Electronics, and Information Technology, University Valahia of Targoviste, Targoviste, Romania
12
Li X, Wang L, Miao H, Zhang S. Aphid Recognition and Counting Based on an Improved YOLOv5 Algorithm in a Climate Chamber Environment. INSECTS 2023; 14:839. [PMID: 37999038 PMCID: PMC10671967 DOI: 10.3390/insects14110839] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Revised: 10/23/2023] [Accepted: 10/26/2023] [Indexed: 11/25/2023]
Abstract
Due to changes in light intensity, varying degrees of aphid aggregation, and the small scale of the targets in the climate chamber environment, accurately identifying and counting aphids remains a challenge. In this paper, an improved CNN-based YOLOv5 aphid detection model is proposed for aphid recognition and counting. First, to reduce overfitting caused by insufficient data, the proposed model uses an image enhancement method combining Mosaic and GridMask to expand the aphid dataset. Second, a convolutional block attention module (CBAM) is added to the backbone to improve the recognition accuracy of small aphid targets. Subsequently, the feature fusion method of the bi-directional feature pyramid network (BiFPN) is employed to enhance the YOLOv5 neck, further improving recognition accuracy and speed; in addition, a Transformer structure is introduced in front of the detection head to investigate the impact of aphid aggregation and light intensity on recognition accuracy. Experiments have shown that, through the fusion of the proposed methods, the model's recognition accuracy and recall rate reach 99.1%, mAP@0.5 reaches 99.3%, and the inference time is 9.4 ms, which is significantly better than other YOLO-series networks. Moreover, it is robust in actual recognition tasks and can provide a reference for pest prevention and control in climate chambers.
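As a rough illustration of the GridMask half of the augmentation step described above, the sketch below zeroes out a regular grid of square patches. The parameter names (`d` for grid period, `ratio` for masked fraction) and the masking layout are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def gridmask(img, d=40, ratio=0.5, seed=0):
    """GridMask-style augmentation sketch: zero out a regular grid of
    square patches. `d` is the grid period in pixels; `ratio` is the
    fraction of each period that is masked (both illustrative)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    mask = np.ones((h, w), dtype=img.dtype)
    k = int(d * ratio)              # side length of each masked square
    ox, oy = rng.integers(0, d, 2)  # random grid offset
    for y in range(-d + int(oy), h, d):
        for x in range(-d + int(ox), w, d):
            mask[max(y, 0):max(y + k, 0), max(x, 0):max(x + k, 0)] = 0
    return img * mask[..., None] if img.ndim == 3 else img * mask
```

In a real pipeline this would be combined with Mosaic (stitching four training images) before being fed to the detector.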
Affiliation(s)
- Hong Miao
- College of Mechanical Engineering, Yangzhou University, Yangzhou 225127, China
13
Jin X, Jiao H, Zhang C, Li M, Zhao B, Liu G, Ji J. Hydroponic lettuce defective leaves identification based on improved YOLOv5s. FRONTIERS IN PLANT SCIENCE 2023; 14:1242337. [PMID: 37965019 PMCID: PMC10641003 DOI: 10.3389/fpls.2023.1242337] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 10/13/2023] [Indexed: 11/16/2023]
Abstract
Intelligent detection of defective leaves of hydroponic lettuce after harvesting is of great significance for ensuring the quality and value of the crop. To improve detection accuracy and efficiency, an image acquisition system was first designed and used to collect images of defective hydroponic lettuce leaves. Second, this study proposed the EBG_YOLOv5 model, which optimizes YOLOv5 by integrating the efficient channel attention (ECA) mechanism in the backbone and introducing a bidirectional feature pyramid and GSConv modules in the neck. Finally, the performance of the improved model was verified by ablation and comparison experiments. The results showed that the precision, recall, and mAP@0.5 of EBG_YOLOv5 were 0.1%, 2.0%, and 2.6% higher than those of YOLOv5s, respectively, while the model size, GFLOPs, and parameter count were reduced by 15.3%, 18.9%, and 16.3%. Meanwhile, EBG_YOLOv5 achieved higher accuracy with a smaller model size than other detection algorithms. This indicates that EBG_YOLOv5 can deliver better performance when applied to the detection of defective hydroponic lettuce leaves, and it can provide technical support for subsequent research on intelligent nondestructive lettuce grading equipment.
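The ECA mechanism mentioned above can be sketched as follows. This is a simplified NumPy illustration, not the paper's implementation: the learned 1D convolution across channel descriptors is replaced by a fixed averaging kernel, so the function name and kernel are assumptions.

```python
import numpy as np

def eca(feat, k=3):
    """Efficient channel attention (ECA) sketch for a (C, H, W) feature
    map: global-average-pool per channel, mix neighboring channel
    descriptors with a 1D kernel, gate channels with a sigmoid."""
    squeeze = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    kernel = np.full(k, 1.0 / k)                     # stand-in for learned 1D conv weights
    mixed = np.convolve(np.pad(squeeze, k // 2, mode="edge"), kernel, mode="valid")
    gate = 1.0 / (1.0 + np.exp(-mixed))              # sigmoid channel gates
    return feat * gate[:, None, None]                # rescale each channel
```

Unlike SE attention, ECA avoids the channel-reducing fully connected layers, which is why it adds almost no parameters.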
Affiliation(s)
- Xin Jin
- College of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang, China
- Science and Technology Innovation Center for Completed Set Equipment, Longmen Laboratory, Luoyang, China
- Haowei Jiao
- College of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang, China
- Chao Zhang
- College of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang, China
- Mingyong Li
- College of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang, China
- Bo Zhao
- State Key Laboratory of Soil - Plant - Machine System Technology, Chinese Academy of Agricultural Mechanization Sciences, Beijing, China
- Guowei Liu
- Eponic Agriculture Co., Ltd, Zhuhai, China
- Jiangtao Ji
- College of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang, China
14
Li X, Wang X, Ong P, Yi Z, Ding L, Han C. Fast Recognition and Counting Method of Dragon Fruit Flowers and Fruits Based on Video Stream. SENSORS (BASEL, SWITZERLAND) 2023; 23:8444. [PMID: 37896537 PMCID: PMC10611008 DOI: 10.3390/s23208444] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Revised: 09/28/2023] [Accepted: 10/03/2023] [Indexed: 10/29/2023]
Abstract
Dragon fruit (Hylocereus undatus) is a tropical and subtropical fruit that undergoes multiple ripening cycles throughout the year. Accurate monitoring of flower and fruit quantities at various stages is crucial for growers to estimate yields, plan orders, and implement effective management strategies. However, traditional manual counting methods are labor-intensive and inefficient. Deep learning techniques have proven effective for object recognition tasks, but limited research has been conducted on dragon fruit due to its unique stem morphology and the coexistence of flowers and fruits. An additional challenge lies in developing a lightweight recognition and tracking model that can be seamlessly integrated into mobile platforms, enabling on-site quantity counting. In this study, a video stream inspection method was proposed to classify and count dragon fruit flowers, immature fruits (green fruits), and mature fruits (red fruits) in a dragon fruit plantation. The approach involves three key steps: (1) utilizing the YOLOv5 network to identify the different dragon fruit categories, (2) employing the improved ByteTrack object tracking algorithm to assign a unique ID to each target and track its movement, and (3) defining a region of interest for precise classification and counting of dragon fruit across categories. Experimental results demonstrate recognition accuracies of 94.1%, 94.8%, and 96.1% for dragon fruit flowers, green fruits, and red fruits, respectively, with an overall average recognition accuracy of 95.0%. Furthermore, the counting accuracies for the three categories are 97.68%, 93.97%, and 91.89%, respectively. The proposed method achieves a counting speed of 56 frames per second on a 1080 Ti GPU. The findings establish the efficacy and practicality of this method for accurately counting dragon fruit and other fruit varieties.
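Step (3) above, counting each tracked object once as it enters the region of interest, can be sketched as follows. The track IDs are assumed to come from a tracker such as ByteTrack; the frame structure, function name, and ROI definition are illustrative assumptions, not the paper's implementation.

```python
def count_by_roi(frames, roi_y=200):
    """frames: list of dicts mapping track_id -> (cx, cy, label).
    An object is counted once, under its label, the first time its
    center crosses below a horizontal ROI line at y = roi_y."""
    counted = set()   # track IDs already counted
    totals = {}       # per-label counts
    for frame in frames:
        for tid, (cx, cy, label) in frame.items():
            if tid not in counted and cy >= roi_y:
                counted.add(tid)
                totals[label] = totals.get(label, 0) + 1
    return totals
```

Because counting is keyed on persistent track IDs rather than per-frame detections, an object seen in many frames still contributes exactly one count.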
Affiliation(s)
- Xiuhua Li
- School of Electrical Engineering, Guangxi University, Nanning 530004, China
- Guangxi Key Laboratory of Sugarcane Biology, Guangxi University, Nanning 530004, China
- Xiang Wang
- School of Electrical Engineering, Guangxi University, Nanning 530004, China
- Pauline Ong
- Faculty of Mechanical and Manufacturing Engineering, Universiti Tun Hussein Onn Malaysia, Parit Raja 86400, Johor, Malaysia
- Zeren Yi
- School of Electrical Engineering, Guangxi University, Nanning 530004, China
- Lu Ding
- School of Electrical Engineering, Guangxi University, Nanning 530004, China
- Chao Han
- School of Electrical Engineering, Guangxi University, Nanning 530004, China
15
Liu J, Wang X. Tomato disease object detection method combining prior knowledge attention mechanism and multiscale features. FRONTIERS IN PLANT SCIENCE 2023; 14:1255119. [PMID: 37877077 PMCID: PMC10590886 DOI: 10.3389/fpls.2023.1255119] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 09/21/2023] [Indexed: 10/26/2023]
Abstract
To address the insufficient accuracy of tomato disease object detection caused by dense target distributions, large scale variations, and the poor feature information of small objects in complex backgrounds, this study proposes a tomato disease object detection method that integrates a prior knowledge attention mechanism and multi-scale features (PKAMMF). Firstly, the visual features of tomato disease images are fused with prior knowledge through the prior knowledge attention mechanism to obtain enhanced visual features corresponding to tomato diseases. Secondly, a new feature fusion layer is constructed in the neck section to reduce feature loss. Furthermore, a specialized prediction layer designed to improve the model's ability to detect small targets is incorporated. Finally, a new loss function known as A-SIOU (Adaptive Structured IoU) is employed to optimize bounding-box regression. Experimental results on a self-built tomato disease dataset demonstrate the effectiveness of the proposed approach: it achieves a mean average precision (mAP) of 91.96%, a 3.86% improvement over baseline methods, with significant gains in the detection of multi-scale tomato disease objects.
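IoU-based bounding-box regression losses of the kind mentioned above all build on the same core quantity. The sketch below is a plain 1 - IoU loss, a stand-in only: the abstract does not spell out A-SIoU's adaptive angle/distance terms, so those are omitted.

```python
import numpy as np

def iou_loss(pred, target):
    """Generic IoU-based box regression loss (1 - IoU).
    Boxes are (x1, y1, x2, y2). Plain-IoU stand-in for losses such as
    the paper's A-SIoU, whose extra terms are not defined here."""
    # intersection rectangle
    x1 = np.maximum(pred[0], target[0])
    y1 = np.maximum(pred[1], target[1])
    x2 = np.minimum(pred[2], target[2])
    y2 = np.minimum(pred[3], target[3])
    inter = max(x2 - x1, 0.0) * max(y2 - y1, 0.0)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    union = area_p + area_t - inter
    return 1.0 - inter / union
```

Variants such as DIoU and SIoU add penalty terms (center distance, aspect ratio, angle) to this base so that non-overlapping boxes still receive useful gradients.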
Affiliation(s)
- Jun Liu
- Shandong Provincial University Laboratory for Protected Horticulture, Weifang University of Science and Technology, Weifang, China
- Xuewei Wang
- Shandong Provincial University Laboratory for Protected Horticulture, Weifang University of Science and Technology, Weifang, China
16
Zhang X, Bu J, Zhou X, Wang X. Automatic pest identification system in the greenhouse based on deep learning and machine vision. FRONTIERS IN PLANT SCIENCE 2023; 14:1255719. [PMID: 37841606 PMCID: PMC10568774 DOI: 10.3389/fpls.2023.1255719] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Accepted: 09/11/2023] [Indexed: 10/17/2023]
Abstract
Monitoring and understanding pest population dynamics is essential to greenhouse management for effectively preventing infestations and crop diseases. Image-based pest recognition approaches demonstrate the potential for real-time pest monitoring. However, pest detection models are challenged by the tiny pest scale and complex image backgrounds, so high-quality image datasets and reliable pest detection models are required. In this study, we developed a trapping system with yellow sticky paper and LED light for automatic pest image collection, and proposed an improved YOLOv5 model with copy-paste data augmentation for pest recognition. We evaluated the system in cherry tomato and strawberry greenhouses during 40 days of continuous monitoring. Six diverse pests, including tobacco whiteflies, leaf miners, aphids, fruit flies, thrips, and houseflies, were observed in the experiment. The results indicated that the proposed improved YOLOv5 model obtained an average recognition accuracy of 96% and was superior to the original YOLOv5 model in identifying nearby pests. Furthermore, the two greenhouses showed different pest numbers and population dynamics: the number of pests in the cherry tomato greenhouse was approximately 1.7 times that in the strawberry greenhouse. The developed time-series pest-monitoring system could provide insights for pest control and be further applied to other greenhouses.
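The copy-paste augmentation named above can be sketched minimally: crop a labeled pest patch from one trap image and paste it into another, producing a new training sample with a known box. The function name, arguments, and placement rule are illustrative assumptions rather than the paper's pipeline.

```python
import numpy as np

def copy_paste(src, src_box, dst, dst_xy):
    """Minimal copy-paste augmentation sketch: crop the patch at
    src_box = (x1, y1, x2, y2) from `src`, paste it into a copy of
    `dst` with its top-left corner at dst_xy, and return the new
    image together with the pasted patch's bounding box."""
    x1, y1, x2, y2 = src_box
    patch = src[y1:y2, x1:x2]
    px, py = dst_xy
    out = dst.copy()
    h, w = patch.shape[:2]
    out[py:py + h, px:px + w] = patch
    return out, (px, py, px + w, py + h)
```

A full implementation would also blend patch edges and reject placements that overlap existing annotations.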
Affiliation(s)
- Xiaochan Wang
- College of Engineering, Nanjing Agricultural University, Nanjing, China
17
Lv W, Chen T, Zeng Y, Liu W, Huang C. A challenge of deep-learning-based object detection for hair follicle dataset. J Cosmet Dermatol 2023; 22:2565-2578. [PMID: 37021716 DOI: 10.1111/jocd.15742] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 01/24/2023] [Accepted: 03/14/2023] [Indexed: 04/07/2023]
Abstract
BACKGROUND Deep-learning object detection has been applied in various industries, including healthcare, to address hair loss. METHODS In this paper, the YOLOv5 object detection algorithm was used to detect hair follicles in a small, specific image dataset collected with a specialized camera on the scalps of individuals of different ages, regions, and genders. The performance of YOLOv5 was compared with other popular object detection models. RESULTS YOLOv5 performed well in the detection of hair follicles, and the follicles were classified into five classes based on the number of hairs and the type of hair contained. In single-class object detection experiments, a smaller batch size and the smallest YOLOv5s model achieved the best results, with an mAP of 0.8151. In multiclass object detection experiments, the larger YOLOv5l model achieved the best results, and batch size affected the outcome of model training. CONCLUSION YOLOv5 is a promising algorithm for detecting hair follicles in a small, specific image dataset, and its performance is comparable to other popular object detection models. However, the challenges of small-scale data and sample imbalance need to be addressed to improve the performance of target detection algorithms.
Affiliation(s)
- Wei Lv
- Department of Alibaba Cloud Big Data Application, Zhuhai College of Science and Technology, Zhuhai, China
- Tao Chen
- Department of Alibaba Cloud Big Data Application, Zhuhai College of Science and Technology, Zhuhai, China
- Yifan Zeng
- Department of Alibaba Cloud Big Data Application, Zhuhai College of Science and Technology, Zhuhai, China
- Weihong Liu
- Department of Alibaba Cloud Big Data Application, Zhuhai College of Science and Technology, Zhuhai, China
- Chuying Huang
- Department of Alibaba Cloud Big Data Application, Zhuhai College of Science and Technology, Zhuhai, China
18
Liu B, Jia Y, Liu L, Dang Y, Song S. Skip DETR: end-to-end Skip connection model for small object detection in forestry pest dataset. FRONTIERS IN PLANT SCIENCE 2023; 14:1219474. [PMID: 37649993 PMCID: PMC10464905 DOI: 10.3389/fpls.2023.1219474] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Accepted: 07/25/2023] [Indexed: 09/01/2023]
Abstract
Object detection has a wide range of applications in forestry pest control. However, forest pest detection faces the challenges of a lack of datasets and low accuracy in small-target detection. DETR is an end-to-end, transformer-based object detection model with the advantages of a simple structure and easy migration. However, DETR's object queries are initialized randomly, which makes model convergence slow and unstable. At the same time, the correlation between different network layers is weak, so DETR is not ideal for small-object training and optimization. To alleviate these problems, we propose Skip DETR, which improves feature fusion between network layers through skip connections and the introduction of spatial pyramid pooling layers, thereby improving small-object detection results. We performed experiments on the Forestry Pest Dataset, and the results showed significant AP improvements for our method: at an IoU threshold of 0.5, our method is 7.7% higher than the baseline, and its small-object detection result is 6.1% higher. These results show that applying skip connections and a spatial pyramid pooling layer in the detection framework can effectively improve small-sample object detection.
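The spatial pyramid pooling layer mentioned above pools a feature map into fixed grids at several scales and concatenates the results, yielding a fixed-length representation regardless of input size. A minimal NumPy sketch (the grid levels are the classic 1/2/4 choice, an assumption, not necessarily the paper's configuration):

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Spatial pyramid pooling sketch: max-pool a (C, H, W) feature map
    into 1x1, 2x2 and 4x4 bins and concatenate per-channel maxima,
    giving a fixed-length vector for any H and W."""
    c, h, w = feat.shape
    out = []
    for n in levels:
        ys = np.linspace(0, h, n + 1).astype(int)   # bin edges along H
        xs = np.linspace(0, w, n + 1).astype(int)   # bin edges along W
        for i in range(n):
            for j in range(n):
                out.append(feat[:, ys[i]:ys[i+1], xs[j]:xs[j+1]].max(axis=(1, 2)))
    return np.concatenate(out)  # length = C * (1 + 4 + 16)
```

The multi-scale bins are what lets features from different receptive fields mix, which is the property Skip DETR exploits for small objects.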
Affiliation(s)
- Bing Liu
- College of Computer Science and Technology, Jilin University, Changchun, Jilin, China
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Yixin Jia
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Luyang Liu
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Yuanyuan Dang
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
- Shinan Song
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin, China
19
Gong X, Zhang S. An Analysis of Plant Diseases Identification Based on Deep Learning Methods. THE PLANT PATHOLOGY JOURNAL 2023; 39:319-334. [PMID: 37550979 PMCID: PMC10412967 DOI: 10.5423/ppj.oa.02.2023.0034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 05/25/2023] [Accepted: 06/12/2023] [Indexed: 08/09/2023]
Abstract
Plant disease is an important factor affecting crop yield. With their various types and complex conditions, plant diseases cause serious economic losses and constrain modern agriculture. Hence, rapid, accurate, and early identification of crop diseases is of great significance. Recent developments in deep learning, especially the convolutional neural network (CNN), have shown impressive performance in plant disease classification. However, most existing datasets for plant disease classification are collected in a single background environment rather than a real field environment. In addition, classification can only obtain the category of a single disease and fails to locate multiple different diseases, which limits practical application. Object detection methods based on CNNs can overcome these shortcomings and have broad application prospects. In this study, an annotated apple leaf disease dataset in a real field environment was first constructed to compensate for the lack of existing datasets. Moreover, the Faster R-CNN and YOLOv3 architectures were trained to detect apple leaf diseases in our dataset. Finally, comparative experiments were conducted and a variety of evaluation indicators were analyzed. The experimental results demonstrate that deep learning algorithms represented by YOLOv3 and Faster R-CNN are feasible for plant disease detection, and each has its own strengths and weaknesses.
Affiliation(s)
- Xulu Gong
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- School of Software, Shanxi Agricultural University, Jinzhong 030801, China
- Shujuan Zhang
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
20
Gao X, Tang Z, Deng Y, Hu S, Zhao H, Zhou G. HSSNet: An End-to-End Network for Detecting Tiny Targets of Apple Leaf Diseases in Complex Backgrounds. PLANTS (BASEL, SWITZERLAND) 2023; 12:2806. [PMID: 37570960 PMCID: PMC10420854 DOI: 10.3390/plants12152806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 07/21/2023] [Accepted: 07/26/2023] [Indexed: 08/13/2023]
Abstract
Apple leaf diseases are one of the most important factors that reduce apple quality and yield. Object detection technology based on deep learning can detect diseases in a timely manner and help automate disease control, thereby reducing economic losses. In the natural environment, tiny apple leaf disease targets (resolution less than 32 × 32 pixels) are easily overlooked. To address the problems of complex background interference, difficult detection of tiny targets, and biased prediction boxes in standard detectors, in this paper we constructed a tiny-target dataset, TTALDD-4, containing four types of diseases (Alternaria leaf spot, Frogeye leaf spot, Grey spot, and Rust) and proposed the HSSNet detector, based on the YOLOv7-tiny benchmark, for detecting tiny apple leaf disease targets. Firstly, the H-SimAM attention mechanism is proposed to focus on foreground lesions against the complex image background. Secondly, the SP-BiFormer block is proposed to enhance the model's ability to perceive tiny leaf disease targets. Finally, we use the SIoU loss to mitigate prediction-box bias. The experimental results show that HSSNet achieves 85.04% mAP (mean average precision), 67.53% AR (average recall), and 83 FPS (frames per second). Compared with other standard detectors, HSSNet maintains a high real-time detection speed with higher detection accuracy. This provides a reference for the automated control of apple leaf diseases.
Affiliation(s)
- Hongmin Zhao
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
- Guoxiong Zhou
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
21
Appe SN, G A, GN B. CAM-YOLO: tomato detection and classification based on improved YOLOv5 using combining attention mechanism. PeerJ Comput Sci 2023; 9:e1463. [PMID: 37547387 PMCID: PMC10403160 DOI: 10.7717/peerj-cs.1463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 06/06/2023] [Indexed: 08/08/2023]
Abstract
Background One of the key elements in maintaining the consistent marketing of tomato fruit is tomato quality. Since ripeness is the most important quality factor from the consumer's viewpoint, determining the stages of tomato ripeness is a fundamental industrial concern in tomato production to obtain a high-quality product. Since tomatoes are one of the most important crops in the world, automatic ripeness evaluation of tomatoes is a significant study topic, as it may prove beneficial in ensuring optimal production of a high-quality product and increasing profitability. This article explores and categorises the various maturity/ripeness phases to propose an automated multi-class classification approach for tomato ripeness testing and evaluation. Methods Object detection is a critical component in a wide variety of computer vision problems and applications such as manufacturing, agriculture, medicine, and autonomous driving. Due to the complex identification background, texture disruption, and partial occlusion of tomato fruits, the classic deep learning object detection approach (YOLO) has a poor success rate in detecting tomato fruits. To address these issues, this article proposes an improved YOLOv5 tomato detection algorithm, CAM-YOLO, which uses YOLOv5 for feature extraction and target identification together with the Convolutional Block Attention Module (CBAM). The CBAM is added to CAM-YOLO to focus the model on informative regions and improve accuracy. Finally, non-maximum suppression and distance intersection over union (DIoU) are applied to enhance the identification of overlapping objects in the image. Results Several images from the dataset were chosen for testing to assess the model's performance, and the detection performance of the CAM-YOLO and standard YOLOv5 models under various conditions was compared. The experimental results affirm that the CAM-YOLO algorithm is efficient in detecting overlapped and small tomatoes, with an average precision of 88.1%.
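The DIoU-based suppression step described above replaces plain IoU with IoU minus a normalized center-distance penalty, so heavily overlapping boxes with distant centers (typical of clustered fruit) are less likely to be merged. A minimal sketch, with the threshold value chosen for illustration:

```python
import numpy as np

def diou(a, b):
    """DIoU between boxes (x1, y1, x2, y2): IoU minus the squared
    center distance normalized by the enclosing box diagonal."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    iou = inter / union
    cdist = ((a[0]+a[2]-b[0]-b[2])**2 + (a[1]+a[3]-b[1]-b[3])**2) / 4.0
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    diag = (ex2-ex1)**2 + (ey2-ey1)**2
    return iou - cdist / diag

def diou_nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes whose
    DIoU with it exceeds `thresh`, repeat on the remainder."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if diou(boxes[i], boxes[j]) <= thresh]
    return keep
```

Because the distance penalty lowers the suppression score for center-separated boxes, two touching tomatoes are more likely to both survive than under plain-IoU NMS.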
Affiliation(s)
- Seetharam Nagesh Appe
- Department of Computer Science and Engineering, Annamalai University, Chidambaram, Tamilnadu, India
- Department of Information Technology, CVR College of Engineering, Hyderabad, India
- Arulselvi G
- Department of Computer Science and Engineering, Annamalai University, Chidambaram, Tamilnadu, India
- Balaji GN
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, India
22
Zhan B, Li M, Luo W, Li P, Li X, Zhang H. Study on the Tea Pest Classification Model Using a Convolutional and Embedded Iterative Region of Interest Encoding Transformer. BIOLOGY 2023; 12:1017. [PMID: 37508446 PMCID: PMC10376105 DOI: 10.3390/biology12071017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 07/01/2023] [Accepted: 07/09/2023] [Indexed: 07/30/2023]
Abstract
Tea diseases are one of the main causes of tea yield reduction, and the use of computer vision for classification and diagnosis is an effective means of tea disease management. However, the random location of lesions, high symptom similarity, and complex backgrounds make the recognition and classification of tea images difficult. Therefore, this paper proposes IterationVIT, a tea disease diagnosis model that integrates convolution and an iterative transformer. The convolution consists of superimposed bottleneck layers for extracting the local features of tea leaves. The iterative algorithm incorporates an attention mechanism and bilinear interpolation to obtain disease location information by continuously updating the region of interest. The transformer module uses a multi-head attention mechanism for global feature extraction. A total of 3544 images of red leaf spot, algal leaf spot, bird's eye disease, gray wilt, white spot, anthracnose, brown wilt, and healthy tea leaves collected under natural light were used as samples and input into the IterationVIT model for training. The results show that with a patch size of 16, the model performed best, achieving a classification accuracy of 98% and an F1 measure of 96.5%, which is superior to mainstream methods such as ViT, EfficientNet, ShuffleNet, MobileNet, and VGG. To verify the robustness of the model, the original test-set images were blurred, noise was added, and highlights were applied before being input into the IterationVIT model; classification accuracy still reached over 80%. When 60% of the training set was randomly selected, the test-set classification accuracy of the IterationVIT model was 8% higher than that of mainstream models, demonstrating its ability to learn from fewer samples.
Model generalizability was evaluated on three public plant-leaf datasets, and in each case the model generalized to a level comparable to its results on the data in this paper. Finally, this paper visualized and interpreted the model using the CAM method to obtain pixel-level heat maps of tea diseases; the results show that the established IterationVIT model can accurately capture the location of diseases, further verifying its effectiveness.
Affiliation(s)
- Baishao Zhan
- College of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
- Ming Li
- College of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
- Wei Luo
- College of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
- Peng Li
- College of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
- Xiaoli Li
- College of Biosystems Engineering and Food Science, Zhejiang University, 866 Yuhangtang Road, Hangzhou 310058, China
- Hailiang Zhang
- College of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
23
Xu L, Shi X, Tang Z, He Y, Yang N, Ma W, Zheng C, Chen H, Zhou T, Huang P, Wu Z, Wang Y, Zou Z, Kang Z, Dai J, Zhao Y. ASFL-YOLOX: an adaptive spatial feature fusion and lightweight detection method for insect pests of the Papilionidae family. FRONTIERS IN PLANT SCIENCE 2023; 14:1176300. [PMID: 37546271 PMCID: PMC10400454 DOI: 10.3389/fpls.2023.1176300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 05/16/2023] [Indexed: 08/08/2023]
Abstract
Introduction Insect pests from the family Papilionidae (IPPs) are a seasonal threat to citrus orchards, causing damage to young leaves and affecting canopy formation and fruiting. Existing pest detection models used by orchard plant protection equipment lack a balance between inference speed and accuracy. Methods To address this issue, we propose ASFL-YOLOX, an adaptive spatial feature fusion and lightweight detection model for IPPs. Our model includes several optimizations, such as the use of the Tanh-Softplus activation function, integration of the efficient channel attention mechanism, adoption of the adaptive spatial feature fusion module, and implementation of the soft DIoU non-maximum suppression algorithm. We also propose a structured pruning technique to eliminate unnecessary connections and network parameters. Results Experimental results demonstrate that ASFL-YOLOX outperforms previous models in terms of inference speed and accuracy. Our model shows an increase in inference speed of 29 FPS compared to YOLOv7-x, an mAP approximately 10% higher than YOLOv7-tiny, and a faster inference frame rate on embedded platforms than SSD300 and Faster R-CNN. We compressed the model parameters of ASFL-YOLOX by 88.97%, reducing the floating point operations from 141.90G to 30.87G while achieving an mAP higher than 95%. Discussion Our model can accurately and quickly detect fruit tree pest stress in unstructured orchards and is suitable for transplantation to embedded systems, providing technical support for pest identification and localization systems in orchard plant protection equipment.
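The abstract names a Tanh-Softplus activation without defining it. If it follows the common Mish-style composition x · tanh(softplus(x)), which is an assumption on our part rather than something the source confirms, a minimal sketch is:

```python
import math

def softplus(x):
    # numerically stable softplus: log(1 + e^x)
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def tanh_softplus(x):
    """Assumed Mish-style form x * tanh(softplus(x)); the paper's exact
    Tanh-Softplus definition is not given in the abstract."""
    return x * math.tanh(softplus(x))
```

Activations of this family are smooth and non-monotonic near zero, which is often credited with slightly better gradient flow than ReLU in small detectors.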
Collapse
Affiliation(s)
- Lijia Xu
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| | - Xiaoshi Shi
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
- College of Resources, Sichuan Agricultural University, Chengdu, China
| | - Zuoliang Tang
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
- College of Resources, Sichuan Agricultural University, Chengdu, China
| | - Yong He
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
| | - Ning Yang
- College of Electrical and Information Engineering, Jiangsu University, Zhenjiang, China
| | - Wei Ma
- Institute of Urban Agriculture, Chinese Academy of Agricultural Sciences, Chengdu, China
| | - Chengyu Zheng
- Regulation Department, China Telecom Corporation Limited Sichuan Branch, Chengdu, China
| | - Huabao Chen
- College of Agronomy, Sichuan Agricultural University, Chengdu, China
| | - Taigang Zhou
- Changhong Digital Agriculture Research Institute, Sichuan Changhong Yunsu Information Technology Co., Ltd, Chengdu, China
| | - Peng Huang
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| | - Zhijun Wu
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| | - Yuchao Wang
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| | - Zhiyong Zou
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| | - Zhiliang Kang
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| | - Jianwu Dai
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| | - Yongpeng Zhao
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an, China
| |
Collapse
|
24
|
Luo J, Yang Z, Zhang F, Li C. Effect of nitrogen application on enhancing high-temperature stress tolerance of tomato plants during the flowering and fruiting stage. FRONTIERS IN PLANT SCIENCE 2023; 14:1172078. [PMID: 37360700 PMCID: PMC10285307 DOI: 10.3389/fpls.2023.1172078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Accepted: 05/18/2023] [Indexed: 06/28/2023]
Abstract
This study investigated the effects of nitrogen application on the growth, photosynthetic performance, nitrogen metabolism, and fruit quality of tomato plants under high-temperature (HT) stress. Three levels of daily minimum/daily maximum temperature were adopted during the flowering and fruiting stage: control (CK; 18°C/28°C), sub-high-temperature (SHT; 25°C/35°C), and high-temperature (HT; 30°C/40°C) stress. Nitrogen (urea, 46% N) was applied at 0 (N1), 125 (N2), 187.5 (N3), 250 (N4), and 312.5 (N5) kg hm-2, and the treatments lasted for 5 days (short term). HT stress inhibited the growth, yield, and fruit quality of tomato plants. Interestingly, short-term SHT stress improved growth and yield through higher photosynthetic efficiency and nitrogen metabolism, whereas fruit quality was reduced. Appropriate nitrogen application can enhance the high-temperature stress tolerance of tomato plants. The maximum net photosynthetic rate (PNmax), stomatal conductance (gs), stomatal limitation value (Ls), water-use efficiency (WUE), nitrate reductase (NR), glutamine synthetase (GS), soluble protein, and free amino acids were highest at N3, N3, and N2 under CK, SHT, and HT stress, respectively, whereas the intercellular carbon dioxide concentration (Ci) was lowest. In addition, the maximum SPAD value, plant morphology, yield, vitamin C, soluble sugar, lycopene, and soluble solids occurred at N3-N4, N3-N4, and N2-N3 under CK, SHT, and HT stress, respectively. Based on principal component analysis and comprehensive evaluation, we found that the optimum nitrogen application for tomato growth, yield, and fruit quality was 230.23 kg hm-2 (N3-N4), 230.02 kg hm-2 (N3-N4), and 115.32 kg hm-2 (N2) under CK, SHT, and HT stress, respectively. The results revealed that high yield and good fruit quality of tomato plants at high temperatures can be maintained through higher photosynthesis, nitrogen efficiency, and nutrient levels with moderate nitrogen.
Collapse
|
25
|
Liu Y, Liu J, Cheng W, Chen Z, Zhou J, Cheng H, Lv C. A High-Precision Plant Disease Detection Method Based on a Dynamic Pruning Gate Friendly to Low-Computing Platforms. PLANTS (BASEL, SWITZERLAND) 2023; 12:plants12112073. [PMID: 37299053 DOI: 10.3390/plants12112073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 04/06/2023] [Accepted: 05/17/2023] [Indexed: 06/12/2023]
Abstract
Timely and accurate detection of plant diseases is a crucial research topic. A dynamic-pruning-based method for automatic detection of plant diseases in low-computing situations is proposed. The main contributions of this work are: (1) the collection of datasets for four crops covering a total of 12 diseases over three years; (2) a re-parameterization method that improves the accuracy of convolutional neural networks; (3) a dynamic pruning gate that dynamically controls the network structure, enabling operation on hardware platforms with widely varying computational power; (4) an implementation of the proposed model and the development of an associated application. Experimental results demonstrate that the model can run on various computing platforms, including high-performance GPU platforms and low-power mobile terminal platforms, with an inference speed of 58 FPS, outperforming other mainstream models. In terms of model accuracy, subclasses with low detection accuracy are enhanced through data augmentation and validated by ablation experiments. The model ultimately achieves an accuracy of 0.94.
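The dynamic pruning gate of contribution (3) can be pictured as a per-forward-pass channel mask driven by a compute budget. This is an illustrative NumPy sketch, not the authors' implementation: the `saliency` scores stand in for the output of a learned gating branch, and the `budget` fraction is a hypothetical knob for matching the platform's computational power.

```python
import numpy as np

def dynamic_channel_gate(features, saliency, budget):
    """Zero out all but the top-`budget` fraction of channels.

    features: (C, H, W) activation map.
    saliency: per-channel importance scores (here assumed to come from
              a small gating branch; hypothetical in this sketch).
    budget:   fraction of channels to keep, in (0, 1]; lower budget
              means a cheaper forward pass on weak hardware.
    """
    c = features.shape[0]
    k = max(1, int(round(c * budget)))
    keep = np.argsort(saliency)[::-1][:k]   # indices of the top-k channels
    mask = np.zeros(c)
    mask[keep] = 1.0
    # broadcasting the mask over H and W gates whole channels at once
    return features * mask[:, None, None], mask
```

In a real network the mask would also let downstream layers skip the zeroed channels entirely, which is where the compute saving comes from.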
Collapse
Affiliation(s)
- Yufei Liu
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
| | - Jingxin Liu
- College of Economics and Management, China Agricultural University, Beijing 100083, China
| | - Wei Cheng
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
| | - Zizhi Chen
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
| | - Junyu Zhou
- International College Beijing, China Agricultural University, Beijing 100083, China
| | - Haolan Cheng
- International College Beijing, China Agricultural University, Beijing 100083, China
| | - Chunli Lv
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
| |
Collapse
|
26
|
Deng Y, Xi H, Zhou G, Chen A, Wang Y, Li L, Hu Y. An Effective Image-Based Tomato Leaf Disease Segmentation Method Using MC-UNet. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0049. [PMID: 37228512 PMCID: PMC10204749 DOI: 10.34133/plantphenomics.0049] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 04/21/2023] [Indexed: 05/27/2023]
Abstract
Tomato disease control is an urgent requirement in the field of intelligent agriculture, and one of its keys is the quantitative identification and precise segmentation of tomato leaf diseases. Some diseased areas on tomato leaves are tiny and may go unnoticed during segmentation, and blurred edges also reduce segmentation accuracy. Based on UNet, we propose an effective image-based tomato leaf disease segmentation method called MC-UNet, which combines a Cross-layer Attention Fusion Mechanism with a Multi-scale Convolution Module. First, a Multi-scale Convolution Module is proposed. This module obtains multiscale information about tomato disease by employing 3 convolution kernels of different sizes, and it highlights the edge feature information of tomato disease using a Squeeze-and-Excitation Module. Second, a Cross-layer Attention Fusion Mechanism is proposed. This mechanism highlights tomato leaf disease locations via a gating structure and a fusion operation. Then, we employ SoftPool rather than MaxPool to retain valid information on tomato leaves, and we use the SeLU activation function to avoid neuron inactivation in the network. We compared MC-UNet to existing segmentation networks on our self-built tomato leaf disease segmentation dataset; MC-UNet achieved 91.32% accuracy with 6.67M parameters. Our method achieves good results for tomato leaf disease segmentation, which demonstrates the effectiveness of the proposed methods.
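The Squeeze-and-Excitation Module used inside the Multi-scale Convolution Module follows the standard squeeze (global average pool), excite (two fully connected layers with ReLU then sigmoid), and channel-reweighting pattern. A minimal NumPy sketch of that pattern, with hypothetical weight matrices `w1`/`w2` standing in for the learned parameters (the paper's layer sizes and reduction ratio are not given here):

```python
import numpy as np

def squeeze_excitation(features, w1, w2):
    """Squeeze-and-Excitation over a (C, H, W) feature map.

    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights,
    where r is the reduction ratio; both are assumed learned parameters.
    """
    squeeze = features.mean(axis=(1, 2))             # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0)             # FC + ReLU -> (C//r,)
    excite = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # FC + sigmoid -> (C,)
    return features * excite[:, None, None]          # channel-wise reweighting
```

Channels whose pooled statistics the gate scores highly are amplified relative to the rest, which is how the module can emphasize edge-bearing channels.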
Collapse
Affiliation(s)
- Yubao Deng
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, 410004, Hunan, China
| | - Haoran Xi
- College of Mechanical & Electrical Engineering, Central South University of Forestry and Technology, Changsha, 410004, Hunan, China
| | - Guoxiong Zhou
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, 410004, Hunan, China
| | - Aibin Chen
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, 410004, Hunan, China
| | - Yanfeng Wang
- National University of Defense Technology, 410015, Changsha, Hunan, China
| | - Liujun Li
- Department of Soil and Water Systems, University of Idaho, Moscow, ID, 83844, USA
| | - Yahui Hu
- Plant Protection Research Institute, Academy of Agricultural Sciences, 410125, Changsha, Hunan, China
| |
Collapse
|
27
|
Khan F, Zafar N, Tahir MN, Aqib M, Waheed H, Haroon Z. A mobile-based system for maize plant leaf disease detection and classification using deep learning. FRONTIERS IN PLANT SCIENCE 2023; 14:1079366. [PMID: 37255561 PMCID: PMC10226393 DOI: 10.3389/fpls.2023.1079366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Accepted: 03/24/2023] [Indexed: 06/01/2023]
Abstract
Artificial intelligence has been used in many applications, such as medicine, communication, object detection, and object tracking. Maize, one of the world's major crops, is affected by several types of diseases that lower its yield and quality. This paper addresses this issue and provides an application for the detection and classification of diseases in maize crops using deep learning models. In addition, the developed application returns segmented images of affected leaves, enabling the tracking of disease spots on each leaf. For this purpose, a dataset of three maize crop diseases, namely Blight, Sugarcane Mosaic Virus, and Leaf Spot, was collected from the University Research Farm Koont, PMAS-AAUR, at different growth stages and under contrasting weather conditions. These data were used to train several prediction models, including YOLOv3-tiny, YOLOv4, YOLOv5s, YOLOv7s, and YOLOv8n, with reported prediction accuracies of 69.40%, 97.50%, 88.23%, 93.30%, and 99.04%, respectively. The results demonstrate that the prediction accuracy of the YOLOv8n model is higher than that of the other applied models. This model showed excellent results, localizing the affected area of the leaf accurately with a higher confidence score. YOLOv8n is the latest model used for disease detection compared with the other approaches in the available literature, and work on sugarcane mosaic virus using deep learning models is also reported here for the first time. Further, the models with high accuracy have been embedded in a mobile application to provide end users with real-time disease detection within a few seconds.
Collapse
Affiliation(s)
- Faiza Khan
- University Institute of Information Technology, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
- Data Driven Smart Decision Platform for Increased Agriculture Productivity, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
| | - Noureen Zafar
- University Institute of Information Technology, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
- Data Driven Smart Decision Platform for Increased Agriculture Productivity, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
| | - Muhammad Naveed Tahir
- Data Driven Smart Decision Platform for Increased Agriculture Productivity, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
- Department of Agronomy, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
| | - Muhammad Aqib
- University Institute of Information Technology, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
- National Center of Industrial Biotechnology, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
| | - Hamna Waheed
- University Institute of Information Technology, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
| | - Zainab Haroon
- Data Driven Smart Decision Platform for Increased Agriculture Productivity, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
- Department of Land and Water Conservation Engineering, Faculty of Agricultural Engineering and Technology, Pir Meh Ali Shah (PMAS)-Arid Agriculture University, Rawalpindi, Pakistan
| |
Collapse
|
28
|
Hossain S, Tanzim Reza M, Chakrabarty A, Jung YJ. Aggregating Different Scales of Attention on Feature Variants for Tomato Leaf Disease Diagnosis from Image Data: A Transformer Driven Study. SENSORS (BASEL, SWITZERLAND) 2023; 23:3751. [PMID: 37050811 PMCID: PMC10099258 DOI: 10.3390/s23073751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 03/24/2023] [Accepted: 04/03/2023] [Indexed: 06/19/2023]
Abstract
Tomato leaf diseases can cause significant financial damage by adversely affecting crops, and consequently they are a major concern for tomato growers all over the world. The diseases come in a variety of forms, caused by environmental stress and various pathogens. An automated approach to detecting leaf disease from images would help farmers take effective control measures quickly and affordably. The proposed study therefore analyzes the effects of transformer-based approaches that aggregate different scales of attention on variants of features for the classification of tomato leaf diseases from image data. Four state-of-the-art transformer-based models, namely, External Attention Transformer (EANet), Multi-Axis Vision Transformer (MaxViT), Compact Convolutional Transformers (CCT), and Pyramid Vision Transformer (PVT), are trained and tested on a multiclass tomato disease dataset. The result analysis shows that MaxViT comfortably outperforms the other three transformer models with 97% overall accuracy, as opposed to the 89% accuracy achieved by EANet, 91% by CCT, and 93% by PVT. MaxViT also achieves a smoother learning curve than the other transformers. We further verified the legitimacy of the results on another, relatively smaller dataset. Overall, the exhaustive empirical analysis presented in the paper shows that the MaxViT architecture is the most effective transformer model for classifying tomato leaf disease, provided powerful hardware is available to run the model.
Collapse
Affiliation(s)
- Shahriar Hossain
- Department of Computer Science and Engineering, BRAC University, Dhaka 1212, Bangladesh
| | - Md Tanzim Reza
- Department of Computer Science and Engineering, BRAC University, Dhaka 1212, Bangladesh
| | - Amitabha Chakrabarty
- Department of Computer Science and Engineering, BRAC University, Dhaka 1212, Bangladesh
| | - Yong Ju Jung
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
| |
Collapse
|
29
|
Shoaib M, Shah B, EI-Sappagh S, Ali A, Ullah A, Alenezi F, Gechev T, Hussain T, Ali F. An advanced deep learning models-based plant disease detection: A review of recent research. FRONTIERS IN PLANT SCIENCE 2023; 14:1158933. [PMID: 37025141 PMCID: PMC10070872 DOI: 10.3389/fpls.2023.1158933] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Accepted: 02/27/2023] [Indexed: 05/14/2023]
Abstract
Plants play a crucial role in supplying food globally. Various environmental factors lead to plant diseases, which result in significant production losses. Manual detection of plant diseases, however, is a time-consuming and error-prone process, and can be an unreliable way to identify and prevent the spread of plant diseases. Adopting advanced technologies such as machine learning (ML) and deep learning (DL) can help overcome these challenges by enabling early identification of plant diseases. This paper explores recent advancements in the use of ML and DL techniques for the identification of plant diseases. The review focuses on publications between 2015 and 2022, and the experiments discussed demonstrate the effectiveness of these techniques in improving the accuracy and efficiency of plant disease detection. The study also addresses the challenges and limitations associated with using ML and DL for plant disease identification, such as issues with data availability, imaging quality, and the differentiation between healthy and diseased plants. The research provides valuable insights for plant disease detection researchers, practitioners, and industry professionals by offering solutions to these challenges and limitations, providing a comprehensive understanding of the current state of research in this field, highlighting the benefits and limitations of these methods, and proposing potential solutions for overcoming the challenges of their implementation.
Collapse
Affiliation(s)
- Muhammad Shoaib
- Department of Computer Science, CECOS University of IT and Emerging Sciences, Peshawar, Pakistan
- Department of Computer Science and Information Technology, Sarhad University of Science and Information Technology, Peshawar, Pakistan
| | - Babar Shah
- College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
| | - Shaker EI-Sappagh
- Faculty of Computer Science and Engineering, Galala University, Suez, Egypt
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha, Egypt
| | - Akhtar Ali
- Department of Molecular Stress Physiology, Center of Plant Systems Biology and Biotechnology, Plovdiv, Bulgaria
| | - Asad Ullah
- Department of Computer Science and Information Technology, Sarhad University of Science and Information Technology, Peshawar, Pakistan
| | - Fayadh Alenezi
- Department of Electrical Engineering, College of Engineering, Jouf University, Jouf, Saudi Arabia
| | - Tsanko Gechev
- Department of Molecular Stress Physiology, Center of Plant Systems Biology and Biotechnology, Plovdiv, Bulgaria
- Department of Plant Physiology and Molecular Biology, University of Plovdiv, Plovdiv, Bulgaria
| | - Tariq Hussain
- School of Computer Science and Information Engineering, Zhejiang Gongshang University, Hangzhou, China
| | - Farman Ali
- Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, Republic of Korea
| |
Collapse
|
30
|
Obasekore H, Fanni M, Ahmed SM, Parque V, Kang BY. Agricultural Robot-Centered Recognition of Early-Developmental Pest Stage Based on Deep Learning: A Case Study on Fall Armyworm ( Spodoptera frugiperda). SENSORS (BASEL, SWITZERLAND) 2023; 23:3147. [PMID: 36991858 PMCID: PMC10056802 DOI: 10.3390/s23063147] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 02/23/2023] [Accepted: 03/06/2023] [Indexed: 06/19/2023]
Abstract
Accurately detecting the early developmental stages of insect pests (larvae) from off-the-shelf stereo camera sensor data using deep learning holds several benefits for farmers, from simple robot configuration to early neutralization of this less agile but more destructive stage. Machine vision technology has advanced from bulk spraying to precise dosage to directly rubbing on the infected crops. However, these solutions primarily focus on adult pests and post-infestation stages. This study proposes using a front-pointing red-green-blue (RGB) stereo camera mounted on a robot to identify pest larvae with deep learning. The camera feeds data into our deep-learning pipeline, which was evaluated on eight ImageNet pre-trained models. On our custom pest larvae dataset, the combination of the insect classifier and the detector replicates peripheral and foveal line-of-sight vision, respectively. This enables a trade-off between the robot's smooth operation and localization precision for the captured pest, which first appears in the farsighted section. The nearsighted part then uses our Faster R-CNN-based pest detector to localize precisely. Simulating the robot dynamics in CoppeliaSim and MATLAB/SIMULINK with the deep-learning toolbox demonstrated the feasibility of the proposed system. Our deep-learning classifier and detector achieved an accuracy of 99% and a mean average precision of 0.84, respectively.
Collapse
Affiliation(s)
- Hammed Obasekore
- Department of Robot and Smart System Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
| | - Mohamed Fanni
- Production Engineering and Mechanical Design Department, Mansoura University, Mansoura 35516, Egypt
- Department of Mechatronics and Robotics Engineering, Egypt-Japan University of Science and Technology, Alexandria 21934, Egypt
| | - Sabah Mohamed Ahmed
- Department of Mechatronics and Robotics Engineering, Egypt-Japan University of Science and Technology, Alexandria 21934, Egypt
- Electrical Engineering Department, Assuit University, Assuit 71515, Egypt
| | - Victor Parque
- Department of Mechatronics and Robotics Engineering, Egypt-Japan University of Science and Technology, Alexandria 21934, Egypt
- Department of Modern Mechanical Engineering, Waseda University, Tokyo 169-8050, Japan
| | - Bo-Yeong Kang
- Department of Robot and Smart System Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
| |
Collapse
|
31
|
Yang S, Xing Z, Wang H, Dong X, Gao X, Liu Z, Zhang X, Li S, Zhao Y. Maize-YOLO: A New High-Precision and Real-Time Method for Maize Pest Detection. INSECTS 2023; 14:insects14030278. [PMID: 36975962 PMCID: PMC10051432 DOI: 10.3390/insects14030278] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 03/05/2023] [Accepted: 03/09/2023] [Indexed: 05/31/2023]
Abstract
The frequent occurrence of crop pests and diseases is one of the important factors leading to reduced crop quality and yield. Because pests are highly similar in appearance and move quickly, identifying them in a timely and accurate manner is challenging for artificial intelligence techniques. Therefore, we propose Maize-YOLO, a new high-precision, real-time method for maize pest detection. The network is based on YOLOv7 with the insertion of the CSPResNeXt-50 module and the VoVGSCSP module, which improve detection accuracy and speed while reducing the model's computational cost. We evaluated the performance of Maize-YOLO on IP102, a typical large-scale pest dataset, training and testing on the pest species most damaging to maize: 4533 images across 13 classes. The experimental results show that our method outperforms current state-of-the-art YOLO-family object detection algorithms, achieving 76.3% mAP and 77.3% recall. The method can provide accurate, real-time pest detection and identification for maize crops, enabling highly accurate end-to-end pest detection.
Collapse
Affiliation(s)
- Shuai Yang
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
| | - Ziyao Xing
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
| | - Hengbin Wang
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
| | - Xinrui Dong
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
| | - Xiang Gao
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
| | - Zhe Liu
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
- Key Laboratory of Remote Sensing for Agri-Hazards, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
| | - Xiaodong Zhang
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
- Key Laboratory of Remote Sensing for Agri-Hazards, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
| | - Shaoming Li
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
- Key Laboratory of Remote Sensing for Agri-Hazards, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
| | - Yuanyuan Zhao
- College of Land Science and Technology, China Agricultural University, Beijing 100083, China
- Key Laboratory of Remote Sensing for Agri-Hazards, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
| |
Collapse
|
32
|
Zhao Y, Yang Y, Xu X, Sun C. Precision detection of crop diseases based on improved YOLOv5 model. FRONTIERS IN PLANT SCIENCE 2023; 13:1066835. [PMID: 36699833 PMCID: PMC9868932 DOI: 10.3389/fpls.2022.1066835] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Accepted: 12/16/2022] [Indexed: 06/17/2023]
Abstract
Accurate identification of crop diseases can effectively improve crop yield. Most current crop diseases present small targets, dense numbers, occlusions, and similar appearances across different diseases, and current target detection algorithms are not effective at distinguishing similar crop diseases. Therefore, in this paper, an improved model based on YOLOv5s is proposed to improve the detection of crop diseases. First, the CSP structure of the original model in the feature fusion stage was improved: a lightweight structure was used in the improved CSP structure to reduce the model parameters, while the feature information of different layers was extracted in the form of multiple branches. A structure named CAM was proposed, which extracts the global and local features of each network layer separately; the CAM structure better fuses semantically and scale-inconsistent features to enhance the network's extraction of global information. To increase the number of positive samples during model training, one more grid was added to the original model's three grids for predicting the target, and the formula for the prediction frame centroid offset was modified to obtain a better offset when the target centroid fell on a special point of the grid. To solve the problem of the prediction frame being scaled incorrectly during model training, an improved DIoU loss function was used to replace the GIoU loss function of the original YOLOv5s. Finally, the improved model was trained using transfer learning. The results showed that the improved model had the best mean average precision (mAP) compared to the Faster R-CNN, SSD, YOLOv3, YOLOv4, YOLOv4-tiny, and YOLOv5s models; its mAP, F1 score, and recall were 95.92%, 0.91, and 87.89%, respectively, improvements of 4.58%, 5%, and 4.78% over YOLOv5s. The detection speed of the improved model was 40.01 FPS, which meets the requirement of real-time detection. The results showed that the improved model outperformed the original model in several respects, with stronger robustness and higher accuracy, and can provide better detection of crop diseases.
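The DIoU loss that this abstract swaps in for GIoU adds a normalized centre-distance penalty to the IoU term, which keeps the gradient informative even when boxes do not overlap. A minimal pure-Python sketch of the standard DIoU formulation (the paper's "improved" variant may differ; this is the textbook form):

```python
def diou_loss(pred, target, eps=1e-9):
    """DIoU loss for axis-aligned boxes [x1, y1, x2, y2]: 1 - IoU + d^2 / c^2."""
    # intersection and union
    ix1, iy1 = max(pred[0], target[0]), max(pred[1], target[1])
    ix2, iy2 = min(pred[2], target[2]), min(pred[3], target[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    iou = inter / (area_p + area_t - inter + eps)
    # d^2: squared distance between box centres
    d2 = ((pred[0] + pred[2]) / 2 - (target[0] + target[2]) / 2) ** 2 \
       + ((pred[1] + pred[3]) / 2 - (target[1] + target[3]) / 2) ** 2
    # c^2: squared diagonal of the smallest enclosing box
    c2 = (max(pred[2], target[2]) - min(pred[0], target[0])) ** 2 \
       + (max(pred[3], target[3]) - min(pred[1], target[1])) ** 2 + eps
    return 1.0 - iou + d2 / c2
```

For identical boxes the loss is zero; for disjoint boxes the IoU term saturates at 1 but the d²/c² term still shrinks as the prediction moves toward the target, unlike plain IoU loss.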
Collapse
|
33
|
Dai M, Dorjoy MMH, Miao H, Zhang S. A New Pest Detection Method Based on Improved YOLOv5m. INSECTS 2023; 14:54. [PMID: 36661982 PMCID: PMC9863093 DOI: 10.3390/insects14010054] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 12/21/2022] [Accepted: 01/04/2023] [Indexed: 06/17/2023]
Abstract
Pest detection in plants is essential for ensuring high productivity. Recent advancements in convolutional neural network (CNN)-based deep learning have made it possible for researchers to increase object detection accuracy. In this study, an improved YOLOv5m-based method is proposed for higher-accuracy pest detection in plants. First, the SWin Transformer (SWinTR) and Transformer (C3TR) mechanisms are introduced into the YOLOv5m network to capture more global features and increase the receptive field. Then, ResSPP is adopted in the backbone so that the network extracts more features. Furthermore, the global features of the feature map are extracted in the feature fusion phase and forwarded to the detection phase via a modification of the three output necks from C3 to SWinTR. Finally, WConcat is added to the fusion feature, which increases the network's feature fusion capability. Experimental results demonstrate that the improved YOLOv5m achieved a 95.7% precision rate, 93.1% recall rate, 94.38% F1 score, and 96.4% mean average precision (mAP), significantly better than the original YOLOv3, YOLOv4, and YOLOv5m models. The improved YOLOv5m model shows greater robustness and effectiveness in detecting pests and can more precisely detect the different pests in the dataset.
Collapse
|
34
|
Yang Y, Liu Z, Huang M, Zhu Q, Zhao X. Automatic detection of multi-type defects on potatoes using multispectral imaging combined with a deep learning model. J FOOD ENG 2023. [DOI: 10.1016/j.jfoodeng.2022.111213] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
35
|
Wang Y, Wei C, Sun H, Qu A. Design of Intelligent Detection Platform for Wine Grape Pests and Diseases in Ningxia. PLANTS (BASEL, SWITZERLAND) 2022; 12:106. [PMID: 36616237 PMCID: PMC9823901 DOI: 10.3390/plants12010106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Revised: 12/22/2022] [Accepted: 12/22/2022] [Indexed: 06/17/2023]
Abstract
In order to reduce the impact of pests and diseases on the yield and quality of Ningxia wine grapes and to improve the efficiency and intelligence of detection, this paper designs an intelligent pest and disease detection platform. The optimal underlying network is selected by comparing the recognition accuracy of the MobileNet V2 and YOLOX_s networks trained on the Public Dataset. Based on this network, the effects of adding an attention mechanism and replacing the loss function are investigated by permutation on the Custom Dataset, resulting in the improved network YOLOX_s + CBAM. The improved network was trained on the Overall Dataset, yielding a recognition model capable of identifying nine types of pests, with a recognition accuracy of 93.35% on the validation set, an improvement of 1.35% over the original network. The recognition model is deployed on the Web side and on a Raspberry Pi to achieve independent detection; the channel between the two platforms is built through Ngrok, and remote interconnection is achieved through a VNC desktop. Users can upload local images on the Web side for detection, use a handheld Raspberry Pi for field detection, or interconnect the Raspberry Pi and the Web for remote detection.
Affiliation(s)
- Aili Qu
- Correspondence: ; Tel.: +86-199-9538-6860 or +86-0951-2062908
36
Qiu RZ, Chen SP, Chi MX, Wang RB, Huang T, Fan GC, Zhao J, Weng QY. An automatic identification system for citrus greening disease (Huanglongbing) using a YOLO convolutional neural network. FRONTIERS IN PLANT SCIENCE 2022; 13:1002606. [PMID: 36605957 PMCID: PMC9807764 DOI: 10.3389/fpls.2022.1002606] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Accepted: 12/07/2022] [Indexed: 06/01/2023]
Abstract
Huanglongbing (HLB), or citrus greening disease, has complex and variable symptoms, making its diagnosis almost entirely reliant on subjective experience, which results in low diagnostic efficiency. To overcome this problem, we constructed and validated a deep learning (DL)-based method for detecting citrus HLB from digital images using YOLOv5l. Three models (Yolov5l-HLB1, Yolov5l-HLB2, and Yolov5l-HLB3) were developed using images of healthy and symptomatic citrus leaves acquired under a range of imaging conditions. The micro F1-score of the Yolov5l-HLB2 model (85.19%) in recognising five HLB symptoms (blotchy mottling, "red-nose" fruits, zinc deficiency, vein yellowing, and uniform yellowing) was higher than that of the other two models. The generalisation performance of Yolov5l-HLB2 was tested using test-set images acquired under two photographic conditions (conditions B and C) that differed from the training-set condition (condition A). The results suggested that this model performed well at recognising the five HLB symptoms under both conditions B and C, yielding micro F1-scores of 84.64% and 85.84%, respectively. In addition, the detection performance of the Yolov5l-HLB2 model was better for experienced users than for inexperienced users. The PCR-positive rate of Candidatus Liberibacter asiaticus (CLas, the causative pathogen of HLB) in samples showing the five HLB symptoms as classified by the Yolov5l-HLB2 model was also compared with manual classification by experts, indicating that the model can be employed as a preliminary screening tool before field samples are collected for subsequent PCR testing. We also developed the 'HLBdetector' app using the Yolov5l-HLB2 model, which allows farmers to complete HLB detection in seconds with only a mobile phone and without expert guidance. Overall, we successfully constructed a reliable automatic HLB identification model and developed the user-friendly 'HLBdetector' app, facilitating the prevention and timely control of HLB transmission in citrus orchards.
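The micro F1-scores reported here pool true positives, false positives and false negatives across all five symptom classes before computing precision and recall. A minimal sketch of that pooling (illustrative counts only, not the paper's data):

```python
def micro_f1(per_class_counts):
    """Micro-averaged F1 over several classes.

    per_class_counts: list of (tp, fp, fn) tuples, one per class.
    Counts are summed across classes first, so frequent classes
    weigh more than rare ones (unlike macro averaging).
    """
    tp = sum(c[0] for c in per_class_counts)
    fp = sum(c[1] for c in per_class_counts)
    fn = sum(c[2] for c in per_class_counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

With pooled counts tp=17, fp=3, fn=3, both pooled precision and recall are 0.85, so the micro F1 is 0.85.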
Affiliation(s)
- Jian Zhao
- *Correspondence: Jian Zhao, ; Qi-Yong Weng,
37
Fu X, Han B, Liu S, Zhou J, Zhang H, Wang H, Zhang H, Ouyang Z. WSVAS: A YOLOv4-based phenotyping platform for automatically detecting the salt tolerance of wheat based on seed germination vigour. FRONTIERS IN PLANT SCIENCE 2022; 13:1074360. [PMID: 36605955 PMCID: PMC9807913 DOI: 10.3389/fpls.2022.1074360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/30/2022] [Indexed: 06/17/2023]
Abstract
Salt stress is one of the major environmental stress factors that affect and limit wheat production worldwide. Therefore, properly evaluating wheat genotypes during the germination stage could be an effective way to improve yield. Phenotypic identification platforms are now widely used in seed breeding, improving the speed of detection compared with traditional methods. We developed the Wheat Seed Vigour Assessment System (WSVAS), which enables rapid and accurate detection of wheat seed germination using the lightweight convolutional neural network YOLOv4. WSVAS can automatically acquire, process and analyse image data of wheat varieties to evaluate the response of wheat seeds to salt stress under controlled environments. The WSVAS image acquisition system was set up to continuously acquire images of seeds of four wheat varieties under three types of salt stress. In this paper, we verified the accuracy of WSVAS by comparison with manual scoring. The cumulative germination curves of the four genotypes under the three salt stresses were also investigated. We compared three models, VGG16 + Faster R-CNN, ResNet50 + Faster R-CNN and YOLOv4, and found that YOLOv4 was the best model for wheat seed germination detection, achieving a mean average precision (mAP) of 97.59% and a recall of 97.35% at a detection speed of up to 6.82 FPS. This proved that the model could effectively count germinating wheat seeds. In addition, two indicators, germination rate and germination index, were highly correlated with germination vigour, indicating significant differences in salt tolerance amongst wheat varieties. WSVAS can quantify plant stress caused by salt and provides a powerful tool for salt-tolerant wheat breeding.
Affiliation(s)
- Xiuqing Fu
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Key Laboratory of Intelligent Agricultural Equipment of Jiangsu Province (Education Department of Jiangsu Province), College of Engineering, Nanjing Agricultural University, Nanjing, China
- Bing Han
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Shouyang Liu
- Academy For Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing, China
- Jiayi Zhou
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Hongwen Zhang
- School of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- Hongbiao Wang
- College of Mechanical and Electrical Engineering, Tarim University, Alar, China
- Hui Zhang
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Zhiqian Ouyang
- College of Engineering, Nanjing Agricultural University, Nanjing, China
38
Aladhadh S, Habib S, Islam M, Aloraini M, Aladhadh M, Al-Rawashdeh HS. An Efficient Pest Detection Framework with a Medium-Scale Benchmark to Increase the Agricultural Productivity. SENSORS (BASEL, SWITZERLAND) 2022; 22:9749. [PMID: 36560117 PMCID: PMC9785034 DOI: 10.3390/s22249749] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 11/18/2022] [Accepted: 11/21/2022] [Indexed: 06/17/2023]
Abstract
Insect pests and crop diseases are major problems for agricultural production because of the severity and extent of their occurrence, causing significant crop losses. To increase agricultural production, it is essential to protect crops from harmful pests, which is possible via soft computing techniques based on traditional machine learning and deep learning approaches. However, in traditional methods the manual feature extraction mechanisms are ineffective, inefficient, and time-consuming, while deep learning techniques are computationally expensive and require a large amount of training data. In this paper, we propose an efficient pest detection method that accurately localizes pests and classifies them according to their class labels. In the proposed work, we modify the YOLOv5s model in several ways, such as extending the cross stage partial (CSP) module, improving the selective kernel (SK) attention module, and modifying the multiscale feature extraction mechanism, which plays a significant role in detecting and classifying pests of both small and large sizes in an image. To validate the model performance, we develop a medium-scale pest detection dataset that includes five of the most harmful pests for agricultural products: ants, grasshoppers, palm weevils, shield bugs, and wasps. To check the model's effectiveness, we compare the results of the proposed model with several variations of the YOLOv5 model, where the proposed model achieved the best results in the experiments. Thus, the proposed model has the potential to be applied in real-world applications and further motivates research on pest detection to increase agricultural production.
Affiliation(s)
- Suliman Aladhadh
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Shabana Habib
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Muhammad Islam
- Department of Electrical Engineering, College of Engineering and Information Technology, Onaizah Colleges, Onaizah 56447, Saudi Arabia
- Mohammed Aloraini
- Department of Electrical Engineering, College of Engineering, Qassim University, Unaizah 56452, Saudi Arabia
- Mohammed Aladhadh
- Department of Food Science and Human Nutrition, College of Agriculture and Veterinary Medicine, Qassim University, Buraydah 51452, Saudi Arabia
- Hazim Saleh Al-Rawashdeh
- Department of Cyber Security, College of Engineering and Information Technology, Onaizah Colleges, Onaizah 56447, Saudi Arabia
39
Huang X, Dong J, Zhu Z, Ma D, Ma F, Lang L. TSD-Truncated Structurally Aware Distance for Small Pest Object Detection. SENSORS (BASEL, SWITZERLAND) 2022; 22:8691. [PMID: 36433294 PMCID: PMC9692880 DOI: 10.3390/s22228691] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 11/07/2022] [Accepted: 11/08/2022] [Indexed: 06/16/2023]
Abstract
Deep learning has been successfully applied in various domains and has received considerable research attention, making it possible to detect crop pests efficiently and intelligently. Nevertheless, detecting pest objects is still challenging due to the lack of discriminative features and pests' aggregation behaviour. Intersection over union (IoU)-based object detection has attracted much attention and has become the most widely used metric. However, it is sensitive to localization bias for small objects; furthermore, IoU-based losses only work when the ground-truth and predicted bounding boxes intersect, and they lack awareness of different geometrical structures. Therefore, we propose a simple and effective metric, truncated structurally aware distance (TSD), and a loss function based on it. Firstly, the distance between two bounding boxes is defined as the standardized Chebyshev distance. We also propose a new regression loss function, the truncated structurally aware distance loss, which considers the different geometrical structure relationships between two bounding boxes and whose truncation function is designed to impose different penalties. To further test the effectiveness of our method, we apply it to the Pest24 small-object pest dataset; the results show that the mAP is 5.0% higher than that of other detection methods.
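The Chebyshev (L∞) distance underlying TSD can be sketched as follows. The center-point reading and the normalization by image size are assumptions made for illustration; they are one plausible interpretation of "standardized", not the paper's exact formulation:

```python
def chebyshev_box_distance(box_a, box_b, img_w, img_h):
    """Chebyshev distance between the centers of two boxes.

    Boxes are (x1, y1, x2, y2) in pixels. Centers are divided by the
    image size so the distance is scale-free (assumed standardization).
    Unlike IoU, this stays informative even when the boxes do not
    intersect, which is the motivation the abstract gives.
    """
    cxa = (box_a[0] + box_a[2]) / 2 / img_w
    cya = (box_a[1] + box_a[3]) / 2 / img_h
    cxb = (box_b[0] + box_b[2]) / 2 / img_w
    cyb = (box_b[1] + box_b[3]) / 2 / img_h
    # Chebyshev (L-infinity) distance: largest coordinate-wise gap
    return max(abs(cxa - cxb), abs(cya - cyb))
```

Two disjoint boxes still yield a finite, decreasing-with-proximity value here, whereas their IoU is identically zero.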
Affiliation(s)
- Xiaowen Huang
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- University of Science and Technology of China, Hefei 230026, China
- Jun Dong
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Anhui Zhongke Deji Intelligence Technology Co., Ltd., Hefei 230045, China
- Zhijia Zhu
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- University of Science and Technology of China, Hefei 230026, China
- Dong Ma
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Fan Ma
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Luhong Lang
- Wuhu Institute of Technology, Wuhu 241006, China
40
Wang M, Fu B, Fan J, Wang Y, Zhang L, Xia C. Sweet potato leaf detection in a natural scene based on faster R-CNN with a visual attention mechanism and DIoU-NMS. ECOL INFORM 2022. [DOI: 10.1016/j.ecoinf.2022.101931] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
41
Saleem MH, Potgieter J, Arif KM. A weight optimization-based transfer learning approach for plant disease detection of New Zealand vegetables. FRONTIERS IN PLANT SCIENCE 2022; 13:1008079. [PMID: 36388538 PMCID: PMC9641257 DOI: 10.3389/fpls.2022.1008079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Accepted: 09/22/2022] [Indexed: 06/16/2023]
Abstract
Deep learning (DL) is an effective approach to identifying plant diseases. Among several DL-based techniques, transfer learning (TL) produces significant results in terms of improved accuracy. However, the usefulness of TL has not yet been explored using weights optimized from agricultural datasets. Furthermore, the detection of plant diseases in different organs of various vegetables has not yet been performed using a trained/optimized DL model. Moreover, the presence/detection of multiple diseases in vegetable organs has not yet been investigated. To address these research gaps, a new dataset named NZDLPlantDisease-v2 has been collected for New Zealand vegetables. The dataset includes 28 healthy and defective organs of beans, broccoli, cabbage, cauliflower, kumara, peas, potato, and tomato. This paper presents a transfer learning method that optimizes weights obtained through agricultural datasets for better outcomes in plant disease identification. First, several DL architectures are compared to obtain the best-suited model, and then, data augmentation techniques are applied. The Faster Region-based Convolutional Neural Network (RCNN) Inception ResNet-v2 attained the highest mean average precision (mAP) compared to the other DL models including different versions of Faster RCNN, Single-Shot Multibox Detector (SSD), Region-based Fully Convolutional Networks (RFCN), RetinaNet, and EfficientDet. Next, weight optimization is performed on datasets including PlantVillage, NZDLPlantDisease-v1, and DeepWeeds using image resizers, interpolators, initializers, batch normalization, and DL optimizers. Updated/optimized weights are then used to retrain the Faster RCNN Inception ResNet-v2 model on the proposed dataset. Finally, the results are compared with the model trained/optimized using a large dataset, such as Common Objects in Context (COCO). The final mAP improves by 9.25% and is found to be 91.33%. Moreover, the robustness of the methodology is demonstrated by testing the final model on an external dataset and using the stratified k-fold cross-validation method.
Affiliation(s)
- Muhammad Hammad Saleem
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Auckland, New Zealand
- Johan Potgieter
- Massey AgriFood Digital Lab, Massey University, Palmerston North, New Zealand
- Khalid Mahmood Arif
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Auckland, New Zealand
42
Lee S, Arora AS, Yun CM. Detecting strawberry diseases and pest infections in the very early stage with an ensemble deep-learning model. FRONTIERS IN PLANT SCIENCE 2022; 13:991134. [PMID: 36311098 PMCID: PMC9597313 DOI: 10.3389/fpls.2022.991134] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Accepted: 09/23/2022] [Indexed: 05/28/2023]
Abstract
Detecting early signs of plant diseases and pests is important to preclude their progress and minimize the damage they cause. Many methods have been developed to catch signs of diseases and pests in plant images with deep learning techniques; however, detecting early signs is still challenging because of the lack of datasets for training on subtle changes in plants. To address these challenges, we built an automatic data acquisition system to accumulate a large dataset of plant images and trained an ensemble model to detect targeted plant diseases and pests. After obtaining 13,393 plant images, our ensemble model showed decent detection performance, with an average AUPRC of 0.81. This data acquisition and detection process can also be applied to other plant anomalies with the collection of additional data.
Affiliation(s)
- Sangyeon Lee
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
43
Ilyas T, Jin H, Siddique MI, Lee SJ, Kim H, Chua L. DIANA: A deep learning-based paprika plant disease and pest phenotyping system with disease severity analysis. FRONTIERS IN PLANT SCIENCE 2022; 13:983625. [PMID: 36275542 PMCID: PMC9582859 DOI: 10.3389/fpls.2022.983625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 09/15/2022] [Indexed: 06/16/2023]
Abstract
The emergence of deep neural networks has allowed the development of fully automated and efficient diagnostic systems for plant disease and pest phenotyping. Although previous approaches have proven promising, they are limited, especially in real-life scenarios, in properly diagnosing and characterizing the problem. In this work, we propose a framework that, besides recognizing and localizing various plant abnormalities, also informs the user about the severity of the diseases infecting the plant. Taking a single image as input, our algorithm generates detailed descriptive (user-defined) phrases that display the location, severity stage, and visual attributes of all the abnormalities present in the image. Our framework is composed of three main components. The first is a detector that accurately and efficiently recognizes and localizes abnormalities in plants by extracting region-based anomaly features using a deep neural network-based feature extractor. The second is an encoder-decoder network that performs pixel-level analysis to generate abnormality-specific severity levels. The last is an integration unit that aggregates the information from these units and assigns unique IDs to all detected anomaly instances, thus generating descriptive sentences stating the location, severity, and class of the anomalies infecting plants. We discuss two possible ways of utilizing the abovementioned units in a single framework, and evaluate and analyze the efficacy of both approaches on newly constructed, diverse paprika disease and pest recognition datasets comprising six anomaly categories and 11 different severity levels. Our algorithm achieves a mean average precision of 91.7% for the abnormality detection task and a mean panoptic quality score of 70.78% for severity level prediction. It provides a practical and cost-efficient solution that facilitates proper handling of crops by farmers.
Affiliation(s)
- Talha Ilyas
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju-si, South Korea
- Division of Electronic and Information Engineering, Jeonbuk National University, Jeonju-si, South Korea
- Hyungjun Jin
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju-si, South Korea
- Division of Electronic and Information Engineering, Jeonbuk National University, Jeonju-si, South Korea
- Muhammad Irfan Siddique
- Department of Plant Science and Plant Genomics and Breeding Institute, Seoul National University, Seoul, South Korea
- Department of Horticultural Science, North Carolina State University, Mountain Horticultural Crops Research and Extension Center, Mills River, United States
- Sang Jun Lee
- Division of Electronic and Information Engineering, Jeonbuk National University, Jeonju-si, South Korea
- Hyongsuk Kim
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju-si, South Korea
- Division of Electronic and Information Engineering, Jeonbuk National University, Jeonju-si, South Korea
- Leon Chua
- Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA, United States
44
Hamidon MH, Ahamed T. Detection of Tip-Burn Stress on Lettuce Grown in an Indoor Environment Using Deep Learning Algorithms. SENSORS (BASEL, SWITZERLAND) 2022; 22:7251. [PMID: 36236351 PMCID: PMC9571858 DOI: 10.3390/s22197251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2022] [Revised: 09/20/2022] [Accepted: 09/20/2022] [Indexed: 06/16/2023]
Abstract
Lettuce grown in indoor farms under fully artificial light is susceptible to a physiological disorder known as tip-burn. A vital factor in indoor farming is the ability to adjust the growing environment to promote faster crop growth; however, this rapid growth exacerbates the tip-burn problem, especially for lettuce. This paper presents automated detection of tip-burn in lettuce grown indoors using deep-learning algorithms based on one-stage object detectors. Tip-burn lettuce images were captured under various light and indoor background conditions (under white, red, and blue LEDs). After augmentation, a total of 2333 images were generated and used to train three different one-stage detectors: CenterNet, YOLOv4, and YOLOv5. On the training dataset, all the models except YOLOv4 exhibited a mean average precision (mAP) greater than 80%. The most accurate model for detecting tip-burn was YOLOv5, with the highest mAP of 82.8%. The performance of the trained models was also evaluated on images taken under different indoor farm light settings (white, red, and blue LEDs); again, YOLOv5 was significantly better than CenterNet and YOLOv4. Therefore, tip-burn on lettuce grown in indoor farms under different lighting conditions can be detected using deep-learning algorithms with reliable overall accuracy. Early detection of tip-burn can help growers readjust the lighting and controlled-environment parameters to increase the freshness of lettuce grown in plant factories.
Affiliation(s)
- Munirah Hayati Hamidon
- Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Tofael Ahamed
- Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
45
Pan J, Xia L, Wu Q, Guo Y, Chen Y, Tian X. Automatic strawberry leaf scorch severity estimation via faster R-CNN and few-shot learning. ECOL INFORM 2022. [DOI: 10.1016/j.ecoinf.2022.101706] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
46
YOLOX-Dense-CT: a detection algorithm for cherry tomatoes based on YOLOX and DenseNet. JOURNAL OF FOOD MEASUREMENT AND CHARACTERIZATION 2022. [DOI: 10.1007/s11694-022-01553-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
47
Li S, Feng Z, Yang B, Li H, Liao F, Gao Y, Liu S, Tang J, Yao Q. An intelligent monitoring system of diseases and pests on rice canopy. FRONTIERS IN PLANT SCIENCE 2022; 13:972286. [PMID: 36035691 PMCID: PMC9403268 DOI: 10.3389/fpls.2022.972286] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/18/2022] [Accepted: 07/25/2022] [Indexed: 05/24/2023]
Abstract
Accurate and timely surveys of rice diseases and pests are important for controlling them and preventing reductions in rice yield. The current manual survey method is time-consuming, laborious and highly subjective, and makes it difficult to trace historical data. To address these issues, we developed an intelligent monitoring system for detecting and identifying disease and pest lesions on the rice canopy. The system mainly includes a network camera, an intelligent detection model for diseases and pests on the rice canopy, a web client and a server. Each camera can collect rice images over about 310 m2 of paddy field. An improved model, YOLO-Diseases and Pests Detection (YOLO-DPD), was proposed to detect three types of lesion, caused by Cnaphalocrocis medinalis, Chilo suppressalis, and Ustilaginoidea virens, on the rice canopy. A residual feature augmentation method was used to narrow the semantic gap between features of rice disease and pest images at different scales, and a convolutional block attention module was added to the backbone network to enhance regional disease and pest features while suppressing background noise. Our experiments demonstrated that YOLO-DPD could detect the three species of lesions at different image scales with average precisions of 92.24, 87.35 and 90.74%, respectively, and a mean average precision of 90.11%. Compared with the RetinaNet, Faster R-CNN and YOLOv4 models, the mean average precision of YOLO-DPD increased by 18.20, 6.98 and 6.10%, respectively. The average detection time per image is 47 ms. Our system has the advantages of unattended operation, high detection precision, objective results, and data traceability.
Affiliation(s)
- Suxuan Li
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Zelin Feng
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Baojun Yang
- State Key Laboratory of Rice Biology, China National Rice Research Institute, Hangzhou, China
- Hang Li
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Fubing Liao
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Yufan Gao
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Shuhua Liu
- State Key Laboratory of Rice Biology, China National Rice Research Institute, Hangzhou, China
- Jian Tang
- State Key Laboratory of Rice Biology, China National Rice Research Institute, Hangzhou, China
- Qing Yao
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
48
Hunger games search based deep convolutional neural network for crop pest identification and classification with transfer learning. EVOLVING SYSTEMS 2022. [DOI: 10.1007/s12530-022-09449-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
49
A Lightweight Model for Wheat Ear Fusarium Head Blight Detection Based on RGB Images. REMOTE SENSING 2022. [DOI: 10.3390/rs14143481] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Detection of Fusarium head blight (FHB) is crucial for protecting wheat yield: precise and rapid FHB detection increases wheat yield and protects the agricultural ecological environment. FHB detection tasks in agricultural production are currently handled by cloud servers and utilize unmanned aerial vehicles (UAVs). Hence, this paper proposes a lightweight model for wheat ear FHB detection based on UAV-enabled edge computing, aiming at intelligent prevention and control of agricultural disease. Our model combines the You Only Look Once version 4 (YOLOv4) and MobileNet deep learning architectures and is applicable on edge devices, balancing accuracy and real-time FHB detection. Specifically, the YOLOv4 backbone network Cross Stage Partial Darknet53 (CSPDarknet53) was replaced by a lightweight network, significantly decreasing the number of network parameters and the computational complexity. Additionally, we employed the Complete Intersection over Union (CIoU) loss for bounding-box regression and Non-Maximum Suppression (NMS) to guarantee FHB detection accuracy. Furthermore, the loss function incorporates the focal loss to reduce the error caused by the unbalanced distribution of positive and negative samples. Finally, mix-up and transfer learning schemes enhance the model's generalization ability. The experimental results demonstrate that the proposed model detects wheat ear FHB admirably well, with an accuracy of 93.69%, somewhat better than the MobileNetv2-YOLOv4 model (F1 higher by 4%, AP by 3.5%, recall by 4.1%, and precision by 1.6%). Meanwhile, the proposed model is a fifth of the size of state-of-the-art object detection models. Overall, it could be deployed on UAVs so that wheat ear FHB detection results can be sent back to end-users for timely, intelligent decisions, promoting intelligent control of agricultural disease.
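The CIoU regression term mentioned in this abstract augments plain IoU with a center-distance penalty and an aspect-ratio penalty. A minimal sketch of the standard CIoU loss (the Zheng et al., 2020 formulation; not code from this paper):

```python
import math

def ciou_loss(box_a, box_b):
    """Complete IoU (CIoU) loss between two boxes (x1, y1, x2, y2).

    CIoU = IoU - rho^2 / c^2 - alpha * v, and loss = 1 - CIoU, where
    rho is the distance between box centers, c is the diagonal of the
    smallest enclosing box, and v penalizes aspect-ratio mismatch.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared center distance over squared enclosing-box diagonal
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1))
        - math.atan((bx2 - bx1) / (by2 - by1))
    ) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - (iou - rho2 / c2 - alpha * v)
```

Unlike an IoU-only loss, this stays non-constant for disjoint boxes: the center-distance term keeps providing a gradient that pulls the predicted box toward the ground truth.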
50
Liu B, Liu L, Zhuo R, Chen W, Duan R, Wang G. A Dataset for Forestry Pest Identification. FRONTIERS IN PLANT SCIENCE 2022; 13:857104. [PMID: 35909784 PMCID: PMC9331284 DOI: 10.3389/fpls.2022.857104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 06/13/2022] [Indexed: 06/15/2023]
Abstract
The identification of forest pests is of great significance to preventing and controlling forest pest outbreaks. However, existing datasets mainly focus on common objects, which limits the application of deep learning techniques in specific fields (such as agriculture). In this paper, we collected images of forestry pests and constructed a dataset for forestry pest identification, called the Forestry Pest Dataset. It contains 31 categories of pests and their different forms. We conducted several mainstream object detection experiments on this dataset; the experimental results show that various models achieve good performance on it. We hope that our Forestry Pest Dataset will help researchers in the fields of pest control and pest detection in the future.
Affiliation(s)
- Bing Liu
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, China
- College of Computer Science and Technology, Jilin University, Changchun, China
- Luyang Liu
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, China
- Ran Zhuo
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, China
- Weidong Chen
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, China
- Rui Duan
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, China
- Guishen Wang
- School of Computer Science and Engineering, Changchun University of Technology, Changchun, China