1. Deng X, Qi L, Liu Z, Liang S, Gong K, Qiu G. Weed target detection at seedling stage in paddy fields based on YOLOX. PLoS One 2023; 18:e0294709. [PMID: 38091355 PMCID: PMC10718464 DOI: 10.1371/journal.pone.0294709]
Abstract
Weeds are one of the greatest threats to the growth of rice, and crop losses are greater in the early stage of rice growth. Traditional large-area spraying cannot selectively target weeds and can easily cause herbicide waste and environmental pollution. To move from large-area spraying to precision spraying in rice fields, the distribution of weeds must be detected quickly and efficiently. Benefiting from the rapid development of vision technology and deep learning, this study applies a deep-learning-driven computer vision method to rice field weed target detection. To address the need to identify small, dense targets at the rice seedling stage in paddy fields, this study proposes a weed target detection method based on YOLOX, which is composed of a CSPDarknet backbone network, a feature pyramid network (FPN) for enhanced feature extraction, and a YOLO Head detector. The CSPDarknet backbone extracts feature layers with dimensions of 80 × 80 pixels, 40 × 40 pixels and 20 × 20 pixels. The FPN fuses the features from these three scales, and the YOLO Head performs object classification and prediction-box regression. In performance comparisons of different models, including YOLOv3, YOLOv4-tiny, YOLOv5-s, SSD and several models of the YOLOX series (YOLOX-s, YOLOX-m, YOLOX-nano and YOLOX-tiny), the YOLOX-tiny model performs best. The mAP, F1, and recall values of the YOLOX-tiny model are 0.980, 0.95, and 0.983, respectively. Meanwhile, the intermediate variable memory generated during model calculation with YOLOX-tiny is only 259.62 MB, making it suitable for deployment on intelligent agricultural devices. However, although YOLOX-tiny performs best on the dataset in this paper, this does not hold in general.
The experimental results suggest that the method proposed in this paper can improve the model performance for the small target detection of sheltered weeds and dense weeds at the rice seedling stage in paddy fields. A weed target detection model suitable for embedded computing platforms is obtained by comparing different single-stage target detection models, thereby laying a foundation for the realization of unmanned targeted herbicide spraying performed by agricultural robots.
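The detection metrics quoted above (mAP, F1, recall) all derive from matching predicted boxes to ground-truth boxes by intersection-over-union (IoU) and counting true/false positives and false negatives. A minimal sketch of both computations (not the authors' code; the corner-coordinate box format is an assumption):

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_metrics(tp, fp, fn):
    # Precision, recall and F1 from true/false positive and false negative counts.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

A prediction is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a chosen threshold (commonly 0.5).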
Affiliation(s)
- Xiangwu Deng
- College of Electronic Information Engineering, Guangdong University of Petrochemical Technology, Maoming, China
- Long Qi
- College of Engineering, South China Agricultural University, Guangzhou, China
- Zhuwen Liu
- College of Electronic Information Engineering, Guangdong University of Petrochemical Technology, Maoming, China
- Song Liang
- College of Electronic Information Engineering, Guangdong University of Petrochemical Technology, Maoming, China
- Kunsong Gong
- College of Electronic Information Engineering, Guangdong University of Petrochemical Technology, Maoming, China
- Guangjun Qiu
- Institute of Facility Agriculture of Guangdong Academy of Agricultural Sciences, Guangzhou, China
- Life Science and Technology School, Lingnan Normal University, Zhanjiang, China
2. Sapkota R, Stenger J, Ostlie M, Flores P. Towards reducing chemical usage for weed control in agriculture using UAS imagery analysis and computer vision techniques. Sci Rep 2023; 13:6548. [PMID: 37085558 PMCID: PMC10121711 DOI: 10.1038/s41598-023-33042-0]
Abstract
Currently, applying a uniform distribution of chemical herbicide through a sprayer without considering the spatial distribution of crops and weeds is the most common method of controlling weeds in commercial agricultural production systems. This kind of weed management practice leads to excessive amounts of chemical herbicides being applied in a given field. The objective of this study was to perform site-specific weed control (SSWC) in a corn field by: (1) using an unmanned aerial system (UAS) to map the spatial distribution of weeds in the field; (2) creating a prescription map based on the weed distribution map; and (3) spraying the field using the prescription map and a commercial-size sprayer. In this study, we assumed that plants growing outside the corn rows are weeds and need to be controlled. The first step in implementing such an approach is identifying the corn rows. For that, we propose a Crop Row Identification algorithm, a computer vision algorithm that identifies corn rows in UAS imagery. After being identified, the corn rows were removed from the imagery, and the remaining vegetation fraction was classified as weeds. Based on that information, a grid-based weed prescription map was created and the weed control application was implemented through a commercial-size sprayer. The decision to spray herbicide on a particular grid cell was based on the presence of weeds in that cell: all grid cells that contained at least one weed were sprayed, while those free of weeds were not. Using our SSWC approach, we were able to save 26.2% of the acreage from being sprayed with herbicide compared to the current method.
This study presents a full workflow, from UAS image collection to field weed control implementation using a commercial-size sprayer, and it shows that some level of savings can potentially be obtained even under high weed infestation, which might provide an opportunity to reduce chemical usage in corn production systems.
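The grid-based decision rule described here — spray a cell if it contains at least one detected weed, otherwise skip it — can be sketched as follows (a toy illustration, not the authors' implementation; cell size, point coordinates, and the savings calculation are assumptions):

```python
def prescription_map(weed_points, field_w, field_h, cell):
    # Boolean grid: True means spray this cell (it contains >= 1 weed point).
    cols, rows = int(field_w // cell), int(field_h // cell)
    spray = [[False] * cols for _ in range(rows)]
    for x, y in weed_points:
        col = min(int(x // cell), cols - 1)
        row = min(int(y // cell), rows - 1)
        spray[row][col] = True
    return spray

def percent_saved(spray):
    # Percentage of grid cells left unsprayed.
    cells = [c for row in spray for c in row]
    return 100.0 * cells.count(False) / len(cells)
```

For example, three weed detections clustered in two cells of a 4 × 4 grid would leave 14 of 16 cells unsprayed, i.e. 87.5% of the area saved.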
Affiliation(s)
- Ranjan Sapkota
- Center for Precision and Automated Agricultural Systems, Washington State University, 24106 N. Bunn Rd, Prosser, WA, 99350, USA
- Agricultural and Biosystems Engineering, North Dakota State University, 1221 Albrecht Blvd, Fargo, ND, 58102, USA
- John Stenger
- Agricultural and Biosystems Engineering, North Dakota State University, 1221 Albrecht Blvd, Fargo, ND, 58102, USA
- Michael Ostlie
- NDSU Carrington Research Extension Center, Carrington, ND, 58421-0219, USA
- Paulo Flores
- Agricultural and Biosystems Engineering, North Dakota State University, 1221 Albrecht Blvd, Fargo, ND, 58102, USA
3. Tang Z, Jin Y, Brown PH, Park M. Estimation of tomato water status with photochemical reflectance index and machine learning: Assessment from proximal sensors and UAV imagery. Front Plant Sci 2023; 14:1057733. [PMID: 37089640 PMCID: PMC10117946 DOI: 10.3389/fpls.2023.1057733]
Abstract
Tracking plant water status is a critical step towards the adaptive precision irrigation management of processing tomatoes, one of the most important specialty crops in California. The photochemical reflectance index (PRI) from proximal sensors and high-resolution unmanned aerial vehicle (UAV) imagery provide an opportunity to monitor crop water status efficiently. Based on data from an experimental tomato field with intensive aerial and plant-based measurements, we developed random forest machine learning regression models to estimate tomato stem water potential (ψstem), using observations from proximal sensors and 12-band UAV imagery, respectively, along with weather data. The proximal sensor-based model estimation agreed well with the measured ψstem, with an R² of 0.74 and a mean absolute error (MAE) of 0.63 bars. The model included PRI, normalized difference vegetation index, vapor pressure deficit, and air temperature, and tracked the seasonal dynamics of ψstem well across different plots. A separate model, built with multiple vegetation indices (VIs) from UAV imagery and weather variables, had an R² of 0.81 and an MAE of 0.67 bars. The plant-level ψstem maps generated from UAV imagery closely represented the water status differences among plots under different irrigation treatments and also tracked the temporal change among flights. PRI was found to be the most important VI in both the proximal sensor- and UAV-based models, providing critical information on tomato plant water status. This study demonstrates that machine learning models can accurately estimate water status by integrating PRI, other VIs, and weather data, and can thus facilitate data-driven irrigation management for processing tomatoes.
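PRI, the index the models above rely on most, is conventionally computed as a normalized difference of narrow-band reflectance at 531 nm and 570 nm. A one-line sketch of the standard formulation (the abstract does not give the formula; this is the widely used definition, not code from the paper):

```python
def pri(r531, r570):
    # Photochemical Reflectance Index: (R531 - R570) / (R531 + R570).
    # r531, r570 are reflectance values at 531 nm and 570 nm.
    return (r531 - r570) / (r531 + r570)
```

Such per-pixel (or per-plant) index values, together with weather variables, form the feature vectors fed to the regression model.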
Affiliation(s)
- Zhehan Tang
- Department of Land, Air and Water Resources, University of California, Davis, Davis, CA, United States
- Yufang Jin
- Department of Land, Air and Water Resources, University of California, Davis, Davis, CA, United States
- Patrick H. Brown
- Department of Plant Sciences, University of California, Davis, Davis, CA, United States
- Meerae Park
- Department of Plant Sciences, University of California, Davis, Davis, CA, United States
4. Su J, Zhu X, Li S, Chen W. AI meets UAVs: A survey on AI empowered UAV perception systems for precision agriculture. Neurocomputing 2023; 518:242-270. [DOI: 10.1016/j.neucom.2022.11.020]
5.
Abstract
In contrast to the rapid advances made in plant genotyping, plant phenotyping is considered a bottleneck in plant science. This has promoted high-throughput plant phenotyping (HTP) studies, resulting in an exponential increase in phenotyping-related publications. HTP technologies were originally developed for model plant species under controlled indoor environments, but the focus has since shifted to crops in the field. Although field HTP is much more difficult to conduct than HTP under controlled environments, owing to unstable environmental conditions, recent advances in HTP technology have allowed these difficulties to be overcome, enabling rapid, efficient, non-destructive, non-invasive, quantitative, repeatable, and objective phenotyping. Recent HTP developments have been accelerated by advances in data analysis, sensors, and robot technologies, including machine learning, image analysis, three-dimensional (3D) reconstruction, image sensors, laser sensors, environmental sensors, and drones, along with high-speed computational resources. This article provides an overview of recent HTP technologies, focusing mainly on canopy-based phenotypes of major crops, such as canopy height, canopy coverage, canopy biomass, and canopy stressed appearance, in addition to crop organ detection and counting in the field. Current topics in field HTP are also presented, followed by a discussion of the low rates of adoption of HTP in practical breeding programs.
Affiliation(s)
- Seishi Ninomiya
- Graduate School of Agriculture and Life Sciences, The University of Tokyo, Nishitokyo, Tokyo 188-0002, Japan
- Plant Phenomics Research Center, Nanjing Agricultural University, Nanjing, China
6. Reedha R, Dericquebourg E, Canals R, Hafiane A. Transformer Neural Network for Weed and Crop Classification of High Resolution UAV Images. Remote Sensing 2022; 14:592. [DOI: 10.3390/rs14030592]
Abstract
Monitoring crops and weeds is a major challenge in agriculture and food production today. Weeds compete directly with crops for moisture, nutrients, and sunlight, and therefore have a significant negative impact on crop yield if not sufficiently controlled. Weed detection and mapping is an essential step in weed control. Many existing research studies recognize the importance of remote sensing systems and machine learning algorithms in weed management. Deep learning approaches have shown good performance in many agriculture-related remote sensing tasks, such as plant classification and disease detection. However, despite their success, these approaches still face many challenges, such as high computational cost, the need for large labelled datasets, and intra-class discrimination (during the growing phase, weeds and crops share many visual attributes, such as color, texture, and shape). This paper aims to show that attention-based deep networks are a promising approach to address the aforementioned problems in the context of weed and crop recognition with a drone system. The specific objective of this study was to investigate vision transformers (ViT) and apply them to plant classification in unmanned aerial vehicle (UAV) images. Data were collected using a high-resolution camera mounted on a UAV deployed in beet, parsley and spinach fields. The acquired data were augmented to build a larger dataset; since ViT requires large sample sets for better performance, we also adopted a transfer learning strategy. Experiments were set out to assess the effect of training and validation dataset size, as well as the effect of increasing the test set while reducing the training set. The results show that with a small labeled training dataset, the ViT models outperform state-of-the-art models such as EfficientNet and ResNet. These results are promising and show the potential of ViT to be applied to a wide range of remote sensing image analysis tasks.
7. Etienne A, Ahmad A, Aggarwal V, Saraswat D. Deep Learning-Based Object Detection System for Identifying Weeds Using UAS Imagery. Remote Sensing 2021; 13:5182. [DOI: 10.3390/rs13245182]
Abstract
Current methods of broadcast herbicide application cause negative environmental and economic impacts. Computer vision methods, specifically those related to object detection, have been reported to aid site-specific weed management procedures for targeted herbicide application within a field. However, a major challenge in developing a weed detection system is the requirement for a properly annotated database to differentiate between weeds and crops under field conditions. This research involved creating an annotated database of 374 red, green, and blue (RGB) color images organized into monocot and dicot weed classes. The images were acquired from corn and soybean research plots located in north-central Indiana using an unmanned aerial system (UAS) flown at 30 m and 10 m heights above ground level (AGL). A total of 25,560 individual weed instances were manually annotated. The annotated database consisted of four different subsets (Training Image Sets 1-4) used to train the You Only Look Once version 3 (YOLOv3) deep learning model in five separate experiments. The best results were observed with Training Image Set 4, consisting of images acquired at 10 m AGL. For monocot and dicot weeds, respectively, average precision (AP) scores of 91.48% and 86.13% were observed at a 25% IoU threshold (AP @ T = 0.25), and 63.37% and 45.13% at a 50% IoU threshold (AP @ T = 0.5). This research has demonstrated the need to develop large, annotated weed databases to evaluate deep learning models for weed identification under field conditions. It also affirms the findings of other limited research studies utilizing object detection for weed identification under field conditions.
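The average precision reported at each IoU threshold is the area under the precision-recall curve after detections are ranked by confidence. A compact sketch of that computation using the raw (uninterpolated) curve (illustrative only, not the authors' evaluation code, which may use an interpolated variant):

```python
def average_precision(detections, n_ground_truth):
    # detections: (confidence, is_true_positive) pairs for one class;
    # a detection is a true positive if its IoU with an unmatched
    # ground-truth box exceeds the chosen threshold.
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, is_tp in sorted(detections, key=lambda d: -d[0]):
        tp, fp = tp + is_tp, fp + (not is_tp)
        recall = tp / n_ground_truth
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle under PR curve
        prev_recall = recall
    return ap
```

Raising the IoU threshold reclassifies loosely localized detections from true to false positives, which is why AP @ T = 0.5 is markedly lower than AP @ T = 0.25 here.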
8. V P, A D, J P, A I, A B, I V, O K. To the question about remote sensing of the earth for precision farming tasks and assessment of the consequences of techno-environmental events. ARTIF INTELL 2021. [DOI: 10.15407/jai2021.02.096]
Abstract
Some issues in the use of unmanned aircraft and space vehicles for monitoring the consequences of technical and environmental events and for precision farming are considered. The proposed technology is aimed at improving the recognition accuracy of infrastructure objects while obtaining the numerical values of their 3D coordinates. The aim of the research is to improve the quality of monitoring using neural network identification and classification of objects in multi-zone satellite images and in images obtained from unmanned aerial vehicles (UAVs). The research includes both theoretical work and applied problem solving. The mathematical basis of the image processing is computer-based image recognition. Practical research is based on experimentation, software implementation, and the testing of algorithms and technology. An effective method of video surveillance of the territory has been improved. The task of the authors' research, an area in which they have prior experience, is to improve the accuracy of recognizing objects on the earth's surface (specific infrastructure objects, the sky, and the vegetation state of agricultural land). The problem is addressed simultaneously in two directions. In the first direction, the technical result is ensured by a UAV equipped with two video cameras. The second direction is a method for the joint computer processing of digital and analog images obtained from UAVs, as well as quasi-simultaneous and reusable multi-zone satellite images. A new result of the research is the developed data structure for storing the model of the recognition process, which allows dissimilar characteristics and membership functions of different types to be stored jointly in the same tables.
9. Zhang J, Maleski J, Jespersen D, Waltz FC, Rains G, Schwartz B. Unmanned Aerial System-Based Weed Mapping in Sod Production Using a Convolutional Neural Network. Front Plant Sci 2021; 12:702626. [PMID: 34899768 PMCID: PMC8660967 DOI: 10.3389/fpls.2021.702626]
Abstract
Weeds are a persistent problem on sod farms, and herbicides to control different weed species are one of the largest chemical inputs. Recent advances in unmanned aerial systems (UAS) and artificial intelligence provide opportunities for weed mapping on sod farms. This study investigates weed type composition and area through both ground- and UAS-based weed surveys and trains a convolutional neural network (CNN) for identifying and mapping weeds in sod fields using UAS-based imagery and a high-level application programming interface (API) implementation (Fastai) of the PyTorch deep learning library. As indicated by recall, the performance of the CNN was overall similar to that of human eyes, and in some classes (broadleaf and spurge) better. In general, the CNN detected broadleaf, grass weeds, spurge, sedge, and no weeds at precisions of 0.68-0.87, 0.57-0.82, 0.68-0.83, 0.66-0.90, and 0.80-0.88, respectively, when using UAS images at 0.57-1.28 cm per pixel resolution. Recall ranges for the five classes were 0.78-0.93, 0.65-0.87, 0.82-0.93, 0.52-0.79, and 0.94-0.99. Additionally, this study demonstrates that a CNN can achieve precision and recall above 0.9 in detecting different types of weeds during turf establishment when the weeds are mature. The CNN is limited by image resolution, and more than one model may be needed in practice to improve the overall performance of weed mapping.
Affiliation(s)
- Jing Zhang
- Department of Crop and Soil Sciences, University of Georgia, Tifton, GA, United States
- Jerome Maleski
- Department of Crop and Soil Sciences, University of Georgia, Tifton, GA, United States
- David Jespersen
- Department of Crop and Soil Sciences, University of Georgia, Griffin, GA, United States
- F. C. Waltz
- Department of Crop and Soil Sciences, University of Georgia, Griffin, GA, United States
- Glen Rains
- Department of Entomology, University of Georgia, Tifton, GA, United States
- Brian Schwartz
- Department of Crop and Soil Sciences, University of Georgia, Tifton, GA, United States
10. Latif S, Driss M, Boulila W, Huma ZE, Jamal SS, Idrees Z, Ahmad J. Deep Learning for the Industrial Internet of Things (IIoT): A Comprehensive Survey of Techniques, Implementation Frameworks, Potential Applications, and Future Directions. Sensors (Basel) 2021; 21:7518. [PMID: 34833594 PMCID: PMC8625089 DOI: 10.3390/s21227518]
Abstract
The Industrial Internet of Things (IIoT) refers to the use of smart sensors, actuators, fast communication protocols, and efficient cybersecurity mechanisms to improve industrial processes and applications. In large industrial networks, smart devices generate large amounts of data, and thus IIoT frameworks require intelligent, robust techniques for big data analysis. Artificial intelligence (AI) and deep learning (DL) techniques produce promising results in IIoT networks due to their intelligent learning and processing capabilities. This survey article assesses the potential of DL in IIoT applications and presents a brief architecture of IIoT with key enabling technologies. Several well-known DL algorithms are then discussed along with their theoretical backgrounds, together with several software and hardware frameworks for DL implementations. Potential deployments of DL techniques in IIoT applications are briefly discussed. Finally, this survey highlights significant challenges and directions for future research.
Affiliation(s)
- Shahid Latif
- School of Information Science and Engineering, Fudan University, Shanghai 200433, China
- Maha Driss
- Security Engineering Lab, Prince Sultan University, Riyadh 12435, Saudi Arabia
- RIADI Laboratory, National School of Computer Science, University of Manouba, Manouba 2010, Tunisia
- Wadii Boulila
- RIADI Laboratory, National School of Computer Science, University of Manouba, Manouba 2010, Tunisia
- Robotics and Internet of Things Lab, Prince Sultan University, Riyadh 12435, Saudi Arabia
- Zil e Huma
- Department of Electrical Engineering, Institute of Space Technology, Islamabad 44000, Pakistan
- Sajjad Shaukat Jamal
- Department of Mathematics, College of Science, King Khalid University, Abha 61413, Saudi Arabia
- Zeba Idrees
- School of Information Science and Engineering, Fudan University, Shanghai 200433, China
- Jawad Ahmad
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
11. Rosle R, Che’ya NN, Ang Y, Rahmat F, Wayayok A, Berahim Z, Fazlil Ilahi WF, Ismail MR, Omar MH. Weed Detection in Rice Fields Using Remote Sensing Technique: A Review. Applied Sciences 2021; 11:10701. [DOI: 10.3390/app112210701]
Abstract
This paper reviews weed problems in agriculture and how remote sensing techniques can detect weeds in rice fields. Weed detection through traditional practices is compared with automated detection using remote sensing platforms. The ideal stage for controlling weeds in rice fields is highlighted, and the types of weeds usually found in paddy fields are listed. The paper discusses weed detection using remote sensing techniques and the algorithms commonly used to differentiate weeds from crops. Because weed detection in rice fields using remote sensing platforms is still in its early stages, weed detection in other crops is also discussed. Results show that machine learning (ML) and deep learning (DL) remote sensing techniques have successfully produced highly accurate maps for detecting weeds in crops. This technology therefore positively impacts weed management in many aspects, especially from an economic perspective, and its implementation in agricultural development could be extended further.
12. Rakhmatulin I, Kamilaris A, Andreasen C. Deep Neural Networks to Detect Weeds from Crops in Agricultural Environments in Real-Time: A Review. Remote Sensing 2021; 13:4486. [DOI: 10.3390/rs13214486]
Abstract
Automation, including machine learning technologies, is becoming increasingly crucial in agriculture to increase productivity. Machine vision is one of the most popular applications of machine learning and has been widely used where advanced automation and control are required. The trend has shifted from classical image processing and machine learning techniques to modern artificial intelligence (AI) and deep learning (DL) methods. Based on large training datasets and pre-trained models, DL-based methods have proven more accurate than traditional techniques. Machine vision has wide applications in agriculture, including the detection of weeds and pests in crops. Variation in lighting conditions, failures of transfer learning, and object occlusion constitute key challenges in this domain. Recently, DL has gained much attention due to its advantages in object detection, classification, and feature extraction. DL algorithms can automatically extract information from the large amounts of data used to model complex problems and are, therefore, suitable for detecting and classifying weeds and crops. We present a systematic review of AI-based systems to detect weeds, emphasizing recent trends in DL. Various DL methods are discussed to clarify their overall potential, usefulness, and performance. This study indicates that several limitations obstruct the widespread adoption of AI/DL in commercial applications, and recommendations for overcoming these challenges are summarized.
13. Xu K, Zhu Y, Cao W, Jiang X, Jiang Z, Li S, Ni J. Multi-Modal Deep Learning for Weeds Detection in Wheat Field Based on RGB-D Images. Front Plant Sci 2021; 12:732968. [PMID: 34804085 PMCID: PMC8604282 DOI: 10.3389/fpls.2021.732968]
Abstract
Single-modal images carry limited information for feature representation, and RGB images fail to detect grass weeds in wheat fields because of their similarity to wheat in shape. We propose a framework based on multi-modal information fusion for the accurate detection of weeds in wheat fields in a natural environment, overcoming the limitations of a single modality in weed detection. Firstly, we recode the single-channel depth image into a new three-channel image with the same structure as an RGB image, which is suitable for feature extraction by a convolutional neural network (CNN). Secondly, multi-scale object detection is realized by fusing the feature maps output by different convolutional layers. The three-channel network structure is designed to take into account the independence of the RGB and depth information as well as the complementarity of multi-modal information, and integrated learning is carried out by weight allocation at the decision level to realize the effective fusion of multi-modal information. The experimental results show that, compared with weed detection based on RGB images alone, the accuracy of our method is significantly improved. Experiments with integrated learning show a mean average precision (mAP) of 36.1% for grass weeds and 42.9% for broad-leaf weeds, with an overall detection precision, as indicated by intersection over ground truth (IoG), of 89.3%, at RGB and depth image weights of α = 0.4 and β = 0.3. The results suggest that our method can accurately detect the dominant weed species in wheat fields, and that multi-modal fusion can effectively improve object detection performance.
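The decision-level weight allocation mentioned above (α for the RGB branch, β for the depth branch) can be read as a weighted combination of per-branch confidence scores. A schematic sketch under the assumption that a third weight covers the remaining fused RGB-D branch — the abstract gives only α = 0.4 and β = 0.3, so the exact fusion rule here is illustrative, not the authors' design:

```python
def fuse_decision(rgb_conf, depth_conf, fused_conf, alpha=0.4, beta=0.3):
    # Weighted decision-level fusion of branch confidence scores.
    # gamma = 1 - alpha - beta is assumed to weight the remaining branch.
    gamma = 1.0 - alpha - beta
    return alpha * rgb_conf + beta * depth_conf + gamma * fused_conf
```

The fused score would then be thresholded (or fed to non-maximum suppression) like any single-branch detection confidence.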
Collapse
Affiliation(s)
- Ke Xu
- College of Agriculture, Nanjing Agricultural University, Nanjing, China
- National Engineering and Technology Center for Information Agriculture, Nanjing, China
- Engineering Research Center of Smart Agriculture, Ministry of Education, Nanjing, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, China
- Jiangsu Collaborative Innovation Center for the Technology and Application of Internet of Things, Nanjing, China
| | - Yan Zhu
- College of Agriculture, Nanjing Agricultural University, Nanjing, China
- National Engineering and Technology Center for Information Agriculture, Nanjing, China
- Engineering Research Center of Smart Agriculture, Ministry of Education, Nanjing, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, China
- Jiangsu Collaborative Innovation Center for the Technology and Application of Internet of Things, Nanjing, China
| | - Weixing Cao
- College of Agriculture, Nanjing Agricultural University, Nanjing, China
- National Engineering and Technology Center for Information Agriculture, Nanjing, China
- Engineering Research Center of Smart Agriculture, Ministry of Education, Nanjing, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, China
- Jiangsu Collaborative Innovation Center for the Technology and Application of Internet of Things, Nanjing, China
| | - Xiaoping Jiang
- College of Agriculture, Nanjing Agricultural University, Nanjing, China
- National Engineering and Technology Center for Information Agriculture, Nanjing, China
- Engineering Research Center of Smart Agriculture, Ministry of Education, Nanjing, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, China
- Jiangsu Collaborative Innovation Center for the Technology and Application of Internet of Things, Nanjing, China
| | - Zhijian Jiang
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Shuailong Li
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Jun Ni
- College of Agriculture, Nanjing Agricultural University, Nanjing, China
- National Engineering and Technology Center for Information Agriculture, Nanjing, China
- Engineering Research Center of Smart Agriculture, Ministry of Education, Nanjing, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, China
- Jiangsu Collaborative Innovation Center for the Technology and Application of Internet of Things, Nanjing, China
|
14
|
de Freitas Souza M, Monteiro AL, Silva DV, Silva TS, de Melo SB, Barros Júnior AP, Fernandes BCC, Mendonça V. Machine learning models as an alternative to determine productivity losses caused by weeds. Pest Manag Sci 2021; 77:5072-5085. [PMID: 34227226 DOI: 10.1002/ps.6547] [Received: 01/20/2021] [Revised: 05/03/2021] [Accepted: 07/05/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND Weed control can be economically viable if implemented at the time necessary to minimize interference. Empirical mathematical models have been used to determine when to start weed control in many crops. However, empirical models have a low capacity to generalize across different scenarios. Computational advances have facilitated the implementation of supervised machine learning models, such as artificial neural networks (ANNs), which are capable of capturing complex relationships. The objectives of our work were to evaluate the ability of ANNs to estimate yield losses in onion (the model crop) due to weed interference and to compare them with multiple linear regression (MLR) and empirical models. RESULTS MLR models constructed from non-destructive and destructive methods show R2 values between 0.75 and 0.82 and root mean square error (RMSE) values between 13.0% and 19.0% during the testing step. The ANNs have higher R2 (above 0.95) and lower RMSE (below 6.95%) than the MLR and empirical models in both the training and testing steps. ANNs considering only the coexistence period and cropping system perform similarly to the MLR models. However, inserting variables related to weed density (non-destructive ANN) or fresh matter (destructive ANN) increases the predictive capacity of the networks to close to 99% accuracy. CONCLUSION The best-performing ANNs can indicate when to begin weed control, since they can accurately estimate losses due to competition. These results encourage future studies implementing ANNs based on computer vision to extract information about the weed community.
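The comparison above hinges on modelling a proportion that lies strictly in (0, 1). A minimal maximum-likelihood beta regression with a logit link can be sketched as below; this is a generic illustration with SciPy, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def beta_reg_fit(X, y):
    """Fit a logit-link beta regression by maximum likelihood.

    y must lie strictly in (0, 1); X is an (n, p) design matrix
    (include a column of ones for the intercept).
    """
    n, p = X.shape

    def nll(params):
        coef, log_phi = params[:p], params[p]
        mu = expit(X @ coef)       # logit link for the mean
        phi = np.exp(log_phi)      # precision parameter, kept positive
        a, b = mu * phi, (1 - mu) * phi
        # negative beta log-likelihood
        return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                       + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

    res = minimize(nll, x0=np.zeros(p + 1), method="BFGS")
    return res.x[:p], np.exp(res.x[p])
```

The mean is parameterized through the logit link, so fitted values are automatically confined to (0, 1), which is exactly the property that makes beta regression suitable for loss proportions.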
Affiliation(s)
- Matheus de Freitas Souza
- Department of Agronomic and Forestry Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Plant Science Center, Mossoró, Brazil
- Alex Lima Monteiro
- Department of Agronomic and Forestry Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Plant Science Center, Mossoró, Brazil
- Daniel Valadão Silva
- Department of Agronomic and Forestry Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Plant Science Center, Mossoró, Brazil
- Tatiane Severo Silva
- Department of Agronomic and Forestry Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Plant Science Center, Mossoró, Brazil
- Stefeson Bezerra de Melo
- Department of Exact, Technological and Human Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Angicos, Brazil
- Aurélio Paes Barros Júnior
- Department of Agronomic and Forestry Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Plant Science Center, Mossoró, Brazil
- Bruno Caio Chaves Fernandes
- Department of Agronomic and Forestry Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Plant Science Center, Mossoró, Brazil
- Vander Mendonça
- Department of Agronomic and Forestry Sciences, Universidade Federal Rural do Semi-Árido - UFERSA, Plant Science Center, Mossoró, Brazil
|
15
|
Liu J, Xiang J, Jin Y, Liu R, Yan J, Wang L. Boost Precision Agriculture with Unmanned Aerial Vehicle Remote Sensing and Edge Intelligence: A Survey. Remote Sensing 2021; 13:4387. [DOI: 10.3390/rs13214387] [Indexed: 12/20/2022]
Abstract
In recent years, unmanned aerial vehicles (UAVs) have emerged as a popular and cost-effective technology to capture high spatial and temporal resolution remote sensing (RS) images for a wide range of precision agriculture applications, which can help reduce costs and environmental impacts by providing detailed agricultural information to optimize field practices. Furthermore, deep learning (DL) has been successfully applied as an intelligent tool in agricultural applications such as weed detection and crop pest and disease detection. However, most DL-based methods place high computation, memory, and network demands on resources. Cloud computing can increase processing efficiency with high scalability and low cost, but it results in high latency and great pressure on network bandwidth. The emergence of edge intelligence, although still in its early stages, provides a promising solution for artificial intelligence (AI) applications on intelligent edge devices at the edge of the network, close to the data sources. These devices have built-in processors enabling onboard analytics or AI (e.g., UAVs and Internet of Things gateways). Therefore, in this paper, a comprehensive survey on the latest developments in precision agriculture with UAV RS and edge intelligence is conducted for the first time.
The major insights are as follows: (a) in terms of UAV systems, small or light, fixed-wing or industrial rotor-wing UAVs are widely used in precision agriculture; (b) sensors on UAVs can provide multi-source datasets, but there are only a few public UAV datasets for intelligent precision agriculture, mainly from RGB sensors and a few from multispectral and hyperspectral sensors; (c) DL-based UAV RS methods can be categorized into classification, object detection, and segmentation tasks, with convolutional neural networks and recurrent neural networks being the most commonly used architectures; (d) cloud computing is a common solution for UAV RS data processing, while edge computing brings the computing close to the data sources; (e) edge intelligence is the convergence of artificial intelligence and edge computing, in which model compression, especially parameter pruning and quantization, is the most important and most widely used technique at present, and typical edge resources include central processing units, graphics processing units, and field-programmable gate arrays.
|
16
|
Lan Y, Huang K, Yang C, Lei L, Ye J, Zhang J, Zeng W, Zhang Y, Deng J. Real-Time Identification of Rice Weeds by UAV Low-Altitude Remote Sensing Based on Improved Semantic Segmentation Model. Remote Sensing 2021; 13:4370. [DOI: 10.3390/rs13214370] [Indexed: 01/19/2023]
Abstract
Real-time analysis of UAV low-altitude remote sensing images at airborne terminals facilitates the timely monitoring of weeds in farmland. Aiming at the real-time identification of rice weeds by UAV low-altitude remote sensing, two improved identification models, MobileNetV2-UNet and FFB-BiSeNetV2, were proposed based on the semantic segmentation models U-Net and BiSeNetV2, respectively. The MobileNetV2-UNet model focuses on reducing the parameter count of the original model, while the FFB-BiSeNetV2 model focuses on improving its segmentation accuracy. In this study, we first tested and compared the segmentation accuracy and operating efficiency of the models before and after improvement on a computer platform, then transplanted the improved models to the embedded hardware platform Jetson AGX Xavier and used TensorRT to optimize the model structure and improve inference speed. Finally, the real-time segmentation performance of the two improved models on rice weeds was further verified with collected low-altitude remote sensing video data. The results show that, on the computer platform, the MobileNetV2-UNet model reduced the number of network parameters, model size, and floating-point calculations by 89.12%, 86.16%, and 92.6%, respectively, and increased inference speed by 2.77 times compared with the U-Net model. The FFB-BiSeNetV2 model improved segmentation accuracy over the BiSeNetV2 model, achieving the highest pixel accuracy and mean intersection over union of 93.09% and 80.28%, respectively. On the embedded hardware platform, the optimized MobileNetV2-UNet and FFB-BiSeNetV2 models reached inference speeds of 45.05 FPS and 40.16 FPS for a single image, respectively, at FP16 weight precision, both meeting the performance requirements of real-time identification.
The two methods proposed in this study realize the real-time identification of rice weeds under UAV low-altitude remote sensing, providing a reference for the subsequent integrated operation of plant protection drones in real-time rice weed identification and precision spraying.
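The large parameter reduction reported for MobileNetV2-UNet comes mainly from replacing standard convolutions with depthwise-separable ones. A back-of-the-envelope comparison (generic layer sizes, not the paper's exact architecture) shows why:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Illustrative 3x3 layer with 256 input and 256 output channels
standard = conv_params(3, 256, 256)                   # 589,824 parameters
separable = depthwise_separable_params(3, 256, 256)   # 67,840 parameters
reduction = 1 - separable / standard                  # ~88.5% fewer parameters
```

For this single layer the saving (~88.5%) is already of the same order as the 89.12% whole-model reduction the abstract reports, since most of a segmentation backbone's parameters sit in such convolutions.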
|
17
|
Gibril MBA, Shafri HZM, Shanableh A, Al-ruzouq R, Wayayok A, Hashim SJ. Deep Convolutional Neural Network for Large-Scale Date Palm Tree Mapping from UAV-Based Images. Remote Sensing 2021; 13:2787. [DOI: 10.3390/rs13142787] [Indexed: 11/17/2022]
Abstract
Large-scale mapping of date palm trees is vital for their consistent monitoring and sustainable management, considering their substantial commercial, environmental, and cultural value. This study presents an automatic approach for the large-scale mapping of date palm trees from very-high-spatial-resolution (VHSR) unmanned aerial vehicle (UAV) datasets, based on a deep learning approach. A U-shaped convolutional neural network (U-Net), based on a deep residual learning framework, was developed for the semantic segmentation of date palm trees. A comprehensive set of labeled data was established to enable the training and evaluation of the proposed segmentation model and increase its generalization capability. The performance of the proposed approach was compared with that of various state-of-the-art fully convolutional networks (FCNs) with different encoder architectures, including U-Net (with a VGG-16 backbone), the pyramid scene parsing network, and two variants of DeepLab V3+. Experimental results showed that the proposed model outperformed the other FCNs on the validation and testing datasets. The generalizability evaluation of the proposed approach on a comprehensive and complex testing dataset exhibited higher classification accuracy and showed that date palm trees can be automatically mapped from VHSR UAV images with an F-score, mean intersection over union, precision, and recall of 91%, 85%, 91%, and 92%, respectively. The proposed approach provides an efficient deep learning architecture for the automatic mapping of date palm trees from VHSR UAV-based images.
|
18
|
Martos V, Ahmad A, Cartujo P, Ordoñez J. Ensuring Agricultural Sustainability through Remote Sensing in the Era of Agriculture 5.0. Applied Sciences 2021; 11:5911. [DOI: 10.3390/app11135911] [Indexed: 12/14/2022]
Abstract
Timely and reliable information about crop management, production, and yield is considered of great utility by stakeholders (e.g., national and international authorities, farmers, commercial units, etc.) to ensure food safety and security. By 2050, according to Food and Agriculture Organization (FAO) estimates, around 70% more agricultural production will be needed to fulfil the demands of the world population. Likewise, to meet the Sustainable Development Goals (SDGs), especially the second goal of “zero hunger”, potential technologies like remote sensing (RS) need to be efficiently integrated into agriculture. The application of RS is indispensable today for highly productive and sustainable agriculture. Therefore, the present study draws a general overview of RS technology, with a special focus on its principal platforms, i.e., satellites and remotely piloted aircrafts (RPAs), and the sensors used, in relation to the 5th industrial revolution. Since 1957, RS technology has found applications in agriculture through the use of satellite imagery, later enriched by the incorporation of RPAs, which are further pushing the boundaries of proficiency through the upgrading of sensors capable of higher spectral, spatial, and temporal resolutions. More prominently, wireless sensor technologies (WST) have streamlined real-time information acquisition and the programming of respective measures. Improved algorithms and sensors can not only add significant value to crop data acquisition but can also support simulations of yield, harvesting and irrigation periods, meteorological data, etc., by making use of cloud computing. RS technology generates huge datasets that necessitate the incorporation of artificial intelligence (AI) and big data to extract useful products, thereby augmenting the effectiveness and efficiency of agriculture to ensure its sustainability.
These technologies have made it possible to orient current research towards the estimation of plant physiological traits rather than structural parameters. Futuristic approaches for benefiting from these cutting-edge technologies are discussed in this study. This study can be helpful for researchers, academics, and young students aspiring to play a role in the achievement of sustainable agriculture.
|
19
|
Khan S, Tufail M, Khan MT, Khan ZA, Iqbal J, Wasim A. Real-time recognition of spraying area for UAV sprayers using a deep learning approach. PLoS One 2021; 16:e0249436. [PMID: 33793634 PMCID: PMC8016340 DOI: 10.1371/journal.pone.0249436] [Received: 12/21/2020] [Accepted: 03/18/2021] [Indexed: 11/18/2022]
Abstract
Agricultural production is vital for the stability of a country's economy. Controlling weed infestation through agrochemicals is necessary for increasing crop productivity. However, their excessive use has severe repercussions for the environment (damaging the ecosystem) and for the human operators exposed to them. The use of Unmanned Aerial Vehicles (UAVs) for performing the desired spraying has been proposed by several authors in the literature and is considered safer and more precise than conventional methods. The study's objective was therefore to develop an accurate real-time recognition system of spraying areas for UAVs, which is of utmost importance for UAV-based sprayers. A two-step target recognition system was developed using deep learning on images collected from a UAV. Coriander cropland was used to build a classifier for recognizing spraying areas. The developed deep learning system achieved an average F1 score of 0.955, with an average classifier recognition time of 3.68 ms. The developed system can be deployed in real time on UAV-based sprayers for accurate spraying.
Affiliation(s)
- Shahbaz Khan
- Department of Mechatronics Engineering, University of Engineering & Technology, Peshawar, Pakistan
- Advanced Robotics and Automation Laboratory, National Center of Robotics and Automation (NCRA), Rawalpindi, Pakistan
- Muhammad Tufail
- Department of Mechatronics Engineering, University of Engineering & Technology, Peshawar, Pakistan
- Advanced Robotics and Automation Laboratory, National Center of Robotics and Automation (NCRA), Rawalpindi, Pakistan
- Muhammad Tahir Khan
- Department of Mechatronics Engineering, University of Engineering & Technology, Peshawar, Pakistan
- Advanced Robotics and Automation Laboratory, National Center of Robotics and Automation (NCRA), Rawalpindi, Pakistan
- Zubair Ahmad Khan
- Department of Mechatronics Engineering, University of Engineering & Technology, Peshawar, Pakistan
- Javaid Iqbal
- College of Electrical & Mechanical Engineering (CEME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Arsalan Wasim
- Department of Electrical Engineering, Hitec University, Taxila, Pakistan
|
20
|
Kaivosoja J, Hautsalo J, Heikkinen J, Hiltunen L, Ruuttunen P, Näsi R, Niemeläinen O, Lemsalu M, Honkavaara E, Salonen J. Reference Measurements in Developing UAV Systems for Detecting Pests, Weeds, and Diseases. Remote Sensing 2021; 13:1238. [DOI: 10.3390/rs13071238] [Indexed: 01/08/2023]
Abstract
The development of UAV (unmanned aerial vehicle) imaging technologies for precision farming applications is rapid, and new studies are published frequently. In cases where measurements are based on aerial imaging, ground truth or reference data are needed in order to develop reliable applications. However, in several precision farming use cases, such as pest, weed, and disease detection, the reference data can be subjective or relatively difficult to capture. Furthermore, the collection of reference data is usually laborious and time consuming, and it appears difficult to develop generalisable solutions for these areas. This review examines previous research on detecting and mapping pests, weeds, and diseases with UAV imaging in the precision farming context, with emphasis on the applied reference measurement techniques. The majority of the reviewed studies utilised subjective visual observations of UAV images, and only a few applied in situ measurements. The review concludes that there is a lack of quantitative and repeatable reference measurement solutions in the areas of mapping pests, weeds, and diseases, and that the results the studies present should be interpreted in light of the references applied. A future option could be the use of synthetic data as a reference.
|
21
|
Kim SB, Kim DS, Mo X. An image segmentation technique with statistical strategies for pesticide efficacy assessment. PLoS One 2021; 16:e0248592. [PMID: 33720980 DOI: 10.1371/journal.pone.0248592] [Received: 10/25/2020] [Accepted: 03/01/2021] [Indexed: 11/25/2022]
Abstract
Image analysis is a useful technique for evaluating the efficacy of a treatment for weed control. In this study, we address two practical challenges in such image analysis. First, it is challenging to accurately quantify the efficacy of a treatment when an entire experimental unit is not affected by the treatment. Second, the RGB codes used to identify weed growth may not be stable due to various surrounding factors, human errors, and unknown causes. To address the former challenge, image segmentation is applied; to address the latter, the proportion of weed area is adjusted under a beta regression model. Beta regression is a useful statistical method when the outcome variable (a proportion) ranges between zero and one. In this study, we evaluate the efficacy of 35% hydrogen peroxide (HP). Image segmentation was applied to separate the zone where the HP was directly applied (gray zone) from its surroundings (nongray zone). Weed growth was monitored for five days after the treatment, and beta regression was implemented to compare weed growth between the gray zone and the control group and between the nongray zone and the control group. The estimated treatment effect was substantially different after the implementation of image segmentation and the adjustment of the green area.
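The gray/nongray separation can be approximated with a simple per-pixel saturation rule, since bleached areas lose colour. The sketch below is a hypothetical stand-in for the paper's segmentation step (the threshold value and function name are assumptions, not taken from the paper):

```python
import numpy as np

def split_gray_zone(rgb, sat_thresh=0.15):
    """Label pixels as 'gray' (low colour saturation, e.g. where the
    treatment visibly bleached the canopy) vs 'nongray'.

    rgb: float array (H, W, 3) scaled to [0, 1]. The saturation proxy
    is (max - min) / max per pixel; the 0.15 threshold is hypothetical.
    """
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    return sat < sat_thresh  # True where the pixel is gray
```

In a real pipeline the resulting mask would define the two zones whose weed-area proportions then enter the beta regression separately.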
|
22
|
Larsen A, Hanigan I, Reich BJ, Qin Y, Cope M, Morgan G, Rappold AG. A deep learning approach to identify smoke plumes in satellite imagery in near-real time for health risk communication. J Expo Sci Environ Epidemiol 2021; 31:170-176. [PMID: 32719441 PMCID: PMC7796988 DOI: 10.1038/s41370-020-0246-y] [Received: 01/14/2020] [Revised: 04/23/2020] [Accepted: 06/29/2020] [Indexed: 05/22/2023]
Abstract
BACKGROUND Wildland fire (wildfire; bushfire) pollution contributes to poor air quality, a risk factor for premature death. The frequency and intensity of wildfires are expected to increase; improved tools for estimating exposure to fire smoke are vital. New-generation satellite-based sensors produce high-resolution spectral images, providing real-time information of surface features during wildfire episodes. Because of the vast size of such data, new automated methods for processing information are required. OBJECTIVE We present a deep fully convolutional neural network (FCN) for predicting fire smoke in satellite imagery in near-real time (NRT). METHODS The FCN identifies fire smoke using output from operational smoke identification methods as training data, leveraging validated smoke products in a framework that can be operationalized in NRT. We demonstrate this for a fire episode in Australia; the algorithm is applicable to any geographic region. RESULTS The algorithm has high classification accuracy (99.5% of pixels correctly classified on average) and precision (average intersection over union = 57.6%). SIGNIFICANCE The FCN algorithm has high potential as an exposure-assessment tool, capable of providing critical information to fire managers, health and environmental agencies, and the general public to prevent the health risks associated with exposure to hazardous smoke from wildland fires in NRT.
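The two reported metrics, pixel classification accuracy and intersection over union, are straightforward to compute for binary smoke masks. A generic sketch (not the authors' code) follows:

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel accuracy and intersection over union for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    acc = np.mean(pred == truth)                   # fraction of pixels matching
    inter = np.logical_and(pred, truth).sum()      # pixels both call smoke
    union = np.logical_or(pred, truth).sum()       # pixels either calls smoke
    iou = inter / union if union else 1.0          # empty masks agree trivially
    return acc, iou
```

Note how the abstract's pairing (99.5% accuracy but 57.6% IoU) is possible: when the positive class is rare, accuracy is dominated by the easy background pixels while IoU scores only the overlap of the positive regions.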
Affiliation(s)
- Alexandra Larsen
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
- Ivan Hanigan
- The University of Sydney, University Centre for Rural Health, School of Public Health, Sydney, NSW, Australia
- Centre for Air Pollution, Energy and Health Research (CAR), Woolcock Institute of Medical Research, Sydney, NSW, Australia
- Centre for Research and Action in Public Health, University of Canberra, Canberra, ACT, Australia
- Brian J Reich
- Department of Statistics, North Carolina State University, Raleigh, NC, USA
- Yi Qin
- Oceans and Atmosphere Research, Commonwealth Science and Industrial Research Organisation, Victoria, Australia
- Martin Cope
- Oceans and Atmosphere Research, Commonwealth Science and Industrial Research Organisation, Victoria, Australia
- Geoffrey Morgan
- The University of Sydney, University Centre for Rural Health, School of Public Health, Sydney, NSW, Australia
- Centre for Air Pollution, Energy and Health Research (CAR), Woolcock Institute of Medical Research, Sydney, NSW, Australia
- Ana G Rappold
- U.S. Environmental Protection Agency, Center for Public Health and Environmental Assessment, Office of Research and Development, Research Triangle Park, NC, USA.
|
23
|
Zou K, Chen X, Zhang F, Zhou H, Zhang C. A Field Weed Density Evaluation Method Based on UAV Imaging and Modified U-Net. Remote Sensing 2021; 13:310. [DOI: 10.3390/rs13020310] [Indexed: 11/16/2022]
Abstract
Weeds are one of the main factors affecting the yield and quality of agricultural products, and accurate evaluation of weed density is of great significance for field management, especially precision weeding. In this paper, a method for calculating and mapping weed density in the field is proposed. An unmanned aerial vehicle (UAV) was used to capture field images. The excess green minus excess red index, combined with the minimum-error threshold segmentation method, was used to separate green plants from bare land, and a modified U-Net was used to segment crops in the images. After removing the bare land and crops, images of the weeds were obtained. Weed density was evaluated as the ratio of weed area to total area in the segmented image. The accuracy of the green plant segmentation was 93.5%. For crop segmentation, the intersection over union (IoU) was 93.40%, and the segmentation time for a single image was 35.90 ms. Finally, the coefficient of determination between the UAV-evaluated weed density and the manually observed weed density was 0.94, and the root mean square error was 0.03. With the proposed method, the weed density of a field can be effectively evaluated from UAV images, providing critical information for precision weeding.
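The pipeline above reduces to a few array operations once a crop mask is available. The sketch below uses the standard excess-green-minus-excess-red definition with a fixed zero threshold in place of the paper's minimum-error threshold selection, and assumes the modified-U-Net crop mask is supplied externally:

```python
import numpy as np

def weed_density(rgb, crop_mask):
    """Weed density = weed area / total area, following the abstract's
    pipeline: ExG-ExR separates green plants from bare land, then the
    externally supplied crop mask is removed.

    rgb: float array (H, W, 3) in [0, 1]; crop_mask: bool (H, W).
    A fixed threshold of 0 stands in for minimum-error thresholding.
    """
    s = rgb.sum(axis=-1) + 1e-8                  # normalize chromatic coordinates
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    exg_exr = (2 * g - r - b) - (1.4 * r - g)    # ExG minus ExR
    green = exg_exr > 0                          # vegetation vs bare land
    weeds = np.logical_and(green, ~crop_mask)    # vegetation that is not crop
    return weeds.sum() / weeds.size
```

Mapping then amounts to evaluating this ratio per grid cell of the orthomosaic instead of over the whole image.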
|
24
|
Deng J, Zhong Z, Huang H, Lan Y, Han Y, Zhang Y. Lightweight Semantic Segmentation Network for Real-Time Weed Mapping Using Unmanned Aerial Vehicles. Applied Sciences 2020; 10:7132. [DOI: 10.3390/app10207132] [Indexed: 11/16/2022]
Abstract
The timely and efficient generation of weed maps is essential for weed control tasks and precise spraying applications. Based on the general concept of site-specific weed management (SSWM), many researchers have used unmanned aerial vehicle (UAV) remote sensing technology to monitor weed distributions, which can provide decision support information for precision spraying. However, image processing is mainly conducted offline, and the resulting time gap between image collection and spraying significantly limits the applications of SSWM. In this study, we conducted real-time image processing onboard a UAV to reduce the time gap between image collection and herbicide treatment. First, we established a hardware environment for real-time image processing that integrates map visualization, flight control, image collection, and real-time image processing onboard a UAV based on secondary development. Second, we exploited the proposed model design to develop a lightweight network architecture for weed mapping tasks. The proposed network architecture was evaluated and compared with mainstream semantic segmentation models. Results demonstrate that the proposed network outperforms contemporary networks in terms of efficiency with competitive accuracy. We also conducted optimization during the inference process: precision calibration was applied on both the desktop and embedded devices, reducing precision from FP32 to FP16. Experimental results demonstrate that this precision calibration further improves inference speed while maintaining reasonable accuracy. Our modified network architecture achieved an accuracy of 80.9% on the testing samples, and its inference speed was 4.5 fps on a Jetson TX2 module (Nvidia Corporation, Santa Clara, CA, USA), demonstrating its potential for practical agricultural monitoring and precise spraying applications.
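The FP32-to-FP16 precision calibration step can be illustrated without TensorRT: casting weights halves memory while introducing only a tiny numeric deviation at typical weight magnitudes, which is why accuracy is largely preserved. This is an illustrative NumPy sketch, not the deployed pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(0.0, 0.1, size=10_000).astype(np.float32)

# Post-training precision reduction: cast FP32 weights to FP16,
# analogous to building an FP16 inference engine (illustrative only).
weights_fp16 = weights.astype(np.float16)

# Memory halves; the numeric deviation stays tiny for values in a
# typical weight range.
memory_ratio = weights_fp16.nbytes / weights.nbytes
max_abs_err = np.abs(weights - weights_fp16.astype(np.float32)).max()
```

On hardware with native half-precision units (such as the Jetson modules mentioned in the abstract), the smaller tensors also double effective memory bandwidth, which is where most of the speedup comes from.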
|
25
|
Gée C, Denimal E. RGB Image-Derived Indicators for Spatial Assessment of the Impact of Broadleaf Weeds on Wheat Biomass. Remote Sensing 2020; 12:2982. [DOI: 10.3390/rs12182982] [Indexed: 12/26/2022]
Abstract
In precision agriculture, the development of proximal imaging systems embedded in autonomous vehicles makes it possible to explore new weed management strategies for site-specific plant treatment. Accurate monitoring of weeds while tracking wheat growth requires indirect measurements of leaf area index (LAI) and above-ground dry matter biomass (BM) at early growth stages. This article explores the potential of RGB images to assess crop-weed competition in a wheat (Triticum aestivum L.) crop by generating two new indicators: the weed pressure (WP) and the local wheat biomass production (δBMc). The fractional vegetation cover (FVC) of the crop and the weeds was automatically determined from the images with an SVM-RBF classifier using bag-of-visual-words vectors as inputs, based on a new vegetation index called MetaIndex, defined as a vote of six indices widely used in the literature. Beyond a simple map of weed infestation, the map of WP describes crop-weed competition. The map of δBMc, meanwhile, evaluates the local wheat above-ground biomass production and indicates potential stress. It is generated from the wheat FVC because the latter is highly correlated with LAI (r2 = 0.99) and BM (r2 = 0.93) obtained by destructive methods. By combining these two indicators, we aim to determine whether the origin of wheat stress is due to weeds or not. This approach opens new perspectives for monitoring weeds and their competition with the crop during growth using non-destructive, proximal sensing technologies in the early stages of development.
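The MetaIndex idea, a vote among vegetation indices, can be sketched as follows. The paper's six indices and their thresholds are not reproduced here; three common RGB indices with zero thresholds stand in as an illustrative reduced set:

```python
import numpy as np

def meta_index(rgb):
    """Majority vote of several RGB vegetation indices, in the spirit
    of the MetaIndex described in the abstract (reduced illustrative
    set: ExG, ExG-ExR, and GLI, each thresholded at zero)."""
    s = rgb.sum(axis=-1) + 1e-8                        # chromatic normalization
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    exg = 2 * g - r - b                                # excess green
    exgr = exg - (1.4 * r - g)                         # ExG minus ExR
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-8)     # green leaf index
    votes = (exg > 0).astype(int) + (exgr > 0).astype(int) + (gli > 0).astype(int)
    return votes >= 2   # pixel counts as vegetation if most indices agree
```

Voting makes the vegetation mask less sensitive to any single index failing under particular illumination or soil conditions, which is presumably the motivation behind combining six of them.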
|
26
|
Sapkota B, Singh V, Neely C, Rajan N, Bagavathiannan M. Detection of Italian Ryegrass in Wheat and Prediction of Competitive Interactions Using Remote-Sensing and Machine-Learning Techniques. Remote Sensing 2020; 12:2977. [DOI: 10.3390/rs12182977] [Indexed: 12/23/2022]
Abstract
Italian ryegrass (Lolium perenne ssp. multiflorum (Lam) Husnot) is a troublesome weed species in wheat (Triticum aestivum) production in the United States, severely affecting grain yields. Spatial mapping of ryegrass infestation in wheat fields and early prediction of its impact on yield can assist management decision making. In this study, unmanned aerial system (UAS)-based red, green and blue (RGB) imagery acquired at an early wheat growth stage at two different experimental sites was used for developing predictive models. Deep neural networks (DNNs) coupled with an extensive feature selection method were used to detect ryegrass in wheat and estimate ryegrass canopy coverage. Predictive models were developed by regressing early-season ryegrass canopy coverage (%) against end-of-season (at wheat maturity) biomass and seed yield of ryegrass, as well as biomass and grain yield reduction (%) of wheat. Italian ryegrass was detected with high accuracy (precision = 95.44 ± 4.27%, recall = 95.48 ± 5.05%, F-score = 95.56 ± 4.11%) using the best model, which included four features: hue, saturation, excess green index, and visible atmospheric resistant index. End-of-season ryegrass biomass was predicted with high accuracy (R2 = 0.87), whereas the other variables had moderate to high accuracy levels (R2 values of 0.74 for ryegrass seed yield, 0.73 for wheat biomass reduction, and 0.69 for wheat grain yield reduction). The methodology demonstrated in the current study shows great potential for mapping and quantifying ryegrass infestation and predicting its competitive response in wheat, allowing for timely management decisions.
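The four selected features can be computed directly from an RGB pixel. The sketch below uses the standard definitions of the excess green index (ExG) and the visible atmospherically resistant index (VARI); it is assumed, not confirmed by the abstract, that the paper uses these common forms:

```python
import colorsys

def pixel_features(r, g, b):
    """The four features the best model selected, for one RGB pixel
    with channels in [0, 1]: hue, saturation, excess green index
    (ExG), and visible atmospherically resistant index (VARI)."""
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    exg = 2 * g - r - b                               # standard ExG definition
    denom = g + r - b
    vari = (g - r) / denom if denom != 0 else 0.0     # standard VARI definition
    return [h, s, exg, vari]
```

In the paper's setting these per-pixel values would be aggregated over image patches before feeding the feature-selection step and the DNN.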
27
Osorio K, Puerto A, Pedraza C, Jamaica D, Rodríguez L. A Deep Learning Approach for Weed Detection in Lettuce Crops Using Multispectral Images. AgriEngineering 2020; 2:471-88. [DOI: 10.3390/agriengineering2030032]
Abstract
Weed management is one of the most important aspects of crop productivity; knowing the amount and the locations of weeds has been a problem that experts have faced for several decades. This paper presents three methods for weed estimation based on deep learning image processing in lettuce crops, and we compared them to visual estimations by experts. One method is based on support vector machines (SVM) using histograms of oriented gradients (HOG) as the feature descriptor. The second method is based on YOLOv3 (you only look once, v3), taking advantage of its robust architecture for object detection, and the third is based on Mask R-CNN (region-based convolutional neural network), which yields an instance segmentation for each individual plant. These methods were complemented with an NDVI (normalized difference vegetation index) background subtractor for removing non-photosynthetic objects. According to the chosen metrics, the machine and deep learning methods had F1-scores of 88%, 94%, and 94%, respectively, for crop detection. Subsequently, detected crops were turned into a binary mask and combined with the NDVI background subtractor in order to detect weeds in an indirect way. Once the weed image was obtained, the weed coverage percentage was calculated by classical image processing methods. Finally, these performances were compared with the estimations of a set of weed experts through a Bland–Altman plot, intraclass correlation coefficients (ICCs) and Dunn's test to obtain statistical measures for every machine-human pairing; we found that these methods improve accuracy in weed coverage estimation and minimize subjectivity in human-estimated data.
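The NDVI background-subtraction step described here can be sketched as follows; the threshold value is illustrative, not taken from the paper:

```python
import numpy as np

def ndvi_background_mask(nir, red, threshold=0.3):
    """NDVI = (NIR - Red) / (NIR + Red); pixels above the threshold are
    kept as photosynthetic vegetation, the rest are masked as background."""
    ndvi = (nir - red) / (nir + red + 1e-8)
    return ndvi > threshold
```

The resulting boolean mask is then intersected with the crop mask to isolate weed pixels indirectly, as the abstract describes.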
28
Veeranampalayam Sivakumar AN, Li J, Scott S, Psota E, Jhala AJ, Luck JD, Shi Y. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sensing 2020; 12:2136. [DOI: 10.3390/rs12132136]
Abstract
Mid- to late-season weeds that escape routine early-season weed management threaten agricultural production by creating a large number of seeds for several future growing seasons. Rapid and accurate detection of weed patches in the field is the first step of site-specific weed management. In this study, object detection-based convolutional neural network models were trained and evaluated on low-altitude unmanned aerial vehicle (UAV) imagery for mid- to late-season weed detection in soybean fields. The performance of two object detection models, Faster RCNN and the Single Shot Detector (SSD), was evaluated and compared in terms of weed detection performance, using mean Intersection over Union (IoU), and inference speed. It was found that the Faster RCNN model with 200 box proposals had weed detection performance similar to that of the SSD model in terms of precision, recall, F1 score, and IoU, as well as a similar inference time. The precision, recall, F1 score and IoU were 0.65, 0.68, 0.66 and 0.85 for Faster RCNN with 200 proposals, and 0.66, 0.68, 0.67 and 0.84 for SSD, respectively. However, the optimal confidence threshold of the SSD model was found to be much lower than that of the Faster RCNN model, which indicated that SSD might have lower generalization performance than Faster RCNN for mid- to late-season weed detection in soybean fields using UAV imagery. The performance of the object detection models was also compared with that of a patch-based CNN model. The Faster RCNN model yielded better weed detection performance than the patch-based CNN with and without overlap. The inference time of Faster RCNN was similar to that of the patch-based CNN without overlap, but significantly less than that of the patch-based CNN with overlap. Hence, Faster RCNN was found to be the best model in terms of weed detection performance and inference time among the different models compared in this study. This work is important in understanding the potential of, and identifying the algorithms for, on-farm, near real-time weed detection and management.
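IoU, the overlap measure used throughout this comparison, has a standard definition for two axis-aligned boxes; a minimal reference implementation (not the authors' code):

```python
def box_iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a chosen threshold; precision, recall and F1 follow from those counts.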
29
Sapkota B, Singh V, Cope D, Valasek J, Bagavathiannan M. Mapping and Estimating Weeds in Cotton Using Unmanned Aerial Systems-Borne Imagery. AgriEngineering 2020; 2:350-66. [DOI: 10.3390/agriengineering2020024]
Abstract
In recent years, Unmanned Aerial Systems (UAS) have emerged as an innovative technology to provide spatio-temporal information about weed species in crop fields. Such information is a critical input for any site-specific weed management program. A multi-rotor UAS (Phantom 4) equipped with an RGB sensor was used to collect imagery in three bands (Red, Green, and Blue; 0.8 cm/pixel resolution) with the objectives of (a) mapping weeds in cotton and (b) determining the relationship between image-based weed coverage and ground-based weed densities. For weed mapping, three weed density levels (high, medium, and low) were established for a mix of different weed species, with three replications. To determine weed densities through ground truthing, five quadrats (1 m × 1 m) were laid out in each plot. The aerial imagery was preprocessed and subjected to the Hough transform to delineate cotton rows. Following the separation of inter-row vegetation from crop rows, a multi-level classification coupled with machine learning algorithms was used to distinguish intra-row weeds from cotton. Overall accuracy levels of 89.16%, 85.83%, and 83.33% and kappa values of 0.84, 0.79, and 0.75 were achieved for detecting weed occurrence in high, medium, and low density plots, respectively. Further, ground-truthed overall weed density values were fairly well correlated (r2 = 0.80) with image-based weed coverage assessments. Among the specific weed species evaluated, Palmer amaranth (Amaranthus palmeri S. Watson) showed the highest correlation (r2 = 0.91), followed by red sprangletop (Leptochloa mucronata Michx) (r2 = 0.88). The results highlight the utility of UAS-borne RGB imagery for weed mapping and density estimation in cotton for precision weed management.
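The r2 values reported here are coefficients of determination for a linear fit of image-based coverage against ground-truthed density; a minimal way to compute one (illustrative, not the authors' analysis code):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```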
30
García-Berná JA, Ouhbi S, Benmouna B, García-Mateos G, Fernández-Alemán JL, Molina-Martínez JM. Systematic Mapping Study on Remote Sensing in Agriculture. Applied Sciences 2020; 10:3456. [DOI: 10.3390/app10103456]
Abstract
The area of remote sensing techniques in agriculture has reached a significant degree of development and maturity, with numerous journals, conferences, and organizations specialized in it, and many review papers are available in the literature. The present work describes a literature review that adopts the form of a systematic mapping study, following a formal methodology. Eight mapping questions were defined, analyzing the main types of research, techniques, platforms, topics, and spectral information. A predefined search string was applied in the Scopus database, obtaining 1590 candidate papers. Afterwards, the 106 most relevant papers were selected, considering those with more than six citations per year. These are analyzed in more detail, answering the mapping questions for each paper. In this way, the current trends and new opportunities are discovered. As a result, increasing interest in the area has been observed since 2000; the most frequently addressed problems are those related to parameter estimation, growth vigor, and water usage, using classification techniques that are mostly applied to RGB and hyperspectral images captured from drones and satellites. A general recommendation that emerges from this study is to build on existing resources, such as agricultural image datasets, public satellite imagery, and deep learning toolkits.
31
Yang M, Tseng H, Hsu Y, Tsai HP. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sensing 2020; 12:633. [DOI: 10.3390/rs12040633]
Abstract
A rapid and precise large-scale agricultural disaster survey is a basis for agricultural disaster relief and insurance but is labor-intensive and time-consuming. This study applies Unmanned Aerial Vehicle (UAV) images with deep-learning image processing to estimate rice lodging in paddies over a large area. The study establishes an image semantic segmentation model employing two neural network architectures, FCN-AlexNet and SegNet, whose performance is explored with respect to various object sizes and computational efficiency. High-resolution visible images of rice paddies captured by commercial UAVs are used to calculate three vegetation indices to improve the applicability of visible imagery. The proposed model was trained and tested on a set of UAV images from 2017 and validated on a set of UAV images from 2019. For the identification of rice lodging in the 2017 UAV images, the F1-score reaches 0.80 and 0.79 for FCN-AlexNet and SegNet, respectively. The F1-score of FCN-AlexNet using the RGB + ExGR combination also reaches 0.78 in the 2019 validation images. The proposed model adopting semantic segmentation networks is shown to be more efficient, approximately 10 to 15 times faster, with a lower misinterpretation rate than the maximum likelihood method.
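The ExGR index used in the best-performing combination is conventionally defined as excess green minus excess red; a sketch using the standard formulations (the authors' exact variant may differ):

```python
import numpy as np

def exgr(rgb):
    """ExGR = ExG - ExR from a float RGB image in [0, 1], computed on
    normalized chromatic coordinates."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-8
    rn, gn, bn = r / total, g / total, b / total
    exg = 2.0 * gn - rn - bn      # excess green
    exr = 1.4 * rn - gn           # excess red
    return exg - exr
```

In the paper's workflow, index maps like this are stacked with the RGB bands as extra input channels for the segmentation network.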
32
Abstract
Emerging technologies such as the Internet of Things (IoT) offer significant potential in Smart Farming and Precision Agriculture applications, enabling the acquisition of real-time environmental data. IoT devices such as Unmanned Aerial Vehicles (UAVs) can be exploited in a variety of applications related to crop management, by capturing high spatial and temporal resolution images. These technologies are expected to revolutionize agriculture, enabling decision-making in days instead of weeks, promising significant reduction in cost and increase in yield. Such decisions enable the effective application of farm inputs, supporting the four pillars of precision agriculture: apply the right practice, at the right place, at the right time and with the right quantity. However, the actual proliferation and exploitation of UAVs in Smart Farming has not been as robust as expected, mainly due to the challenges confronted when selecting and deploying the relevant technologies, including the data acquisition and image processing methods. The main problem is that there is still no standardized workflow for the use of UAVs in such applications, as it is a relatively new area. In this article, we review the most recent applications of UAVs for Precision Agriculture. We discuss the most common applications and the types of UAVs exploited, and then we focus on the data acquisition methods and technologies, noting the benefits and drawbacks of each one. We also point out the most popular processing methods for aerial imagery and discuss the outcomes of each method and its potential applications in farming operations.
33
Neupane B, Horanont T, Hung ND. Deep learning based banana plant detection and counting using high-resolution red-green-blue (RGB) images collected from unmanned aerial vehicle (UAV). PLoS One 2019; 14:e0223906. [PMID: 31622450] [PMCID: PMC6797093] [DOI: 10.1371/journal.pone.0223906]
Abstract
The production of banana, one of the most highly consumed fruits, is strongly affected by the loss of a certain number of banana plants in an early phase of vegetation. This loss limits the ability of farmers to forecast and estimate banana production. In this paper, we propose a deep learning (DL) based method to precisely detect and count banana plants on a farm, excluding other plants, using high-resolution RGB aerial images collected from an Unmanned Aerial Vehicle (UAV). An attempt to detect the plants on the plain RGB images resulted in less than 78.8% recall for our sample images of a commercial banana farm in Thailand. To improve this result, we use three image processing methods (Linear Contrast Stretch, Synthetic Color Transform and Triangular Greenness Index) to enhance the vegetative properties of the orthomosaic, generating multiple variants of the orthomosaic. We then separately train a parameter-optimized Convolutional Neural Network (CNN) on manually interpreted banana plant samples seen in each image variant, to produce multiple detection results for our region of interest. On three of our datasets, collected over the same farm at altitudes of 40, 50 and 60 meters, 96.4%, 85.1% and 75.8% of plants were correctly detected. Results obtained from combinations of multiple altitude variants are also discussed, in an attempt to find a better altitude combination for UAV data collection for banana plant detection. The results showed that merging the detection results of the 40 and 50 meter datasets could recover the plants missed by each, increasing recall up to 99%.
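Of the three enhancement methods named above, the linear contrast stretch is the simplest to sketch; percentile clipping is a common choice, and the cutoffs here are illustrative since the paper's exact parameters are not given:

```python
import numpy as np

def linear_contrast_stretch(img, low_pct=2.0, high_pct=98.0):
    """Map the chosen intensity percentiles onto the full [0, 255] range,
    clipping values that fall outside them."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / (hi - lo + 1e-8)
    return np.clip(out, 0.0, 1.0) * 255.0
```

Applied per band, this spreads a narrow intensity histogram across the full display range, which can make vegetation stand out before CNN training.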
Affiliation(s)
- Bipul Neupane: School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Pathum Thani, Thailand
- Teerayut Horanont: School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Pathum Thani, Thailand
- Nguyen Duy Hung: School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Pathum Thani, Thailand
34
Guo S, Li J, Yao W, Zhan Y, Li Y, Shi Y. Distribution characteristics on droplet deposition of wind field vortex formed by multi-rotor UAV. PLoS One 2019; 14:e0220024. [PMID: 31329644] [PMCID: PMC6645519] [DOI: 10.1371/journal.pone.0220024]
Abstract
When an unmanned aerial vehicle (UAV) is used for aerial spraying, the downwash airflow generated by the UAV rotors interacts with the crop canopy and forms a conical vortex shape in the crop plants. The size of the vortex directly affects the outcome of the spraying operation. Six one-way spraying passes were performed by the UAV in a rice field at different, randomly chosen flight altitudes and velocities within the optimal operational range, to form different vortex patterns. The spraying reagent was clear water, which was collected on water sensitive paper (WSP); the WSP was then analyzed to study the droplet deposition effects in different vortex states. The results showed that the formation of the vortex significantly influenced droplet deposition. Specifically, the droplet deposition amount in the obvious-vortex (OV) state was about 1.5 times that in the small-scale vortex (SV) state, and 7 times that in the non-vortex (NV) state. In the OV state, the droplets mainly deposited directly below and on both sides of the route. The deposition amount, coverage rate and droplet size increased from the top to the bottom of the crops, with the deposition amount, coverage rate, and volume median diameter (VMD) ranging over 0.204–0.470 μL/cm2, 3.31%–7.41%, and 306–367 μm, respectively. In the SV state, droplets mainly deposited in the vortex area directly below the route, and the deposition amount in the downwind direction was larger than that in the upwind direction. The maxima of deposition amount, coverage rate and droplet size were found in the middle layer of the crops, ranging over 0.177–0.334 μL/cm2, 2.71%–5.30%, and 295–370 μm, respectively. In the NV state, the droplets mainly drifted, and the average droplet deposition amount in the downwind non-effective region was 29.4 times that in the upwind non-effective region and 8.7 times that in the effective vortex region directly below the route. The maxima of deposition amount, coverage rate and droplet size appeared in the upper layer of the crop, ranging over 0.006–0.132 μL/cm2, 0.17%–1.82%, and 120–309 μm, respectively, and almost no droplets deposited in the middle and lower parts of the crop. The coefficient of variation (CV) of the droplet deposition amount was less than 40% in the obvious-vortex and small-scale vortex states, while the worst penetration appeared in the non-vortex state, amounting to 65.97%. This work offers a basis for improving the spraying performance of UAVs.
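The two summary statistics used throughout this abstract, volume median diameter (VMD) and coefficient of variation (CV), can be computed as follows; a sketch, taking droplet volume as proportional to diameter cubed:

```python
import numpy as np

def vmd(diameters):
    """Volume median diameter: the diameter at which cumulative droplet
    volume reaches 50% of the total."""
    d = np.sort(np.asarray(diameters, dtype=float))
    cumvol = np.cumsum(d ** 3) / np.sum(d ** 3)   # volume proportional to d cubed
    return d[np.searchsorted(cumvol, 0.5)]

def cv_percent(deposits):
    """Coefficient of variation (%) of deposition amounts across samplers;
    higher CV means less uniform penetration through the canopy."""
    dep = np.asarray(deposits, dtype=float)
    return dep.std() / dep.mean() * 100.0
```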
Affiliation(s)
- Shuang Guo: College of Engineering, South China Agricultural University, Guangzhou, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Jiyu Li (corresponding author): College of Engineering, South China Agricultural University, Guangzhou, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Weixiang Yao: College of Engineering, South China Agricultural University, Guangzhou, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Yilong Zhan: College of Engineering, South China Agricultural University, Guangzhou, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Yifan Li: College of Engineering, South China Agricultural University, Guangzhou, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Yeyin Shi: Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, Nebraska, United States of America
35
Chawade A, van Ham J, Blomquist H, Bagge O, Alexandersson E, Ortiz R. High-Throughput Field-Phenotyping Tools for Plant Breeding and Precision Agriculture. Agronomy 2019; 9:258. [DOI: 10.3390/agronomy9050258]
Abstract
High-throughput field phenotyping has garnered major attention in recent years, leading to the development of several new protocols for recording various plant traits of interest. Phenotyping of plants for breeding and for precision agriculture has different requirements, due to the different sizes of the plots and fields, differing purposes, and the urgency of the action required after phenotyping. While phenotyping in plant breeding is done on several thousand small plots, mainly to evaluate them for various traits, phenotyping in plant cultivation is done in large fields to detect the occurrence of plant stresses and weeds at an early stage. The aim of this review is to highlight how various high-throughput phenotyping methods are used for plant breeding and farming, and the key differences in the applications of such methods. Thus, various techniques for plant phenotyping are presented together with applications of these techniques for breeding and cultivation. Several examples from the literature using these techniques are summarized, and the key technical aspects are highlighted.
36
Ma X, Deng X, Qi L, Jiang Y, Li H, Wang Y, Xing X. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS One 2019; 14:e0215676. [PMID: 30998770] [PMCID: PMC6472823] [DOI: 10.1371/journal.pone.0215676]
Abstract
To reduce the cost of production and the environmental pollution due to the overapplication of herbicide in paddy fields, the location information of rice seedlings and weeds must be detected for site-specific weed management (SSWM). With the development of deep learning, a semantic segmentation method using SegNet, which is based on a fully convolutional network (FCN), was proposed. In this paper, RGB color images of seedling rice were captured in a paddy field, and ground truth (GT) images were obtained by manually labeling the pixels in the RGB images with three separate categories, namely, rice seedlings, background, and weeds. Class weight coefficients were calculated to address the imbalance in the number of pixels per category. The GT and RGB images were used for training and testing: 80% of the samples were randomly selected as the training dataset and 20% were used as the test dataset. The proposed method was compared with classical semantic segmentation models, namely, FCN and U-Net. The average accuracy rate of the SegNet method was 92.7%, whereas the average accuracy rates of the FCN and U-Net methods were 89.5% and 70.8%, respectively. The proposed SegNet method achieved higher classification accuracy and could effectively classify the pixels of rice seedlings, background, and weeds in paddy field images and acquire the positions of their regions.
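One common way to compute such class weight coefficients is median-frequency balancing, the scheme popularized with SegNet; the paper does not state its exact formula here, so treat this as an assumed variant:

```python
import numpy as np

def median_frequency_weights(labels, n_classes=3):
    """Loss weight per class: median class frequency divided by the
    class's own frequency, so rare classes (e.g. weeds) weigh more."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    return np.median(freq) / (freq + 1e-8)
```

The weights multiply each pixel's cross-entropy term during training, preventing the dominant background class from swamping the loss.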
Affiliation(s)
- Xu Ma (corresponding author): College of Engineering, South China Agricultural University, Guangzhou, China
- Xiangwu Deng: College of Engineering, South China Agricultural University, Guangzhou, China
- Long Qi: College of Engineering, South China Agricultural University, Guangzhou, China
- Yu Jiang: College of Engineering, South China Agricultural University, Guangzhou, China
- Hongwei Li: College of Engineering, South China Agricultural University, Guangzhou, China
- Yuwei Wang: College of Engineering, South China Agricultural University, Guangzhou, China
- Xupo Xing: College of Engineering, South China Agricultural University, Guangzhou, China
37
Zhang C, Han Y, Li F, Gao S, Song D, Zhao H, Fan K, Zhang Y. A New CNN-Bayesian Model for Extracting Improved Winter Wheat Spatial Distribution from GF-2 imagery. Remote Sensing 2019; 11:619. [DOI: 10.3390/rs11060619]
Abstract
When the spatial distribution of winter wheat is extracted from high-resolution remote sensing imagery using convolutional neural networks (CNN), field edge results are usually rough, lowering overall accuracy. This study proposed a new per-pixel classification model using CNN and Bayesian models (the CNN-Bayesian model) for improved extraction accuracy. In this model, a feature extractor generates a feature vector for each pixel, an encoder transforms the feature vector of each pixel into a category-code vector, and a two-level classifier uses the difference between elements of category-probability vectors as the confidence value to perform per-pixel classifications. The first level determines the category of a pixel with high confidence, and the second level is an improved Bayesian model used to determine the category of low-confidence pixels. The CNN-Bayesian model was trained and tested on Gaofen 2 satellite images. Our approach improved overall accuracy compared to existing models: the overall accuracies of SegNet, DeepLab, VGG-Ex, and CNN-Bayesian were 0.791, 0.852, 0.892, and 0.946, respectively. Thus, this approach can produce superior results when winter wheat spatial distribution is extracted from satellite imagery.
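The two-level routing idea, with confidence defined as the gap between the two largest category probabilities, can be sketched as follows; the threshold is illustrative, not a value from the paper:

```python
import numpy as np

def route_by_confidence(probs, threshold=0.4):
    """Label each pixel by argmax; flag it as high-confidence when the gap
    between its two largest class probabilities exceeds the threshold.
    Low-confidence pixels would be passed to the second-level classifier."""
    top2 = np.sort(probs, axis=-1)[..., -2:]
    confidence = top2[..., 1] - top2[..., 0]
    return np.argmax(probs, axis=-1), confidence >= threshold
```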
38
Abstract
Damping of Bragg scattering from the ocean surface is the basic principle underlying synthetic aperture radar (SAR) oil slick detection: oil slicks produce dark spots on SAR images. Dark spot detection is the first step in oil spill detection and affects its accuracy. However, some natural phenomena (such as waves, ocean currents, and low wind belts), as well as human factors, may change the backscatter intensity of the sea surface, resulting in uneven intensity, high noise, and blurred boundaries of oil slicks or lookalikes. In this paper, SegNet is used as a semantic segmentation model to detect dark spots in oil spill areas. The proposed method is applied to a data set of 4200 derived from five original SAR images of an oil spill. The effectiveness of the method is demonstrated through comparison with fully convolutional networks (FCN), one of the first semantic segmentation models, and some other segmentation methods. The proposed method can not only accurately identify the dark spots in SAR images, but also shows higher robustness under high noise and fuzzy boundary conditions.
39
Huang H, Deng J, Lan Y, Yang A, Deng X, Wen S, Zhang H, Zhang Y. Accurate Weed Mapping and Prescription Map Generation Based on Fully Convolutional Networks Using UAV Imagery. Sensors (Basel) 2018; 18:3299. [PMID: 30275366] [PMCID: PMC6209949] [DOI: 10.3390/s18103299]
Abstract
Chemical control is necessary to control weed infestation and ensure rice yield. However, excessive use of herbicides has caused serious agronomic and environmental problems. Site-specific weed management (SSWM) recommends an appropriate dose of herbicides according to the weed coverage, which may reduce herbicide use while enhancing its chemical effects. In the context of SSWM, a weed cover map and a prescription map must be generated in order to carry out accurate spraying. In this paper, high-resolution unmanned aerial vehicle (UAV) imagery was captured over a rice field. Different workflows were evaluated to generate the weed cover map for the whole field. A fully convolutional network (FCN) was applied for pixel-level classification. Theoretical analysis and practical evaluation were carried out to seek an architecture improvement and performance boost. A chessboard segmentation process was used to build the grid framework of the prescription map. The experimental results showed that the overall accuracy and mean intersection over union (mean IU) for weed mapping using FCN-4s were 0.9196 and 0.8473, and the total time (including data collection and data processing) required to generate the weed cover map for the entire field (50 × 60 m) was less than half an hour. Different weed thresholds (0.00–0.25, with an interval of 0.05) were used for prescription map generation. High accuracies (above 0.94) were observed for all threshold values, and the corresponding herbicide savings ranged from 58.3% to 70.8%. All of the experimental results demonstrated that the method used in this work has the potential to produce accurate weed cover maps and prescription maps in SSWM applications.
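The chessboard-segmentation step, which turns a pixel-level weed mask into a grid prescription map, can be sketched as follows; the cell size is illustrative, and the threshold plays the role of the 0.00–0.25 weed thresholds evaluated in the paper:

```python
import numpy as np

def prescription_map(weed_mask, cell=4, threshold=0.10):
    """Split a binary weed-cover map into cell x cell squares and mark a
    square for spraying when its weed fraction exceeds the threshold."""
    h, w = weed_mask.shape
    gh, gw = h // cell, w // cell
    blocks = weed_mask[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell)
    return blocks.mean(axis=(1, 3)) > threshold    # per-square weed fraction
```

Raising the threshold sprays fewer squares, which is how the reported herbicide savings of 58.3% to 70.8% arise.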
Affiliation(s)
- Huasheng Huang: College of Engineering, South China Agricultural University, Wushan Road, Guangzhou 510642, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Wushan Road, Guangzhou 510624, China
- Jizhong Deng: College of Engineering, South China Agricultural University, Wushan Road, Guangzhou 510642, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Wushan Road, Guangzhou 510624, China
- Yubin Lan: College of Engineering, South China Agricultural University, Wushan Road, Guangzhou 510642, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Wushan Road, Guangzhou 510624, China
- Aqing Yang: College of Electronic Engineering, South China Agricultural University, Wushan Road, Guangzhou 510624, China
- Xiaoling Deng: National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Wushan Road, Guangzhou 510624, China; College of Electronic Engineering, South China Agricultural University, Wushan Road, Guangzhou 510624, China
- Sheng Wen: National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Wushan Road, Guangzhou 510624, China; Engineering Fundamental Teaching and Training Center, South China Agricultural University, Wushan Road, Guangzhou 510624, China
- Huihui Zhang: USDA, Agricultural Research Service, Water Management Research Unit, 2150 Centre Ave., Building D, Suite 320, Fort Collins, CO 80526-8119, USA
- Yali Zhang: College of Engineering, South China Agricultural University, Wushan Road, Guangzhou 510642, China; National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Wushan Road, Guangzhou 510624, China
40
Pflanz M, Nordmeyer H, Schirrmann M. Weed Mapping with UAS Imagery and a Bag of Visual Words Based Image Classifier. Remote Sensing 2018; 10:1530. [DOI: 10.3390/rs10101530]
Abstract
Weed detection from aerial images is a major challenge in generating field maps for site-specific plant protection applications. The requirements might be met by low-altitude flights of unmanned aerial vehicles (UAVs), which provide ground resolutions adequate for accurately differentiating even single weeds. This study proposed and tested an image classifier based on a Bag of Visual Words (BoVW) framework for mapping weed species, using a small unmanned aircraft system (UAS) with a commercial camera on board at low flying altitudes. The image classifier was trained with support vector machines after building a visual dictionary of local features from many collected UAS images. Window-based processing of the models was used for mapping the weed occurrences in the UAS imagery. The UAS flight campaign was carried out over a weed-infested wheat field, and images were acquired between 1 and 6 m flight altitude. From the UAS images, 25,452 weed plants were annotated at the species level, along with wheat and soil as background classes, for training and validation of the models. The results showed that the BoVW model allowed the discrimination of single plants with high accuracy for Matricaria recutita L. (88.60%), Papaver rhoeas L. (89.08%), Viola arvensis M. (87.93%), and winter wheat (94.09%) within the generated maps. Regarding site-specific weed control, the classified UAS images would enable the selection of the right herbicide based on the distribution of the predicted weed species.
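The core of a BoVW pipeline, assigning local descriptors to their nearest visual word and building the word histogram that is fed to the SVM, can be sketched as follows (a generic illustration, not the authors' implementation; in practice the vocabulary comes from k-means clustering of many training descriptors):

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Normalized visual-word frequency histogram for one image window."""
    # distance from every descriptor to every visual word (cluster center)
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)                  # nearest-word assignment
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```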
|
41
|
Huang H, Lan Y, Deng J, Yang A, Deng X, Zhang L, Wen S. A Semantic Labeling Approach for Accurate Weed Mapping of High Resolution UAV Imagery. Sensors (Basel) 2018; 18:E2113. [PMID: 29966392 DOI: 10.3390/s18072113] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/13/2018] [Revised: 06/13/2018] [Accepted: 06/27/2018] [Indexed: 11/17/2022]
Abstract
Weed control is necessary in rice cultivation, but the excessive use of herbicide treatments has led to serious agronomic and environmental problems. Suitable site-specific weed management (SSWM) addresses this problem while maintaining rice production quality and quantity. In the context of SSWM, an accurate weed distribution map is needed to provide decision-support information for herbicide treatment. UAV remote sensing offers an efficient and effective platform to monitor weeds thanks to its high spatial resolution. In this work, UAV imagery was captured in a rice field located in South China, and a semantic labeling approach was adopted to generate weed distribution maps from the imagery. An ImageNet-pretrained CNN with a residual framework was adapted into a fully convolutional form and transferred to our dataset by fine-tuning. Atrous convolution was applied to extend the field of view of the convolutional filters, the performance of multi-scale processing was evaluated, and a fully connected conditional random field (CRF) was applied after the CNN to further refine spatial details. Finally, our approach was compared with a pixel-based SVM and the classical FCN-8s. Experimental results demonstrated that our approach achieved the best accuracy, significantly outperforming the other methods, especially for the detection of small weed patches. The mean intersection over union (mean IU), overall accuracy, and Kappa coefficient of our method were 0.7751, 0.9445, and 0.9128, respectively. These experiments show that our approach has high potential for accurate weed mapping of UAV imagery.
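The atrous (dilated) convolution mentioned above enlarges a filter's receptive field without adding parameters, by sampling the input at a stride equal to the dilation rate. A minimal 1-D numpy illustration of this effect follows; it is a didactic sketch, not the paper's CNN architecture.

```python
import numpy as np

# Apply a 1-D kernel with a given dilation rate: kernel taps are spaced
# `rate` samples apart, so the effective receptive field grows with `rate`
# while the parameter count (len(kernel)) stays fixed.
def atrous_conv1d(x, kernel, rate):
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective receptive field in samples
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out, span

x = np.arange(10, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])  # 3 parameters in both cases

y1, span1 = atrous_conv1d(x, kernel, rate=1)  # ordinary conv: span 3
y2, span2 = atrous_conv1d(x, kernel, rate=2)  # dilated conv: span 5
```

Stacking such layers with increasing rates lets a fully convolutional network aggregate wider context, which is what helps the approach above resolve small weed patches without losing spatial resolution to pooling.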
|