1
Chen R, Lu H, Wang Y, Tian Q, Zhou C, Wang A, Feng Q, Gong S, Zhao Q, Han B. High-throughput UAV-based rice panicle detection and genetic mapping of heading-date-related traits. Front Plant Sci 2024; 15:1327507. [PMID: 38562563 PMCID: PMC10984267 DOI: 10.3389/fpls.2024.1327507] [Received: 10/25/2023] [Accepted: 02/19/2024] [Indexed: 04/04/2024]
Abstract
Introduction: Rice (Oryza sativa) is a vital staple crop that feeds over half the world's population, and optimizing rice breeding to increase grain yield is critical for global food security. Heading date (flowering time) is a key factor determining yield potential, but traditional manual phenotyping of heading-date-related traits is time-consuming and labor-intensive. Method: Here we show that aerial imagery from unmanned aerial vehicles (UAVs), combined with deep-learning-based panicle detection, enables high-throughput phenotyping of heading-date-related traits. We systematically evaluated state-of-the-art object detectors for rice panicle counting and identified YOLOv8-X as the optimal detector. Results: Applying YOLOv8-X to UAV time-series images of 294 rice recombinant inbred lines (RILs) allowed accurate quantification of six heading-date-related traits. Using these phenotypes, we identified quantitative trait loci (QTLs) associated with heading date, including both verified and novel loci. Discussion: Our optimized UAV phenotyping and computer vision pipeline may facilitate scalable molecular identification of heading-date-related genes and guide improvements in rice yield and adaptation.
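The heading-date-related traits above are derived from per-date panicle counts over the RIL plots. The abstract does not spell out the trait definitions, but a common operational definition of heading date is the first day the detected panicle count reaches a fixed fraction of the plot's maximum. A minimal sketch of that idea (the 50% threshold, the observation days, and the linear interpolation are illustrative assumptions, not taken from the paper):

```python
def heading_date(days, counts, frac=0.5):
    """Estimate heading date as the first day the detected panicle
    count reaches `frac` of the plot's maximum count, linearly
    interpolating between the two flanking observation dates."""
    thresh = frac * max(counts)
    for (d0, c0), (d1, c1) in zip(zip(days, counts), zip(days[1:], counts[1:])):
        if c0 < thresh <= c1:
            # linear interpolation between flanking UAV flight dates
            return d0 + (thresh - c0) * (d1 - d0) / (c1 - c0)
    return days[0] if counts[0] >= thresh else None

# hypothetical UAV flight days (days after sowing) and panicle counts
days = [60, 63, 66, 69, 72]
counts = [0, 4, 18, 30, 32]
print(heading_date(days, counts))  # crosses the 16-panicle threshold near day 65.6
```

Related traits (e.g., duration from first to last heading) follow from the same count curve with different thresholds.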
Affiliation(s)
- Rulei Chen
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- University of the Chinese Academy of Sciences, Beijing, China
- Hengyun Lu
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Yongchun Wang
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Qilin Tian
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Congcong Zhou
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Ahong Wang
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Qi Feng
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Songfu Gong
- Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Qiang Zhao
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
- Bin Han
- National Center for Gene Research, Key Laboratory of Plant Design/National Key Laboratory of Plant Molecular Genetics, Center for Excellence in Molecular Plant Sciences, Chinese Academy of Sciences, Shanghai, China
2
Qiu S, Li Y, Gao J, Li X, Yuan X, Liu Z, Cui Q, Wu C. Research and Implementation of Millet Ear Detection Method Based on Lightweight YOLOv5. Sensors (Basel) 2023; 23:9189. [PMID: 38005575 PMCID: PMC10675272 DOI: 10.3390/s23229189] [Received: 10/29/2023] [Revised: 11/08/2023] [Accepted: 11/12/2023] [Indexed: 11/26/2023]
Abstract
Because millet ears are dense, small, and heavily occluded in complex field scenes, target detection models suited to this environment demand high computing power, making real-time detection of millet ears difficult to deploy on mobile devices. This study proposes a lightweight real-time detection method for millet ears based on YOLOv5. First, the YOLOv5s backbone feature extraction network is replaced with the lightweight MobileNetV3 model to reduce model size. Then, a micro-scale detection layer is added to the multi-feature-fusion detection structure to better combine high-level and low-level feature maps. In post-processing, the Merge-NMS technique mitigates target information loss, reducing the influence of boundary blur and improving the detection accuracy of small and occluded targets. Finally, models reconstructed with the different improvements were trained and tested on a self-built millet ear dataset. The improved model reaches an AP of 97.78% and an F1-score of 94.20% with a model size of only 7.56 MB (53.28% of the standard YOLOv5s), and detects faster. Compared with other classical target detection models, it shows strong robustness and generalization ability, and the lightweight model performs well on images and videos on the Jetson Nano. The results show that the improved lightweight YOLOv5 millet detection model can overcome complex environments and significantly improve detection of millet under dense distribution and occlusion. The model was deployed on the Jetson Nano, and a millet detection system was implemented with the PyQt5 framework; its detection accuracy and speed meet the practical needs of intelligent agricultural machinery and show good application prospects.
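Merge-NMS, used in the post-processing step above, keeps information from suppressed boxes by folding them into the winning box rather than discarding them. A rough, NumPy-free sketch of that idea using a score-weighted average of overlapping boxes (the weighting scheme is an assumption; the paper's exact formulation may differ):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS that merges each cluster of overlapping boxes into
    its top-scoring keeper by score-weighted coordinate averaging,
    instead of discarding the overlapped boxes outright."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    used = [False] * len(boxes)
    kept = []
    for i in order:
        if used[i]:
            continue
        group = [i]
        used[i] = True
        for j in order:
            if not used[j] and iou(boxes[i], boxes[j]) >= iou_thr:
                group.append(j)
                used[j] = True
        w = sum(scores[k] for k in group)
        merged = tuple(sum(scores[k] * boxes[k][c] for k in group) / w
                       for c in range(4))
        kept.append((merged, scores[i]))
    return kept
```

On blurred ear boundaries this tends to produce a box closer to the true extent than the single highest-scoring proposal.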
Affiliation(s)
- Shujin Qiu
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.L.); (J.G.); (X.L.); (Z.L.); (Q.C.); (C.W.)
- Yun Li
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.L.); (J.G.); (X.L.); (Z.L.); (Q.C.); (C.W.)
- Jian Gao
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.L.); (J.G.); (X.L.); (Z.L.); (Q.C.); (C.W.)
- Xiaobin Li
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.L.); (J.G.); (X.L.); (Z.L.); (Q.C.); (C.W.)
- Xiangyang Yuan
- College of Agriculture, Shanxi Agricultural University, Jinzhong 030801, China;
- Zhenyu Liu
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.L.); (J.G.); (X.L.); (Z.L.); (Q.C.); (C.W.)
- Qingliang Cui
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.L.); (J.G.); (X.L.); (Z.L.); (Q.C.); (C.W.)
- Cuiqing Wu
- College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.L.); (J.G.); (X.L.); (Z.L.); (Q.C.); (C.W.)
3
Teng Z, Chen J, Wang J, Wu S, Chen R, Lin Y, Shen L, Jackson R, Zhou J, Yang C. Panicle-Cloud: An Open and AI-Powered Cloud Computing Platform for Quantifying Rice Panicles from Drone-Collected Imagery to Enable the Classification of Yield Production in Rice. Plant Phenomics 2023; 5:0105. [PMID: 37850120 PMCID: PMC10578299 DOI: 10.34133/plantphenomics.0105] [Received: 04/24/2023] [Accepted: 09/19/2023] [Indexed: 10/19/2023]
Abstract
Rice (Oryza sativa) is an essential staple food in many rice-consuming nations, making it important to improve its yield under global climate change. To evaluate the yield performance of different rice varieties, key yield-related traits such as panicle number per unit area (PNpM2) are important indicators that have attracted much attention from plant research groups. Nevertheless, large-scale screening of rice panicles to quantify the PNpM2 trait remains challenging due to complex field conditions, the large variation among rice cultivars, and their panicle morphological features. Here, we present Panicle-Cloud, an open, artificial intelligence (AI)-powered cloud computing platform for quantifying rice panicles from drone-collected imagery. To facilitate the development of AI-powered detection models, we first established an open, diverse rice panicle detection dataset annotated by a group of rice specialists; we then integrated several state-of-the-art deep learning models (including a preferred model called Panicle-AI) into the Panicle-Cloud platform, so that nonexpert users can select a pretrained model to detect rice panicles in their own aerial images. We trialed the AI models with images collected at different altitudes and growth stages, identifying the right timing and preferred image resolutions for phenotyping rice panicles in the field. We then applied the platform in a two-season rice breeding trial to validate its biological relevance and classified yield production using the platform-derived PNpM2 trait from hundreds of rice varieties. Correlation analysis between the computational analysis and manual scoring showed that the platform quantifies the PNpM2 trait reliably, and yield production was classified with high accuracy on that basis. Hence, we trust that our work demonstrates a valuable advance in phenotyping the PNpM2 trait in rice, providing a useful toolkit for rice breeders to screen and select desired varieties under field conditions.
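Converting a per-image panicle count into the PNpM2 trait only requires the imaged ground area, which for nadir drone imagery follows from the ground sampling distance (GSD). A sketch using the standard photogrammetric GSD formula (the sensor and flight parameters below are illustrative, not the platform's):

```python
def gsd_m(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground sampling distance in metres per pixel for a nadir image,
    from the standard pinhole-camera relation."""
    return altitude_m * sensor_width_mm / (focal_mm * image_width_px)

def panicles_per_m2(n_panicles, image_w_px, image_h_px, gsd):
    """PNpM2: detected panicle count divided by imaged ground area."""
    area_m2 = (image_w_px * gsd) * (image_h_px * gsd)
    return n_panicles / area_m2

# hypothetical flight: 10 m altitude, 8.8 mm focal length,
# 13.2 mm sensor width, 4000 px wide images
g = gsd_m(10.0, 8.8, 13.2, 4000)
density = panicles_per_m2(300, 4000, 3000, g)
```

The paper's finding that altitude matters follows directly: a larger GSD means fewer pixels per panicle, so small panicles drop below the detector's resolving power.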
Affiliation(s)
- Zixuan Teng
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- Key Laboratory of Smart Agriculture and Forestry (Fujian Agriculture and Forestry University), Fujian Province University, Fuzhou 350002, China
- Jiawei Chen
- State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing 210095, China
- Jian Wang
- Ningxia Academy of Agriculture and Forestry Sciences, Yinchuan 750002, China
- Shuixiu Wu
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- Riqing Chen
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- Yaohai Lin
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- Liyan Shen
- State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing 210095, China
- Robert Jackson
- Cambridge Crop Research, National Institute of Agricultural Botany (NIAB), Cambridge CB3 0LE, UK
- Ji Zhou
- State Key Laboratory of Crop Genetics & Germplasm Enhancement, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Nanjing 210095, China
- Cambridge Crop Research, National Institute of Agricultural Botany (NIAB), Cambridge CB3 0LE, UK
- Changcai Yang
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- Center for Agroforestry Mega Data Science, School of Future Technology, Fujian Agriculture and Forestry University, Fuzhou 350002, China
4
Zhou Q, Guo W, Chen N, Wang Z, Li G, Ding Y, Ninomiya S, Mu Y. Analyzing Nitrogen Effects on Rice Panicle Development by Panicle Detection and Time-Series Tracking. Plant Phenomics 2023; 5:0048. [PMID: 37363145 PMCID: PMC10289797 DOI: 10.34133/plantphenomics.0048] [Received: 12/30/2022] [Accepted: 04/16/2023] [Indexed: 06/28/2023]
Abstract
Detailed observation of phenotypic changes in rice panicles substantially helps us understand yield formation. Recent studies of rice panicle phenotyping during the heading-flowering stage still lack comprehensive analysis, especially of panicle development under different nitrogen treatments. In this work, we propose a pipeline that automatically acquires detailed panicle traits from time-series images using the YOLOv5, ResNet50, and DeepSORT models. Combined with field observation data, the proposed method was tested for its ability to identify subtle differences in panicle development under different nitrogen treatments. Panicle counting throughout the heading-flowering stage achieved high accuracy (R2 = 0.96 and RMSE = 1.73), and heading date was estimated with an absolute error of 0.25 days. In addition, by tracking identical panicles across the time-series images, we analyzed detailed flowering phenotypic changes of single panicles, such as flowering duration and individual panicle flowering time. At the population level, with increasing nitrogen application, panicle number increased, heading date changed little but its duration was slightly extended, and the cumulative number of flowering panicles increased; flowering initiation arrived earlier while the ending date was later, so the flowering duration became longer. For single panicles, identical-panicle tracking revealed that higher nitrogen application led to an earlier flowering initiation date, significantly more flowering days, and a significantly longer total duration from the beginning of vigorous flowering to the end (total DBE); however, the beginning time of vigorous flowering showed no significant differences, and daily DBE decreased slightly.
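Identical-panicle tracking rests on associating detections in each frame with tracks from the previous frame; DeepSORT does this with appearance features plus motion prediction. A stripped-down, IoU-only greedy association conveys the core step (a simplification for illustration, not the paper's full tracker):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_thr=0.3):
    """Greedy IoU matching: returns {detection_index: track_id},
    giving each unmatched detection a fresh track id."""
    pairs = sorted(((iou(t_box, d), tid, di)
                    for tid, t_box in tracks.items()
                    for di, d in enumerate(detections)),
                   reverse=True)
    next_id = max(tracks, default=-1) + 1
    matched_tracks, assigned = set(), {}
    for score, tid, di in pairs:
        if score < iou_thr:
            break  # pairs are sorted, so nothing later can match
        if tid in matched_tracks or di in assigned:
            continue
        assigned[di] = tid
        matched_tracks.add(tid)
    for di in range(len(detections)):
        if di not in assigned:
            assigned[di] = next_id  # new panicle appears (e.g., new heading)
            next_id += 1
    return assigned
```

Chaining these per-frame assignments yields one identity per panicle across the time series, from which per-panicle flowering onset and duration can be read off.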
Affiliation(s)
- Qinyang Zhou
- College of Agriculture, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Wei Guo
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Midori-cho, Nishi-Tokyo, Tokyo 188-0002, Japan
- Na Chen
- College of Agriculture, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Ze Wang
- College of Agriculture, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Ganghua Li
- College of Agriculture, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Yanfeng Ding
- College of Agriculture, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Seishi Ninomiya
- College of Agriculture, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Midori-cho, Nishi-Tokyo, Tokyo 188-0002, Japan
- Yue Mu
- College of Agriculture, Academy for Advanced Interdisciplinary Studies, Collaborative Innovation Center for Modern Crop Production co-sponsored by Province and Ministry, Nanjing Agricultural University, Nanjing 210095, China
5
Sun B, Zhou W, Zhu S, Huang S, Yu X, Wu Z, Lei X, Yin D, Xia H, Chen Y, Deng F, Tao Y, Cheng H, Jin X, Ren W. Universal detection of curved rice panicles in complex environments using aerial images and improved YOLOv4 model. Front Plant Sci 2022; 13:1021398. [PMID: 36420030 PMCID: PMC9676644 DOI: 10.3389/fpls.2022.1021398] [Received: 08/17/2022] [Accepted: 10/07/2022] [Indexed: 06/16/2023]
Abstract
Accurate and rapid identification of the effective number of panicles per unit area is crucial for assessing rice yield. As part of agricultural development, manual observation of effective panicles in the paddy field is being replaced by unmanned aerial vehicle (UAV) imaging combined with target detection modeling. However, UAV images of curved hybrid Indica rice panicles in complex field environments are characterized by overlap, occlusion, and dense distribution, imposing challenges on rice panicle detection models. This paper proposes a universal curved-panicle detection method that combines UAV images of different types of hybrid Indica rice panicles (leaf-above-spike, spike-above-leaf, and middle type) from four ecological sites using an improved You Only Look Once version 4 (YOLOv4) model. MobileNetV2 is used as a lightweight backbone feature extraction network, together with a focal loss and a convolutional block attention module, to improve detection of curved rice panicles of different varieties. Moreover, soft non-maximum suppression is used to address rice panicle occlusion in the dataset. The model achieves a single-image detection rate of 44.46 FPS, a mean average precision of 90.32%, a recall of 82.36%, and an F1 score of 0.89, increases of 6.2%, 0.12%, and 16.24%, respectively, over the original YOLOv4 model. The model exhibits superior performance in identifying different strain types in both mixed and independent datasets, indicating its feasibility as a general model for detecting different types of rice panicles at the heading stage.
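Soft non-maximum suppression, used above for occluded panicles, decays the scores of overlapping boxes instead of deleting them, so a heavily occluded panicle is not wiped out by a neighbour's higher-scoring box. A minimal linear-decay sketch (the abstract does not state whether linear or Gaussian decay is used; linear is shown here as one common choice):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def soft_nms(boxes, scores, iou_thr=0.5, score_thr=0.1):
    """Linear soft-NMS: decay each remaining score by (1 - IoU) with
    the current top box, then drop boxes that fall below score_thr."""
    idx = list(range(len(boxes)))
    scores = list(scores)  # local copy; scores are mutated by decay
    kept = []
    while idx:
        top = max(idx, key=lambda i: scores[i])
        kept.append((boxes[top], scores[top]))
        idx.remove(top)
        for i in idx:
            o = iou(boxes[top], boxes[i])
            if o >= iou_thr:
                scores[i] *= (1.0 - o)  # soften instead of suppress
        idx = [i for i in idx if scores[i] >= score_thr]
    return kept
```

Unlike hard NMS, a genuinely distinct but overlapping panicle survives with a reduced score rather than vanishing.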
Affiliation(s)
- Boteng Sun
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Wei Zhou
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Shilin Zhu
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Song Huang
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Xun Yu
- Key Laboratory of Crop Physiology and Ecology, Ministry of Agriculture, Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
- Zhenyuan Wu
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Xiaolong Lei
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Dameng Yin
- Key Laboratory of Crop Physiology and Ecology, Ministry of Agriculture, Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
- Haixiao Xia
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Yong Chen
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Fei Deng
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Youfeng Tao
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Hong Cheng
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
- Xiuliang Jin
- Key Laboratory of Crop Physiology and Ecology, Ministry of Agriculture, Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
- Wanjun Ren
- State Key Laboratory of Crop Gene Exploration and Utilization in Southwest China, Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Sichuan Agricultural University, Chengdu, Sichuan, China
6
|
Yamagishi Y, Kato Y, Ninomiya S, Guo W. Image-Based Phenotyping for Non-Destructive In Situ Rice (Oryza sativa L.) Tiller Counting Using Proximal Sensing. Sensors (Basel) 2022; 22:5547. [PMID: 35898050 DOI: 10.3390/s22155547] [Received: 05/13/2022] [Revised: 07/06/2022] [Accepted: 07/11/2022] [Indexed: 11/17/2022]
Abstract
The number of tillers of rice significantly affects grain yield, but it is currently measured only by manually counting emerging tillers, most commonly by touching them by hand. This study develops an efficient, non-destructive method for estimating the number of tillers during the vegetative and reproductive stages under flooded conditions. Unlike popular deep-learning-based approaches requiring training data and computational resources, we propose a simple image-processing pipeline that follows the empirical principle of synchronously emerging leaves and tillers in rice morphogenesis. Field images were taken by an unmanned aerial vehicle at a very low flying height of 1.5 to 3 m above the rice canopy. The proposed image-processing pipeline, which includes binarization, skeletonization, and leaf-tip detection, was then used to count the number of long growing leaves, from which the tiller number was estimated. The estimated tiller number in a 1.1 m × 1.1 m area is significantly correlated with the actual number of tillers, with 60% of hills having an error of less than ±3 tillers. This study demonstrates the potential of the proposed image-sensing-based tiller-counting method to help agronomists with efficient, non-destructive field phenotyping.
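After binarization and skeletonization, the leaf-tip detection step reduces to counting skeleton pixels that have exactly one 8-connected neighbour (endpoints). A toy sketch on a hand-made skeleton (the real pipeline runs on skeletonized canopy images; the Y-shaped array below is illustrative only):

```python
def leaf_tips(skel):
    """Count endpoint pixels (exactly one 8-connected neighbour)
    in a binary skeleton given as a list of 0/1 rows."""
    h, w = len(skel), len(skel[0])
    tips = 0
    for y in range(h):
        for x in range(w):
            if not skel[y][x]:
                continue
            # count set neighbours in the 8-neighbourhood
            n = sum(skel[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w)
            if n == 1:
                tips += 1
    return tips

# a Y-shaped skeleton: two branch tips at the top, one at the bottom
skel = [
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
print(leaf_tips(skel))  # 3
```

The paper then maps the long-leaf count to a tiller estimate via the synchronous leaf/tiller emergence principle; that mapping is agronomic, not shown here.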
7
Srivastava A, Prakash J. Internet of Low-Altitude UAVs (IoLoUA): a methodical modeling on integration of Internet of “Things” with “UAV” possibilities and tests. Artif Intell Rev. [DOI: 10.1007/s10462-022-10225-1] [Indexed: 11/02/2022]
8
Zhang Y, Li M, Ma X, Wu X, Wang Y. High-Precision Wheat Head Detection Model Based on One-Stage Network and GAN Model. Front Plant Sci 2022; 13:787852. [PMID: 35720576 PMCID: PMC9201825 DOI: 10.3389/fpls.2022.787852] [Received: 10/01/2021] [Accepted: 05/06/2022] [Indexed: 06/12/2023]
Abstract
Counting wheat heads is a time-consuming process in agricultural production that is currently carried out primarily by humans. Manually identifying wheat heads and statistically analyzing the findings places rigorous demands on the workforce and is prone to error. With the advancement of machine vision technology, computer vision detection algorithms have made wheat head detection and counting feasible. To accomplish this traditionally labor-intensive task and tackle various difficulties in wheat images, a high-precision wheat head detection model with strong generalizability was presented based on a one-stage network structure. The model's structure follows that of the YOLO network, with several modules added and adjusted: the one-stage backbone network received an attention module and a feature fusion module, and the loss function was improved. Compared with various mainstream object detection networks, our model outperforms them, with a mAP of 0.688. In addition, an iOS-based intelligent wheat head counting mobile app was created, which can count the wheat heads in an image shot in an agricultural environment in less than a second.
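The mAP figure above averages per-class average precision (AP), which summarises the detector's precision-recall curve. A small sketch computing AP for one class from ranked detections, using the common "interpolate precision to the maximum value at higher recall" rule (a generic textbook formulation, not this paper's exact evaluation code):

```python
def average_precision(scored_hits, n_gt):
    """AP for one class. `scored_hits` is a list of (score, is_tp)
    for every detection; `n_gt` is the number of ground-truth boxes.
    Computes the area under the interpolated precision-recall curve."""
    hits = sorted(scored_hits, key=lambda t: -t[0])  # rank by confidence
    tp = fp = 0
    prec, rec = [], []
    for _, is_tp in hits:
        tp += is_tp
        fp += not is_tp
        prec.append(tp / (tp + fp))
        rec.append(tp / n_gt)
    # right-side interpolation: precision at recall r becomes the
    # maximum precision achieved at any recall >= r
    for i in range(len(prec) - 2, -1, -1):
        prec[i] = max(prec[i], prec[i + 1])
    ap, prev_r = 0.0, 0.0
    for p, r in zip(prec, rec):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

mAP is then the mean of this AP over classes (and, in COCO-style evaluation, over IoU thresholds as well).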
Affiliation(s)
- Yan Zhang
- College of Information and Electrical Engineering, China Agricultural University, Beijing, China
- Manzhou Li
- College of Plant Protection, China Agricultural University, Beijing, China
- Xiaoxiao Ma
- College of Information and Electrical Engineering, China Agricultural University, Beijing, China
- Xiaotong Wu
- College of Economics and Management, China Agricultural University, Beijing, China
- Yaojun Wang
- College of Information and Electrical Engineering, China Agricultural University, Beijing, China
9
Abstract
The counting of wheat heads is labor-intensive work in agricultural production that is at present mainly done by humans; manual identification and statistics are time-consuming and error-prone. With the development of machine-vision technologies, it has become possible to identify and count wheat heads with computer vision detection algorithms. Based on a one-stage network framework, the Wheat Detection Net (WDN) model was proposed for wheat head detection and counting. Owing to the characteristics of wheat head recognition, an attention module and a feature fusion module were added to the one-stage backbone network, and the loss function was optimized as well. The model was evaluated on a test set and compared with mainstream object detection networks. The results indicate that the mAP and FPS of the WDN model are better than those of the other models, with the mAP of WDN reaching 0.903. Furthermore, an intelligent wheat head counting system was developed for iOS, which can report the number of wheat heads in a photo of a crop within 1 s.
10
Abstract
In contrast to the rapid advances made in plant genotyping, plant phenotyping is considered a bottleneck in plant science. This has promoted high-throughput plant phenotyping (HTP) studies, resulting in an exponential increase in phenotyping-related publications. HTP technologies were originally developed for indoor use on model plant species under controlled environments, but the focus subsequently shifted to crops in the field. Although field HTP is much more difficult to conduct than HTP in controlled environments due to unstable environmental conditions, recent technological advances have allowed these difficulties to be overcome, enabling rapid, efficient, non-destructive, non-invasive, quantitative, repeatable, and objective phenotyping. Recent HTP developments have been accelerated by advances in data analysis, sensors, and robot technologies, including machine learning, image analysis, three-dimensional (3D) reconstruction, image sensors, laser sensors, environmental sensors, and drones, along with high-speed computational resources. This article provides an overview of recent HTP technologies, focusing mainly on canopy-based phenotypes of major crops, such as canopy height, canopy coverage, canopy biomass, and canopy stressed appearance, in addition to crop organ detection and counting in the field. Current topics in field HTP are also presented, followed by a discussion of the low rate of adoption of HTP in practical breeding programs.
Collapse
Affiliation(s)
- Seishi Ninomiya
- Graduate School of Agriculture and Life Sciences, The University of Tokyo, Nishitokyo, Tokyo 188-0002, Japan
- Plant Phenomics Research Center, Nanjing Agricultural University, Nanjing, China
- Corresponding author (e-mail: )
11
Wang X, Yang W, Lv Q, Huang C, Liang X, Chen G, Xiong L, Duan L. Field rice panicle detection and counting based on deep learning. Front Plant Sci 2022; 13:966495. [PMID: 36035660 PMCID: PMC9416702 DOI: 10.3389/fpls.2022.966495] [Received: 06/11/2022] [Accepted: 07/26/2022] [Indexed: 05/17/2023]
Abstract
Panicle number is directly related to rice yield, so panicle detection and counting has always been one of the most important research topics. Panicle counting is challenging due to factors such as high density, heavy occlusion, and large variation in size, shape, and posture. Deep learning provides state-of-the-art performance in object detection and counting, but large images generally need to be resized to fit into video memory, and small panicles are missed when the original field image is extremely large. In this paper, we propose a deep-learning-based rice panicle detection and counting method designed specifically for field images of very large size. Different object detectors were compared, and YOLOv5 was selected, with a MAPE of 3.44% and an accuracy of 92.77%. Specifically, we propose a new method for removing repeated detections and show that it outperforms existing NMS methods. The proposed method proved robust and accurate for counting panicles in field rice images under different illumination, rice accessions, and input sizes, and also performed well on UAV images. In addition, an open-access, user-friendly web portal was developed so that rice researchers can use the method conveniently.
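The resizing problem above is usually avoided by tiling: the very large image is split into overlapping tiles, detection runs per tile, boxes are shifted back to global coordinates, and detections repeated in the overlaps are removed. The paper proposes its own removal method; a generic IoU-based de-duplication across tiles is shown below as a baseline (not the paper's algorithm):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_tile_detections(tile_dets, iou_thr=0.6):
    """tile_dets: list of (tile_x, tile_y, boxes, scores), with boxes
    in tile-local coordinates. Shifts boxes to global coordinates and
    drops lower-scored duplicates that overlap an already-kept box."""
    pool = []
    for tx, ty, boxes, scores in tile_dets:
        for (x1, y1, x2, y2), s in zip(boxes, scores):
            pool.append(((x1 + tx, y1 + ty, x2 + tx, y2 + ty), s))
    pool.sort(key=lambda t: -t[1])  # keep higher-scored copy of a duplicate
    kept = []
    for box, s in pool:
        if all(iou(box, kb) < iou_thr for kb, _ in kept):
            kept.append((box, s))
    return kept
```

Because tiles are never downscaled, small panicles retain enough pixels for the detector, which is the point of the large-image design above.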
12
Affiliation(s)
- Jin Sun
- College of Mechanical Engineering, Yangzhou University, Yangzhou, China
- Joint International Research Laboratory of Agriculture and Agri‐Product Safety, The Ministry of Education of China, Yangzhou University, Yangzhou, China
- Department of Informatics, University of Leicester, Leicester, UK
- Yang Zhang
- College of Mechanical Engineering, Yangzhou University, Yangzhou, China
- Joint International Research Laboratory of Agriculture and Agri‐Product Safety, The Ministry of Education of China, Yangzhou University, Yangzhou, China
- Xinglong Zhu
- College of Mechanical Engineering, Yangzhou University, Yangzhou, China
- Joint International Research Laboratory of Agriculture and Agri‐Product Safety, The Ministry of Education of China, Yangzhou University, Yangzhou, China
- Yu‐Dong Zhang
- Department of Informatics, University of Leicester, Leicester, UK
13
Guo Y, Li S, Zhang Z, Li Y, Hu Z, Xin D, Chen Q, Wang J, Zhu R. Automatic and Accurate Calculation of Rice Seed Setting Rate Based on Image Segmentation and Deep Learning. Front Plant Sci 2021; 12:770916. [PMID: 34970287 PMCID: PMC8712771 DOI: 10.3389/fpls.2021.770916]
Abstract
The rice seed setting rate (RSSR) is an important component in calculating rice yield and a key phenotype for its genetic analysis. Automatic calculation of RSSR through computer vision has great significance for rice yield prediction. The basic premise for calculating RSSR is accurate, high-throughput identification of rice grains. In this study, we propose a method based on image segmentation and deep learning to automatically identify rice grains and calculate RSSR. From images of the rice panicle, our proposed automatic image segmentation method detects full and empty grains, after which RSSR is calculated by our proposed rice seed setting rate optimization algorithm (RSSROA). When the method was used to predict RSSR, the average identification accuracy reached 99.43%. The method has therefore proven to be an effective, non-invasive approach for high-throughput identification and calculation of RSSR, and it is also applicable to soybean, wheat, and other crops with similar characteristics.
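Once full and empty grains have been counted on a panicle, the seed setting rate itself reduces to a ratio of grain counts. A minimal sketch (the paper's RSSROA applies an optimization step not modeled here):

```python
def seed_setting_rate(full_grains, empty_grains):
    """Seed setting rate as filled grains over total grains detected on a
    panicle. The paper's RSSROA refines this raw ratio; that refinement is
    not modeled in this sketch."""
    total = full_grains + empty_grains
    return full_grains / total if total else 0.0
```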
Affiliation(s)
- Yixin Guo
- College of Engineering, Northeast Agricultural University, Harbin, China
- Shuai Li
- College of Engineering, Northeast Agricultural University, Harbin, China
- Zhanguo Zhang
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Yang Li
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
- Zhenbang Hu
- Agricultural College, Northeast Agricultural University, Harbin, China
- Dawei Xin
- Agricultural College, Northeast Agricultural University, Harbin, China
- Qingshan Chen
- Agricultural College, Northeast Agricultural University, Harbin, China
- *Correspondence: Qingshan Chen
- Jingguo Wang
- Agricultural College, Northeast Agricultural University, Harbin, China
- Rongsheng Zhu
- College of Arts and Sciences, Northeast Agricultural University, Harbin, China
14
Hosseiny B, Rastiveis H, Homayouni S. An Automated Framework for Plant Detection Based on Deep Simulated Learning from Drone Imagery. Remote Sensing 2020; 12:3521. [DOI: 10.3390/rs12213521]
Abstract
Traditional mapping and monitoring of agricultural fields are expensive, laborious, and prone to human error. Technological advances in platforms and sensors, together with breakthroughs in artificial intelligence (AI) and deep learning (DL) for intelligent data processing, have improved remote sensing applications for precision agriculture (PA). However, providing ground-truth data for model training is a time-consuming and tedious task that may itself contain errors. This paper proposes an automated and fully unsupervised framework based on image processing and DL for plant detection in agricultural land from very-high-resolution drone imagery. The framework's main idea is to automatically generate an unlimited amount of simulated training data from the input image. This capability addresses the biggest drawback of DL methods: their need for a considerable amount of training data. The framework's core is a Faster R-CNN object detector with a ResNet-101 backbone. Its efficiency was evaluated on two image sets from two cornfields, acquired by an RGB camera mounted on a drone. The results show an average counting accuracy of 90.9% and, based on the average Hausdorff distance (AHD), an average object detection localization error of 11 pixels. The resulting mean precision, recall, and F1 for plant detection were 0.868, 0.849, and 0.855, respectively, which is promising for an unsupervised plant detection method.
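The annotation-for-free idea behind such simulated learning, where every generated object carries an exact bounding box by construction, can be sketched as follows. The template sizes and uniform placement scheme here are illustrative assumptions; the paper derives its simulated patches from the input image itself.

```python
import random

def simulate_training_sample(img_w, img_h, template_sizes, n_objects, seed=None):
    """Return ground-truth bounding boxes for one simulated sample: each
    object is a randomly chosen template placed at a random position, so
    its annotation is known without any manual labeling. Templates are
    assumed smaller than the image."""
    rng = random.Random(seed)
    boxes = []
    for _ in range(n_objects):
        tw, th = rng.choice(template_sizes)
        x = rng.randrange(0, img_w - tw)
        y = rng.randrange(0, img_h - th)
        boxes.append((x, y, x + tw, y + th))
    return boxes
```

A detector such as Faster R-CNN can then be trained on an arbitrary number of these (image, boxes) pairs before being applied to the real imagery.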
15
Hoeser T, Bachofer F, Kuenzer C. Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review—Part II: Applications. Remote Sensing 2020; 12:3053. [DOI: 10.3390/rs12183053]
Abstract
In Earth observation (EO), large-scale land-surface dynamics are traditionally analyzed by investigating aggregated classes. The increase in data with very high spatial resolution enables investigations at a fine-grained feature level, which can help us better understand the dynamics of land surfaces by taking object dynamics into account. To extract fine-grained features and objects, the most popular deep learning model for image analysis, the convolutional neural network (CNN), is commonly used. In this review, we provide a comprehensive overview of the impact of deep learning on EO applications by reviewing 429 studies on image segmentation and object detection with CNNs. We extensively examine the spatial distribution of study sites, employed sensors, used datasets and CNN architectures, and give a thorough overview of EO applications that used CNNs. Our main finding is that CNNs are in an advanced transition phase from computer vision to EO. On this basis, we argue that in the near future, investigations that analyze object dynamics with CNNs will have a significant impact on EO research. With a focus on EO applications in this Part II, we complete the methodological review provided in Part I.
16
Hayat MA, Wu J, Cao Y. Unsupervised Bayesian learning for rice panicle segmentation with UAV images. Plant Methods 2020; 16:18. [PMID: 32123536 PMCID: PMC7035759 DOI: 10.1186/s13007-020-00567-8]
Abstract
BACKGROUND In this paper, an unsupervised Bayesian learning method is proposed to perform rice panicle segmentation with optical images taken by unmanned aerial vehicles (UAVs) over paddy fields. Unlike existing supervised learning methods that require a large amount of labeled training data, the unsupervised approach detects panicle pixels in UAV images by analyzing the statistical properties of pixels in an image, without a training phase. Under the Bayesian framework, pixel intensities are assumed to follow a multivariate Gaussian mixture model (GMM), with different components in the GMM corresponding to different categories, such as panicle, leaves, or background. The prevalence of each category is characterized by the weight associated with its component in the GMM. The model parameters are learned iteratively using the Markov chain Monte Carlo (MCMC) method with Gibbs sampling, without the need for labeled training data. RESULTS Applying the unsupervised Bayesian learning algorithm to diverse UAV images achieves an average recall, precision, and F1 score of 96.49%, 72.31%, and 82.10%, respectively. These results outperform existing supervised learning approaches. CONCLUSIONS Experimental results demonstrate that the proposed method can accurately identify panicle pixels in UAV images taken under diverse conditions.
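The final labeling step of such a GMM can be sketched with a 1-D intensity stand-in for the paper's multivariate model. The component weights, means, and standard deviations are assumed already learned (the paper learns them by Gibbs sampling); all numbers in the example below are illustrative, not from the paper.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify_pixels(pixels, components):
    """Label each pixel intensity with the GMM component of highest
    posterior mass (weight * likelihood). components is a list of
    (weight, mean, std, label) tuples with parameters assumed known."""
    labels = []
    for x in pixels:
        scores = [(w * gaussian_pdf(x, mu, s), label)
                  for w, mu, s, label in components]
        labels.append(max(scores)[1])
    return labels
```

In the paper the parameters themselves are sampled iteratively (Gibbs sampling alternates between re-assigning pixels and re-drawing component parameters); this sketch shows only the assignment given fixed parameters.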
Affiliation(s)
- Md Abul Hayat
- Department of Electrical Engineering, University of Arkansas, Fayetteville, 72701 USA
- Jingxian Wu
- Department of Electrical Engineering, University of Arkansas, Fayetteville, 72701 USA
- Yingli Cao
- Department of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, 110041 China
17
Labib NS, Danoy G, Musial J, Brust MR, Bouvry P. Internet of Unmanned Aerial Vehicles-A Multilayer Low-Altitude Airspace Model for Distributed UAV Traffic Management. Sensors (Basel) 2019; 19:E4779. [PMID: 31684133 DOI: 10.3390/s19214779]
Abstract
The rapid adoption of the Internet of Things (IoT) has encouraged the integration of new connected devices, such as Unmanned Aerial Vehicles (UAVs), into the ubiquitous network. UAVs promise a pragmatic solution to the limitations of existing terrestrial IoT infrastructure and bring new means of delivering IoT services through a wide range of applications. Owing to their potential, UAVs are expected to soon dominate the low-altitude airspace over populated cities. This introduces new research challenges, such as the safe management of UAV operations under high traffic demand. This paper proposes a novel way of structuring the uncontrolled low-altitude airspace, with the aim of addressing the complex problem of UAV traffic management at an abstract level. The work hence introduces a model of the airspace as a weighted multilayer network of nodes and airways and presents a set of experimental simulation results using three UAV traffic management heuristics.