1
Zhang Q, Luan R, Wang M, Zhang J, Yu F, Ping Y, Qiu L. Research Progress of Spectral Imaging Techniques in Plant Phenotype Studies. PLANTS (BASEL, SWITZERLAND) 2024; 13:3088. [PMID: 39520006 PMCID: PMC11548186 DOI: 10.3390/plants13213088] [Received: 09/02/2024] [Revised: 10/25/2024] [Accepted: 10/31/2024] [Indexed: 11/16/2024]
Abstract
Spectral imaging techniques have been widely applied in plant phenotype analysis to improve plant trait selection and genetic gain. The latest developments and applications of various optical imaging techniques in plant phenotyping were reviewed, and their advantages and applicability were compared. X-ray computed tomography (X-ray CT) and light detection and ranging (LiDAR) are more suitable for the three-dimensional reconstruction of plant surfaces, tissues, and organs. Chlorophyll fluorescence imaging (ChlF) and thermal imaging (TI) can be used to measure the physiological phenotype characteristics of plants. Specific symptoms caused by nutrient deficiency can be detected by hyperspectral and multispectral imaging, LiDAR, and ChlF. Future plant phenotype research based on spectral imaging can be more closely integrated with plant physiological processes. It can more effectively support research in related disciplines, such as metabolomics and genomics, and focus on micro-scale activities, such as oxygen transport and intercellular chlorophyll transmission.
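As a small numerical illustration of one technique the review surveys, the following Python sketch computes the normalized difference vegetation index (NDVI) from co-registered red and near-infrared bands and flags low-NDVI pixels as potentially nutrient-deficient. The reflectance values and the 0.4 threshold are illustrative assumptions, not values taken from the review.

import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from co-registered NIR and red bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

# Synthetic reflectance values; real data would come from a multispectral camera.
# The 0.4 stress threshold is illustrative only.
nir_band = np.array([[0.55, 0.60], [0.30, 0.25]])
red_band = np.array([[0.10, 0.08], [0.20, 0.22]])
index = ndvi(nir_band, red_band)
stressed = index < 0.4  # low-NDVI pixels flagged as potentially nutrient-deficient
print(index.round(2), stressed, sep="\n")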
Affiliation(s)
- Rupeng Luan
- Institute of Data Science and Agricultural Economics, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; (Q.Z.); (J.Z.); (F.Y.); (Y.P.); (L.Q.)
- Ming Wang
- Institute of Data Science and Agricultural Economics, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China; (Q.Z.); (J.Z.); (F.Y.); (Y.P.); (L.Q.)
2
Lu T, Wan L, Qi S, Gao M. Land Cover Classification of UAV Remote Sensing Based on Transformer-CNN Hybrid Architecture. SENSORS (BASEL, SWITZERLAND) 2023; 23:5288. [PMID: 37300015 DOI: 10.3390/s23115288] [Received: 04/06/2023] [Revised: 05/29/2023] [Accepted: 05/31/2023] [Indexed: 06/12/2023]
Abstract
High-precision land cover mapping from remote sensing images using intelligent extraction methods is an important research field for many scholars. In recent years, deep learning, represented by convolutional neural networks, has been introduced into the field of land cover remote sensing mapping. Because a convolution operation is good at extracting local features but has limitations in modeling long-distance dependencies, a semantic segmentation network with a dual encoder, DE-UNet, is proposed in this paper. The Swin Transformer and a convolutional neural network are used to design the hybrid architecture: the Swin Transformer attends to multi-scale global features, while local features are learned through the convolutional neural network. The integrated features take into account both global and local context information. In the experiment, remote sensing images from UAVs were used to test three deep learning models, including DE-UNet. DE-UNet achieved the highest classification accuracy, with an average overall accuracy 0.28% and 4.81% higher than UNet and UNet++, respectively. This indicates that introducing a Transformer enhances the model's fitting ability.
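A minimal PyTorch sketch of the dual-encoder idea summarized above: a small convolutional branch learns local features, a generic Transformer encoder over patch tokens models global context, and the two feature maps are fused before a per-pixel classifier. This is a conceptual toy rather than the authors' DE-UNet; the layer widths, the 8x8 patch size, and the use of nn.TransformerEncoder in place of a Swin Transformer are assumptions.

import torch
import torch.nn as nn

class DualEncoderSeg(nn.Module):
    """Toy dual-encoder segmenter: a CNN branch for local features and a
    Transformer branch for global context, fused before a 1x1 classifier."""
    def __init__(self, in_ch=3, num_classes=6, dim=64):
        super().__init__()
        # CNN encoder: local texture and edge features
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        # Transformer encoder over 8x8 patch tokens: long-range dependencies
        self.patch = nn.Conv2d(in_ch, dim, kernel_size=8, stride=8)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv2d(2 * dim, num_classes, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        local_feat = self.cnn(x)                            # (B, dim, H, W)
        tokens = self.patch(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        global_feat = self.transformer(tokens)              # (B, N, dim)
        global_feat = global_feat.transpose(1, 2).reshape(b, -1, h // 8, w // 8)
        global_feat = nn.functional.interpolate(global_feat, size=(h, w), mode="bilinear", align_corners=False)
        fused = torch.cat([local_feat, global_feat], dim=1)  # fuse local and global context
        return self.head(fused)                              # per-pixel class logits

logits = DualEncoderSeg()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 6, 64, 64])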
Affiliation(s)
- Tingyu Lu
- College of Geographical Sciences, Harbin Normal University, Harbin 150025, China
- Heilongjiang Province Key Laboratory of Geographical Environment Monitoring and Spatial Information Service in Cold Regions, Harbin Normal University, Harbin 150025, China
- Luhe Wan
- College of Geographical Sciences, Harbin Normal University, Harbin 150025, China
- Heilongjiang Province Key Laboratory of Geographical Environment Monitoring and Spatial Information Service in Cold Regions, Harbin Normal University, Harbin 150025, China
- Shaoqun Qi
- College of Geographical Sciences, Harbin Normal University, Harbin 150025, China
- Heilongjiang Province Key Laboratory of Geographical Environment Monitoring and Spatial Information Service in Cold Regions, Harbin Normal University, Harbin 150025, China
- Meixiang Gao
- Department of Geography and Spatial Information Techniques, Ningbo University, Ningbo 315211, China
3
Abebe AM, Kim Y, Kim J, Kim SL, Baek J. Image-Based High-Throughput Phenotyping in Horticultural Crops. PLANTS (BASEL, SWITZERLAND) 2023; 12:2061. [PMID: 37653978 PMCID: PMC10222289 DOI: 10.3390/plants12102061] [Received: 04/12/2023] [Revised: 05/12/2023] [Accepted: 05/18/2023] [Indexed: 09/02/2023]
Abstract
Plant phenotyping is the primary task of any plant breeding program, and accurate measurement of plant traits is essential to select genotypes with better quality, high yield, and climate resilience. The majority of currently used phenotyping techniques are destructive and time-consuming. Recently, the development of various sensors and imaging platforms for rapid and efficient quantitative measurement of plant traits has become the mainstream approach in plant phenotyping studies. Here, we reviewed the trends in image-based high-throughput phenotyping methods applied to horticultural crops. High-throughput phenotyping is carried out using various types of imaging platforms developed for indoor or field conditions. We highlighted the applications of different imaging platforms in the horticulture sector along with their advantages and limitations. Furthermore, the principles and applications of commonly used imaging techniques for high-throughput plant phenotyping, namely visible light (RGB) imaging, thermal imaging, chlorophyll fluorescence, hyperspectral imaging, and tomographic imaging, are discussed. High-throughput phenotyping has been widely used to phenotype various horticultural traits, including morphological, physiological, biochemical, and yield traits as well as biotic and abiotic stress responses. Moreover, high-throughput phenotyping with the help of various optical sensors will lead to the discovery of new phenotypic traits that remain to be explored. We summarized the applications of image analysis for the quantitative evaluation of various traits with several examples of horticultural crops in the literature. Finally, we summarized the current trend of high-throughput phenotyping in horticultural crops and highlighted future perspectives.
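To make the RGB-imaging part of the review concrete, the sketch below estimates a simple morphological trait, projected canopy area, by thresholding the Excess Green index of a top-view RGB image. The 0.1 ExG threshold and the per-pixel ground area are illustrative assumptions, not values from the review.

import numpy as np

def projected_canopy_area(rgb: np.ndarray, pixel_area_cm2: float = 0.01) -> float:
    """Estimate projected canopy area from a top-view RGB image using the
    Excess Green index (ExG = 2g - r - b) with a fixed threshold.
    The 0.1 threshold and the pixel ground resolution are illustrative."""
    img = rgb.astype(np.float64)
    total = img.sum(axis=2) + 1e-9
    r, g, b = img[..., 0] / total, img[..., 1] / total, img[..., 2] / total
    exg = 2 * g - r - b
    plant_mask = exg > 0.1
    return plant_mask.sum() * pixel_area_cm2

# Synthetic 3x3 image: greenish pixels count toward canopy area.
demo = np.array([[[40, 120, 30]] * 3, [[90, 90, 90]] * 3, [[35, 110, 40]] * 3], dtype=np.uint8)
print(projected_canopy_area(demo))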
Affiliation(s)
- Jeongho Baek
- Department of Agricultural Biotechnology, National Institute of Agricultural Science, Rural Development Administration, Jeonju 54874, Republic of Korea
4
Huang H, Mei H, Yan T, Wang B, Xu F, Zhou D. Performance-guaranteed distributed control for multiple plant protection UAVs with collision avoidance and a directed topology. FRONTIERS IN PLANT SCIENCE 2022; 13:949857. [PMID: 36212289 PMCID: PMC9534514 DOI: 10.3389/fpls.2022.949857] [Received: 05/21/2022] [Accepted: 08/05/2022] [Indexed: 06/16/2023]
Abstract
The urgent requirement for improving the efficiency of agricultural plant protection operations has spurred considerable interest in multiple plant protection UAV systems. In this study, a performance-guaranteed distributed control scheme is developed to address the control of multiple plant protection UAV systems with collision avoidance and a directed topology. First, a novel concept called the predetermined time performance function (PTPF) is proposed, such that the tracking error can converge to an arbitrarily small preassigned region in finite time. Second, a second-order filter for each UAV is combined with the protocol to estimate the leader's information. The distributed protocol avoids the use of the asymmetric Laplacian matrix of a directed graph and thus resolves a key difficulty in the control design. Furthermore, by introducing a collision prediction mechanism, a repulsive force field is constructed between each UAV and dynamic obstacles to avoid collisions. Finally, it is rigorously proved that consensus of the multiple plant protection UAV system can be achieved while guaranteeing the predetermined time performance. A numerical simulation is carried out to verify the effectiveness of the presented method, showing that the multi-UAV system can fulfill time-constrained plant protection tasks.
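The following minimal numeric sketch illustrates two ingredients named in the abstract, a predetermined-time performance envelope and leader-following consensus over a directed chain graph, while omitting the collision-avoidance force field and the rigorous controller design. All gains, times, and initial states are illustrative assumptions, not the paper's parameters.

import numpy as np

# Minimal sketch (not the paper's controller): a predetermined-time performance
# function (PTPF) that shrinks the allowed tracking error to a preset bound by
# time T, and a first-order leader-following consensus update over a directed
# chain graph. All gains, times, and initial states are illustrative assumptions.
T, rho0, rho_inf = 5.0, 4.0, 0.05            # preassigned convergence time and error bounds

def ptpf(t):
    """Allowed error envelope: decays from rho0 to rho_inf by time T, constant afterwards."""
    return rho_inf if t >= T else (rho0 - rho_inf) * (1.0 - t / T) ** 3 + rho_inf

dt, k = 0.005, 2.5                           # integration step and consensus gain
leader = 1.0                                 # leader reference state (held constant here)
x = np.array([3.0, -2.0, 4.0])               # follower UAV states (scalars for clarity)
for step in range(int(8.0 / dt)):
    # Directed chain topology: follower 0 sees the leader, 1 sees 0, 2 sees 1.
    x[0] += dt * k * (leader - x[0])
    x[1] += dt * k * (x[0] - x[1])
    x[2] += dt * k * (x[1] - x[2])

errors = np.abs(x - leader)
print("final tracking errors:", errors.round(6), "| PTPF bound at t=8 s:", ptpf(8.0))
assert np.all(errors < ptpf(8.0))            # errors end inside the predetermined envelope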
Affiliation(s)
- Hanqiao Huang
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an, China
- Hantong Mei
- School of Astronautics, Northwestern Polytechnical University, Xi'an, China
- Tian Yan
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an, China
- Bolan Wang
- Shanghai Electro-Mechanical Engineering Institute, Shanghai, China
- Feihong Xu
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an, China
- Daming Zhou
- School of Astronautics, Northwestern Polytechnical University, Xi'an, China
5
Multisensor UAS mapping of Plant Species and Plant Functional Types in Midwestern Grasslands. REMOTE SENSING 2022. [DOI: 10.3390/rs14143453] [Indexed: 02/01/2023]
Abstract
Uncrewed aerial systems (UASs) have emerged as powerful ecological observation platforms capable of filling critical spatial and spectral observation gaps in plant physiological and phenological traits that have been difficult to measure from space-borne sensors. Despite recent technological advances, the high cost of drone-borne sensors limits the widespread application of UAS technology across scientific disciplines. Here, we evaluate the tradeoffs between off-the-shelf and sophisticated drone-borne sensors for mapping plant species and plant functional types (PFTs) within a diverse grassland. Specifically, we compared species and PFT mapping accuracies derived from hyperspectral, multispectral, and RGB imagery fused with light detection and ranging (LiDAR) or structure-from-motion (SfM)-derived canopy height models (CHMs). Sensor-data fusions considered either a single observation period or near-monthly observation frequencies for the integration of phenological information (i.e., phenometrics). Results indicate that overall classification accuracies for plant species and PFTs were highest for hyperspectral and LiDAR-CHM fusions (78% and 89%, respectively), followed by multispectral and phenometric-SfM-CHM fusions (52% and 60%, respectively) and RGB and SfM-CHM fusions (45% and 47%, respectively). Our findings demonstrate clear tradeoffs in mapping accuracy between economical and expensive sensor configurations but highlight that off-the-shelf multispectral sensors may achieve accuracies comparable to those of sophisticated UAS sensors by integrating phenometrics into machine learning image classifiers.
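A minimal sketch of the sensor-data fusion workflow the abstract describes: spectral bands and a canopy height model are stacked into one feature matrix and classified with a random forest. The synthetic data, band names, and toy label rule are assumptions; real inputs would be co-registered UAV rasters flattened to pixel or object feature vectors.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative fusion for pixel-wise PFT classification: multispectral
# reflectance bands are stacked with a canopy height model (CHM) band and
# fed to a random forest. All values here are synthetic.
rng = np.random.default_rng(0)
n = 600
spectral = rng.random((n, 5))                    # e.g., blue, green, red, red-edge, NIR
chm = rng.random((n, 1)) * 2.0                   # canopy height in metres
labels = (spectral[:, 4] - spectral[:, 2] + 0.3 * chm[:, 0] > 0.6).astype(int)  # toy PFT label

features = np.hstack([spectral, chm])            # fusion = simple feature stacking
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))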
6
Extraction of Broad-Leaved Tree Crown Based on UAV Visible Images and OBIA-RF Model: A Case Study for Chinese Olive Trees. REMOTE SENSING 2022. [DOI: 10.3390/rs14102469] [Indexed: 11/17/2022]
Abstract
Chinese olive trees (Canarium album L.) are broad-leaved species that are widely planted in China. Accurately obtaining tree crown information provides important data for evaluating Chinese olive tree growth status, water and fertilizer management, and yield estimation. To this end, this study first used unmanned aerial vehicle (UAV) images in the visible band as the source of remote sensing (RS) data. Second, in addition to the spectral features of image objects, vegetation index, shape, texture, and terrain features were introduced. Finally, the extraction effect of different feature dimensions was analyzed with the random forest (RF) algorithm, and the performance of different classifiers was compared using the features retained after dimensionality reduction. The results showed that differences in feature dimensionality and importance were the main factors driving changes in extraction accuracy. RF had the best extraction effect among the current mainstream machine learning (ML) algorithms. In comparison with the pixel-based (PB) classification method, the object-based image analysis (OBIA) method can extract the features of each element of an RS image, which offers certain advantages. Therefore, the combination of the OBIA and RF algorithms is a good solution for Chinese olive tree crown (COTC) extraction from UAV visible-band images.
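A minimal sketch of the OBIA-RF step described above: per-object features (spectral means, visible-band vegetation indices, shape and texture measures) are ranked by random forest importance, and low-importance features are dropped before classification. The feature names, synthetic values, and 0.05 importance threshold are assumptions for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each image object is represented by a feature vector; an RF separates crown
# vs. background objects and its importances drive dimensionality reduction.
rng = np.random.default_rng(1)
names = ["mean_R", "mean_G", "mean_B", "ExG", "VARI", "compactness", "GLCM_entropy", "slope"]
X = rng.random((400, len(names)))
y = (0.7 * X[:, 3] + 0.3 * X[:, 5] + 0.1 * rng.standard_normal(400) > 0.55).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y)
ranked = sorted(zip(names, rf.feature_importances_), key=lambda p: p[1], reverse=True)
keep = [name for name, imp in ranked if imp > 0.05]      # simple importance threshold
print("ranked importances:", [(n_, round(i, 3)) for n_, i in ranked])
print("retained features:", keep)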
7
Comparison of Classical Methods and Mask R-CNN for Automatic Tree Detection and Mapping Using UAV Imagery. REMOTE SENSING 2022. [DOI: 10.3390/rs14020295] [Indexed: 11/17/2022]
Abstract
Detecting and mapping individual trees accurately and automatically from remote sensing images is of great significance for precision forest management. Many algorithms, including classical methods and deep learning techniques, have been developed and applied for tree crown detection from remote sensing images. However, few studies have evaluated the accuracy of different individual tree detection (ITD) algorithms and their data and processing requirements. This study explored the accuracy of ITD using the local maxima (LM) algorithm, marker-controlled watershed segmentation (MCWS), and Mask Region-based Convolutional Neural Networks (Mask R-CNN) in a young plantation forest with different test images. Manually delineated tree crowns from UAV imagery were used for accuracy assessment of the three methods, followed by an evaluation of their data processing and application requirements. Overall, Mask R-CNN made the best use of the information in multi-band input images for detecting individual trees. The results showed that the Mask R-CNN model with a multi-band combination produced higher accuracy than the model with a single-band image, and the RGB band combination achieved the highest accuracy for ITD (F1 score = 94.68%). Moreover, the Mask R-CNN models with multi-band images provided higher accuracies for ITD than the LM and MCWS algorithms. The LM and MCWS algorithms also achieved promising accuracies for ITD when the canopy height model (CHM) was used as the test image (F1 score = 87.86% for the LM algorithm and 85.92% for the MCWS algorithm). The LM and MCWS algorithms are easy to use and have lower computational requirements, but they are unable to identify tree species and are limited by algorithm parameters, which need to be adjusted for each classification. Deep learning, with its end-to-end learning approach, is very efficient and capable of deriving information from multi-layer images, but it requires an additional training set, substantial computing resources, and a large number of accurate training samples. This study provides valuable information for forestry practitioners to select an optimal approach for detecting individual trees.
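The two classical ITD methods compared above can be sketched in a few lines with scikit-image: local maxima propose treetops on a canopy height model, and marker-controlled watershed segmentation grows crowns around them. The synthetic two-tree CHM, the 5-pixel minimum peak distance, and the 2 m height threshold are illustrative assumptions.

import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic CHM with two Gaussian crowns; real CHMs come from LiDAR or SfM.
yy, xx = np.mgrid[0:60, 0:60]
chm = 8 * np.exp(-(((yy - 20) ** 2 + (xx - 20) ** 2) / 80)) \
    + 6 * np.exp(-(((yy - 40) ** 2 + (xx - 42) ** 2) / 60))

treetops = peak_local_max(chm, min_distance=5, threshold_abs=2.0)   # LM step
markers = np.zeros_like(chm, dtype=int)
markers[tuple(treetops.T)] = np.arange(1, len(treetops) + 1)
crowns = watershed(-chm, markers, mask=chm > 2.0)                   # MCWS step

print("detected treetops:", len(treetops))
print("crown pixel counts:", [int((crowns == i).sum()) for i in range(1, len(treetops) + 1)])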
8
Boost Precision Agriculture with Unmanned Aerial Vehicle Remote Sensing and Edge Intelligence: A Survey. REMOTE SENSING 2021. [DOI: 10.3390/rs13214387] [Indexed: 12/20/2022]
Abstract
In recent years, unmanned aerial vehicles (UAVs) have emerged as a popular and cost-effective technology for capturing high spatial and temporal resolution remote sensing (RS) images for a wide range of precision agriculture applications, helping to reduce costs and environmental impacts by providing detailed agricultural information to optimize field practices. Furthermore, deep learning (DL) has been successfully applied as an intelligent tool in agricultural applications such as weed detection and crop pest and disease detection. However, most DL-based methods place high demands on computation, memory, and network resources. Cloud computing can increase processing efficiency with high scalability and low cost, but it results in high latency and heavy pressure on network bandwidth. The emergence of edge intelligence, although still in its early stages, provides a promising solution for artificial intelligence (AI) applications on intelligent edge devices at the edge of the network, close to data sources; these devices have built-in processors enabling onboard analytics or AI (e.g., UAVs and Internet of Things gateways). Therefore, in this paper, a comprehensive survey of the latest developments in precision agriculture with UAV RS and edge intelligence is conducted for the first time. The major insights are as follows: (a) in terms of UAV systems, small or light fixed-wing or industrial rotor-wing UAVs are widely used in precision agriculture; (b) sensors on UAVs can provide multi-source datasets, but there are only a few public UAV datasets for intelligent precision agriculture, mainly from RGB sensors and a few from multispectral and hyperspectral sensors; (c) DL-based UAV RS methods can be categorized into classification, object detection, and segmentation tasks, and convolutional neural networks and recurrent neural networks are the most commonly used network architectures; (d) cloud computing is a common solution for UAV RS data processing, while edge computing brings the computation close to data sources; (e) edge intelligence is the convergence of artificial intelligence and edge computing, in which model compression, especially parameter pruning and quantization, is currently the most important and widely used technique, and typical edge resources include central processing units, graphics processing units, and field-programmable gate arrays.
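A minimal PyTorch sketch of the two model-compression techniques the survey highlights for edge intelligence, parameter pruning and quantization: weights are pruned by L1 magnitude and the pruned model is then dynamically quantized to int8 for CPU inference. The tiny placeholder network and the 50% sparsity level are assumptions, not a model from the survey.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder crop-classification MLP standing in for a DL model deployed at the edge.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 5))

# Parameter pruning: zero out the 50% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")          # make the sparsity permanent

sparsity = float((model[0].weight == 0).float().mean())
print(f"layer-0 sparsity after pruning: {sparsity:.2f}")

# Dynamic quantization: store Linear weights as int8 for smaller, faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)     # torch.Size([1, 5])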
9
Canopy Volume Extraction of Citrus reticulate Blanco cv. Shatangju Trees Using UAV Image-Based Point Cloud Deep Learning. REMOTE SENSING 2021. [DOI: 10.3390/rs13173437] [Indexed: 01/17/2023]
Abstract
Automatic acquisition of the canopy volume parameters of the Citrus reticulate Blanco cv. Shatangju tree is of great significance for precision management of the orchard. This research combined a point cloud deep learning algorithm with volume calculation algorithms to segment the canopies of Citrus reticulate Blanco cv. Shatangju trees. The 3D (three-dimensional) point cloud model of a Citrus reticulate Blanco cv. Shatangju orchard was generated from UAV oblique photogrammetry images. The segmentation effects of three deep learning models, PointNet++, MinkowskiNet, and FPConv, on Shatangju trees and the ground were compared. The following three volume algorithms were applied to calculate the volume of the Shatangju trees: convex hull by slices, a voxel-based method, and the 3D convex hull. Model accuracy was evaluated using the coefficient of determination (R2) and the root mean square error (RMSE). The results show that the overall accuracy of the MinkowskiNet model (94.57%) is higher than that of the other two models, indicating the best segmentation effect. The 3D convex hull algorithm achieved the highest R2 (0.8215) and the lowest RMSE (0.3186 m3) for the canopy volume calculation, which best reflects the real volume of Citrus reticulate Blanco cv. Shatangju trees. The proposed method is capable of rapid and automatic acquisition of the canopy volume of Citrus reticulate Blanco cv. Shatangju trees.
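Two of the volume algorithms named above can be sketched directly from an already-segmented crown point cloud: the 3D convex hull volume via scipy.spatial.ConvexHull and a voxel-counting estimate. The synthetic hemispherical crown and the 0.1 m voxel size are illustrative assumptions.

import numpy as np
from scipy.spatial import ConvexHull

# Synthetic hemispherical "crown" of radius 1 m standing in for a segmented tree point cloud.
rng = np.random.default_rng(2)
directions = rng.normal(size=(8000, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = rng.random((8000, 1)) ** (1 / 3)        # uniform density inside the unit ball
pts = directions * radii
pts[:, 2] = np.abs(pts[:, 2])                   # fold into the upper half-space

hull_volume = ConvexHull(pts).volume            # 3D convex hull method

voxel = 0.1                                     # voxel edge length in metres
occupied = np.unique(np.floor(pts / voxel).astype(int), axis=0)
voxel_volume = len(occupied) * voxel ** 3       # voxel-based method

print(f"convex hull: {hull_volume:.3f} m^3, voxel count: {voxel_volume:.3f} m^3")
print(f"hemisphere reference: {2 / 3 * np.pi:.3f} m^3")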