1
Li W, Wu S, Wen W, Lu X, Liu H, Zhang M, Xiao P, Guo X, Zhao C. Using high-throughput phenotype platform MVS-Pheno to reconstruct the 3D morphological structure of wheat. AOB PLANTS 2024; 16:plae019. [PMID: 38660049 PMCID: PMC11041051 DOI: 10.1093/aobpla/plae019] [Received: 01/09/2024] [Accepted: 03/23/2024] [Indexed: 04/26/2024]
Abstract
Studying plant morphological structure is of great significance for improving crop yield and achieving efficient use of resources. Three-dimensional (3D) information can describe the morphological and structural characteristics of crop plants more accurately, and its automatic acquisition is one of the key steps in plant morphological structure research. Taking wheat as the research object, we propose a point-cloud-driven 3D reconstruction method that achieves 3D structure reconstruction and plant morphology parameterization at the phytomer scale. Specifically, we use the MVS-Pheno platform to reconstruct the point cloud of wheat plants and segment organs with a deep learning algorithm. On this basis, we automatically reconstruct the 3D structure of leaves and tillers and extract the morphological parameters of wheat. The results show that the semantic segmentation accuracy of organs is 95.2% and the instance segmentation accuracy (AP50) is 0.665. The R2 values for the extracted leaf length, leaf width, leaf attachment height, stem-leaf angle, tiller length, and spike length were 0.97, 0.80, 1.00, 0.95, 0.99, and 0.95, respectively. This method can significantly improve the accuracy and efficiency of 3D morphological analysis of wheat plants, providing strong technical support for research in fields such as agricultural production optimization and genetic breeding.
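The parameter extraction step, measuring traits directly from segmented organ point clouds, can be illustrated with a minimal numpy sketch. The synthetic clouds, bin count, and function names below are illustrative assumptions, not the authors' MVS-Pheno implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for pre-segmented organs (the paper obtains these
# via deep-learning organ segmentation): a curved leaf plus stem points.
t = np.linspace(0, 1, 100)
leaf = np.column_stack([10 * t, np.zeros(100), 50 + 5 * np.sin(np.pi * t)])
stem = np.column_stack([np.zeros(200), np.zeros(200),
                        rng.uniform(0, 60, 200)])
plant = np.vstack([leaf, stem])

def plant_height(points):
    """Height as the vertical (z) extent of the whole plant cloud."""
    return points[:, 2].max() - points[:, 2].min()

def leaf_length(points, n_bins=20):
    """Approximate leaf length: order points along the first principal
    axis, average each bin into a skeleton point, and sum the lengths
    of the resulting polyline segments."""
    centered = points - points.mean(axis=0)
    axis = np.linalg.svd(centered, full_matrices=False)[2][0]
    order = np.argsort(centered @ axis)
    skeleton = np.array([b.mean(axis=0)
                         for b in np.array_split(points[order], n_bins)])
    return np.linalg.norm(np.diff(skeleton, axis=0), axis=1).sum()

print(round(plant_height(plant), 1))  # close to the 60-unit stem extent
print(round(leaf_length(leaf), 1))    # close to the leaf's true arc length
```

The polyline-skeleton trick is what makes the length measurement robust to leaf curvature; a straight end-to-end distance would underestimate curved leaves.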
Affiliation(s)
- Wenrui Li
- College of Information Engineering, Northwest A&F University, Xinong Road, Yangling, Shaanxi, Xianyang 712100, China
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Sheng Wu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Weiliang Wen
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Xianju Lu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Haishen Liu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Minggang Zhang
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Pengliang Xiao
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Xinyu Guo
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Chunjiang Zhao
- College of Information Engineering, Northwest A&F University, Xinong Road, Yangling, Shaanxi, Xianyang 712100, China
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
2
Quiñones R, Samal A, Das Choudhury S, Muñoz-Arriola F. OSC-CO2: coattention and cosegmentation framework for plant state change with multiple features. FRONTIERS IN PLANT SCIENCE 2023; 14:1211409. [PMID: 38023863 PMCID: PMC10644038 DOI: 10.3389/fpls.2023.1211409] [Received: 04/24/2023] [Accepted: 10/06/2023] [Indexed: 12/01/2023]
Abstract
Cosegmentation and coattention are extensions of traditional segmentation methods aimed at detecting a common object (or objects) in a group of images. Current cosegmentation and coattention methods are ineffective for objects, such as plants, that change their morphological state while being captured in different modalities and views. Object State Change using Coattention-Cosegmentation (OSC-CO2) is an end-to-end unsupervised deep-learning framework that enhances traditional segmentation techniques by processing, analyzing, selecting, and combining candidate segmentation results that may contain most of the target object's pixels, and then producing a final segmented image. The framework leverages coattention-based convolutional neural networks (CNNs) and cosegmentation-based dense conditional random fields (CRFs) to address segmentation accuracy in high-dimensional plant imagery with evolving plant objects. The efficacy of OSC-CO2 is demonstrated using plant growth sequences imaged with infrared, visible, and fluorescence cameras in multiple views on a remote-sensing, high-throughput phenotyping platform, and is evaluated using Jaccard index and precision measures. We also introduce CosegPP+, a structured dataset that provides quantitative information on the efficacy of our framework. Results show that OSC-CO2 outperformed state-of-the-art segmentation and cosegmentation methods, improving segmentation accuracy by 3% to 45%.
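The two evaluation measures named here are standard and easy to state precisely. A small numpy sketch of the Jaccard index and precision for binary masks (the toy masks are illustrative, not drawn from CosegPP+):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Intersection over union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

def precision(pred, truth):
    """Fraction of predicted foreground pixels that are truly foreground."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    positives = pred.sum()
    return np.logical_and(pred, truth).sum() / positives if positives else 1.0

truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1            # 4 true plant pixels
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 2:4] = 1             # 4 predicted pixels, 2 of them correct
print(round(jaccard_index(pred, truth), 3))  # 2 / 6 -> 0.333
print(round(precision(pred, truth), 3))      # 2 / 4 -> 0.5
```

Jaccard penalizes both missed and spurious pixels, while precision only penalizes spurious ones, which is why the two are reported together.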
Affiliation(s)
- Rubi Quiñones
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- Computer Science Department, Southern Illinois University Edwardsville, Edwardsville, IL, United States
- Ashok Samal
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- Sruti Das Choudhury
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Francisco Muñoz-Arriola
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
3
He W, Ye Z, Li M, Yan Y, Lu W, Xing G. Extraction of soybean plant trait parameters based on SfM-MVS algorithm combined with GRNN. FRONTIERS IN PLANT SCIENCE 2023; 14:1181322. [PMID: 37560031 PMCID: PMC10407792 DOI: 10.3389/fpls.2023.1181322] [Received: 03/09/2023] [Accepted: 07/06/2023] [Indexed: 08/11/2023]
Abstract
Soybean is an important grain and oil crop worldwide and is rich in nutritional value. Phenotypic morphology plays an important role in selecting and breeding excellent soybean varieties for high yield. Mainstream manual phenotypic measurement suffers from strong subjectivity, high labor intensity, and slow speed. To address these problems, a three-dimensional (3D) reconstruction method for soybean plants based on structure from motion (SfM) is proposed. First, the 3D point cloud of a soybean plant was reconstructed from multi-view images obtained with a smartphone based on the SfM algorithm. Second, low-pass filtering, Gaussian filtering, ordinary least squares (OLS) plane fitting, and Laplacian smoothing were combined to automatically segment the point cloud data into individual plants, stems, and leaves. Finally, eleven morphological traits, such as plant height, minimum bounding box volume per plant, leaf projection area, leaf projection length and width, and leaf tilt information, were accurately and nondestructively measured by the proposed leaf phenotype measurement (LPM) algorithm. Moreover, Support Vector Machine (SVM), Back Propagation Neural Network (BP), and General Regression Neural Network (GRNN) prediction models were established to predict and identify soybean plant varieties. The results indicated that, compared with manual measurement, the root mean square errors (RMSE) of plant height, leaf length, and leaf width were 0.9997, 0.2357, and 0.2666 cm, the mean absolute percentage errors (MAPE) were 2.7013%, 1.4706%, and 1.8669%, and the coefficients of determination (R2) were 0.9775, 0.9785, and 0.9487, respectively. The accuracy of predicting plant varieties from the six leaf parameters was highest when using GRNN, reaching 0.9211, with an RMSE of 18.3263. Based on the phenotypic traits, genetic differences among the C3, 47-6, and W82 soybeans were analyzed; as C3 is an insect-resistant line, trait parameters such as minimum bounding box volume per plant, number of leaves, minimum single-leaf bounding box size, and leaf projection area were compared. The results show that the proposed method can effectively and nondestructively extract the 3D phenotypic structure information of soybean plants and leaves, and has potential for application to other plants with dense leaves.
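The OLS plane-fitting step mentioned above also yields leaf tilt directly: fit z = ax + by + c to a leaf's points, then read the tilt angle off the plane normal. A numpy sketch on synthetic leaf points (the plane coefficients and noise level are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic leaf points near the plane z = 0.5x + 0.2y + 1.
xy = rng.uniform(-2, 2, size=(200, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + rng.normal(0, 0.01, 200)
leaf = np.column_stack([xy, z])

def fit_plane_ols(points):
    """OLS fit of z = a*x + b*y + c; returns coefficients and unit normal."""
    A = np.column_stack([points[:, :2], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])
    return (a, b, c), normal / np.linalg.norm(normal)

def tilt_deg(normal):
    """Leaf tilt: angle between the fitted plane and the horizontal."""
    return float(np.degrees(np.arccos(abs(normal[2]))))

(a, b, c), n = fit_plane_ols(leaf)
print(round(a, 2), round(b, 2), round(c, 2))  # close to 0.5 0.2 1.0
print(round(tilt_deg(n), 1))                  # about 28 degrees
```

Note that OLS minimizes vertical residuals, so it degrades for near-vertical leaves; a total-least-squares (PCA) fit avoids that limitation.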
Affiliation(s)
- Wei He
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Zhihao Ye
- Soybean Research Institute, Ministry of Agriculture and Rural Affairs (MARA) National Center for Soybean Improvement, Ministry of Agriculture and Rural Affairs (MARA) Key Laboratory of Biology and Genetic Improvement of Soybean, National Key Laboratory for Crop Genetics & Germplasm Enhancement and Utilization, Jiangsu Collaborative Innovation Center for Modern Crop Production, College of Agriculture, Nanjing Agricultural University, Nanjing, China
- Mingshuang Li
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Yulu Yan
- Soybean Research Institute, Ministry of Agriculture and Rural Affairs (MARA) National Center for Soybean Improvement, Ministry of Agriculture and Rural Affairs (MARA) Key Laboratory of Biology and Genetic Improvement of Soybean, National Key Laboratory for Crop Genetics & Germplasm Enhancement and Utilization, Jiangsu Collaborative Innovation Center for Modern Crop Production, College of Agriculture, Nanjing Agricultural University, Nanjing, China
- Wei Lu
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Guangnan Xing
- Soybean Research Institute, Ministry of Agriculture and Rural Affairs (MARA) National Center for Soybean Improvement, Ministry of Agriculture and Rural Affairs (MARA) Key Laboratory of Biology and Genetic Improvement of Soybean, National Key Laboratory for Crop Genetics & Germplasm Enhancement and Utilization, Jiangsu Collaborative Innovation Center for Modern Crop Production, College of Agriculture, Nanjing Agricultural University, Nanjing, China
4
Harandi N, Vandenberghe B, Vankerschaver J, Depuydt S, Van Messem A. How to make sense of 3D representations for plant phenotyping: a compendium of processing and analysis techniques. PLANT METHODS 2023; 19:60. [PMID: 37353846 DOI: 10.1186/s13007-023-01031-z] [Received: 10/18/2022] [Accepted: 05/19/2023] [Indexed: 06/25/2023]
Abstract
Computer vision technology is moving increasingly towards a three-dimensional approach, and plant phenotyping is following this trend. However, despite its potential, the complexity of analyzing 3D representations has been the main bottleneck hindering wider deployment of 3D plant phenotyping. In this review, we provide an overview of the typical steps for processing and analyzing 3D representations of plants, to offer potential users of 3D phenotyping a first gateway into its application and to stimulate its further development. We focus on plant phenotyping applications where the goal is to measure characteristics of single plants or crop canopies at a small scale in research settings, as opposed to large-scale crop monitoring in the field.
Affiliation(s)
- Negin Harandi
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Joris Vankerschaver
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Stephen Depuydt
- Erasmus Applied University of Sciences and Arts, Campus Kaai, Nijverheidskaai 170, Anderlecht, Belgium
- Arnout Van Messem
- Department of Mathematics, Université de Liège, Allée de la Découverte 12, Liège, Belgium.
5
Young TJ, Jubery TZ, Carley CN, Carroll M, Sarkar S, Singh AK, Singh A, Ganapathysubramanian B. "Canopy fingerprints" for characterizing three-dimensional point cloud data of soybean canopies. FRONTIERS IN PLANT SCIENCE 2023; 14:1141153. [PMID: 37063230 PMCID: PMC10090282 DOI: 10.3389/fpls.2023.1141153] [Received: 01/10/2023] [Accepted: 02/28/2023] [Indexed: 06/19/2023]
Abstract
Advances in imaging hardware allow high-throughput capture of the detailed three-dimensional (3D) structure of plant canopies. The point cloud data are typically post-processed to extract coarse-scale geometric features (such as volume, surface area, and height) for downstream analysis. We extend feature extraction from 3D point cloud data to various additional features, which we denote 'canopy fingerprints'. This is motivated by the successful application of molecular fingerprints in chemistry and acoustic fingerprints in sound engineering. We developed an end-to-end pipeline to generate canopy fingerprints from 3D point clouds of soybean [Glycine max (L.) Merr.] canopies grown in hill plots and captured by a terrestrial laser scanner (TLS). The pipeline includes noise removal, registration, and plot extraction, followed by canopy fingerprint generation. The canopy fingerprints are generated by splitting the data into multiple sub-canopy-scale components and extracting sub-canopy-scale geometric features. The generated canopy fingerprints are interpretable and can assist in identifying patterns in a database of canopies, querying similar canopies, or identifying canopies with a certain shape. The framework can be extended to other modalities (for instance, hyperspectral point clouds) and tuned to find the most informative fingerprint representation for downstream tasks. These canopy fingerprints can aid in the utilization of canopy traits at previously unutilized scales and therefore have applications in plant breeding and resilient crop production.
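The sub-canopy splitting idea can be sketched minimally in numpy, assuming horizontal slabs along the height axis and three simple geometric features per slab; the actual pipeline's split scheme and feature set may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in canopy cloud (x, y, z) in metres; a TLS scan would be far denser.
canopy = rng.uniform([0, 0, 0], [1.0, 1.0, 0.8], size=(5000, 3))

def canopy_fingerprint(points, n_slabs=5):
    """Split the cloud into horizontal slabs along z and record simple
    per-slab geometric features: point fraction, mean height, and the
    x/y extent product as a footprint proxy."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_slabs + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        slab = points[(z >= lo) & (z < hi)]
        if len(slab) == 0:
            feats += [0.0, 0.0, 0.0]
            continue
        feats += [len(slab) / len(points),
                  slab[:, 2].mean(),
                  np.ptp(slab[:, 0]) * np.ptp(slab[:, 1])]
    return np.array(feats)

fp = canopy_fingerprint(canopy)
print(fp.shape)  # (15,): 3 features for each of 5 slabs
```

The fixed-length vector is what makes fingerprints database-friendly: two canopies can be compared by any standard vector distance regardless of their raw point counts.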
Affiliation(s)
- Therin J. Young
- Department of Mechanical Engineering, Iowa State University, Ames, IA, United States
- Clayton N. Carley
- Department of Agronomy, Iowa State University, Ames, IA, United States
- Matthew Carroll
- Department of Agronomy, Iowa State University, Ames, IA, United States
- Soumik Sarkar
- Department of Mechanical Engineering, Iowa State University, Ames, IA, United States
- Translational AI Center, Iowa State University, Ames, IA, United States
- Asheesh K. Singh
- Department of Agronomy, Iowa State University, Ames, IA, United States
- Arti Singh
- Department of Agronomy, Iowa State University, Ames, IA, United States
- Baskar Ganapathysubramanian
- Department of Mechanical Engineering, Iowa State University, Ames, IA, United States
- Translational AI Center, Iowa State University, Ames, IA, United States
6
Li Y, Liu J, Zhang B, Wang Y, Yao J, Zhang X, Fan B, Li X, Hai Y, Fan X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. FRONTIERS IN PLANT SCIENCE 2022; 13:974339. [PMID: 36119622 PMCID: PMC9481285 DOI: 10.3389/fpls.2022.974339] [Received: 06/21/2022] [Accepted: 08/15/2022] [Indexed: 06/15/2023]
Abstract
As an important method for crop phenotype quantification, three-dimensional (3D) reconstruction is of critical importance for exploring the phenotypic characteristics of crops. In this study, maize seedlings were subjected to image-based 3D reconstruction and their phenotypic characters were analyzed. In the first stage, a multi-view image sequence was acquired via an RGB camera and a video frame extraction method, followed by 3D reconstruction of maize based on the structure from motion algorithm. Next, the original point cloud data were preprocessed with a Euclidean clustering algorithm, a color filtering algorithm, and a point cloud voxel filtering algorithm to obtain a point cloud model of maize. In the second stage, the phenotypic parameters of developing maize seedlings were analyzed: plant height, leaf length, relative leaf area, and leaf width measured from the point cloud were compared with the corresponding manually measured values, and the two were highly correlated, with coefficients of determination (R2) of 0.991, 0.989, 0.926, and 0.963, respectively. The errors between the two were also analyzed, and the results reflect that the proposed method is capable of rapid, accurate, and nondestructive extraction. In the third stage, maize stems and leaves were segmented and identified through a region growing segmentation algorithm, achieving the expected segmentation effect. In general, the proposed method can accurately construct the 3D morphology of maize plants, segment maize leaves, and nondestructively and accurately extract the phenotypic parameters of maize plants, thus providing data support for research on maize phenotypes.
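One of the named preprocessing steps, point cloud voxel filtering, thins a dense multi-view reconstruction while preserving its shape. A numpy sketch of a voxel-grid filter (the voxel size and the uniform test cloud are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
cloud = rng.uniform(0, 1, size=(10000, 3))  # stand-in for a dense maize cloud

def voxel_downsample(points, voxel=0.1):
    """Voxel-grid filter: replace all points that fall into the same
    cubic voxel by their centroid, reducing density while keeping the
    overall geometry."""
    idx = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    counts = np.bincount(inverse, minlength=n)
    return sums / counts[:, None]      # per-voxel centroids

thin = voxel_downsample(cloud, voxel=0.1)
print(len(thin) <= 1000)  # at most 10^3 voxels fit in the unit cube -> True
```

Using the centroid (rather than an arbitrary representative point) keeps the downsampled surface unbiased, which matters when lengths and areas are measured from the filtered cloud.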
Affiliation(s)
- Yuchao Li
- State Key Laboratory of North China Crop Improvement and Regulation, Baoding, China
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Jingyan Liu
- State Key Laboratory of North China Crop Improvement and Regulation, Baoding, China
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Bo Zhang
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Yonggang Wang
- Hebei Runtian Water-Saving Equipment Co., Ltd., Shijiazhuang, China
- Jingfa Yao
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Xuejing Zhang
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Baojiang Fan
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Xudong Li
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Yan Hai
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
- Xiaofei Fan
- State Key Laboratory of North China Crop Improvement and Regulation, Baoding, China
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
7
Gu J, Zhang Y, Yin Y, Wang R, Deng J, Zhang B. Surface Defect Detection of Cabbage Based on Curvature Features of 3D Point Cloud. FRONTIERS IN PLANT SCIENCE 2022; 13:942040. [PMID: 35909747 PMCID: PMC9331920 DOI: 10.3389/fpls.2022.942040] [Received: 05/12/2022] [Accepted: 06/14/2022] [Indexed: 05/25/2023]
Abstract
Dents and cracks in cabbage caused by mechanical damage during transportation have a direct impact on both commercial value and storage time. In this study, a method for surface defect detection of cabbage is proposed based on curvature features of the 3D point cloud. First, red-green-blue (RGB) images and depth images are collected using a RealSense D455 depth camera for 3D point cloud reconstruction. Then, the region of interest (ROI) is extracted by statistical filtering and a Euclidean clustering segmentation algorithm, segmenting the 3D point cloud of the cabbage from background noise. Next, the curvature features of the 3D point cloud are calculated from normal vectors estimated by least squares plane fitting. Finally, the curvature threshold is determined from the curvature characteristic parameters, and the surface defect type and area can be detected. Flat-headed and round-headed cabbages were selected to test detection of dent and crack damage. The test results show that the average detection accuracy of the proposed method is 96.25%, of which the average detection accuracy for dents is 93.3% and for cracks is 96.67%, suggesting high detection accuracy and good adaptability to various cabbages. This study provides important technical support for automatic and nondestructive detection of cabbage surface defects.
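The curvature feature can be sketched with local PCA: the smallest eigenvalue of each point's neighborhood covariance, relative to the eigenvalue sum (often called surface variation), is near zero on a smooth surface region and rises inside dents. A numpy/scipy sketch on a synthetic patch; the dent shape, neighborhood size, and 0.01 threshold are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
# Synthetic cabbage surface patch: a flat region with one Gaussian dent.
x, y = rng.uniform(-1, 1, size=(2, 4000))
z = -0.2 * np.exp(-(x**2 + y**2) / 0.02)
pts = np.column_stack([x, y, z])

def surface_variation(points, k=20):
    """Per-point curvature proxy: smallest eigenvalue of the local
    covariance over its trace (0 on a plane, larger on curved areas)."""
    _, nn = cKDTree(points).query(points, k=k)
    curv = np.empty(len(points))
    for i, idx in enumerate(nn):
        nb = points[idx] - points[idx].mean(axis=0)
        eig = np.linalg.eigvalsh(nb.T @ nb)
        curv[i] = eig[0] / eig.sum()
    return curv

curv = surface_variation(pts)
defect = curv > 0.01                   # illustrative curvature threshold
center = np.argmin(x**2 + y**2)        # a point inside the dent
rim = np.argmax(x**2 + y**2)           # a point on the flat border
print(bool(defect[center]), bool(defect[rim]))  # True False
```

Thresholding this ratio rather than raw curvature makes the flag scale-invariant, so the same cutoff can work across cabbages of different sizes.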
Affiliation(s)
- Jin Gu
- College of Engineering, China Agricultural University, Beijing, China
- Yawei Zhang
- College of Engineering, China Agricultural University, Beijing, China
- Yanxin Yin
- Research Center of Intelligent Equipment, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Research Center of Intelligent Equipment for Agriculture, Beijing, China
- Ruixue Wang
- Chinese Academy of Agricultural Mechanization Sciences Group Co., Ltd., Beijing, China
- Junwen Deng
- College of Engineering, China Agricultural University, Beijing, China
- Bin Zhang
- College of Engineering, China Agricultural University, Beijing, China
8
Okura F. 3D modeling and reconstruction of plants and trees: A cross-cutting review across computer graphics, vision, and plant phenotyping. BREEDING SCIENCE 2022; 72:31-47. [PMID: 36045890 PMCID: PMC8987840 DOI: 10.1270/jsbbs.21074] [Received: 08/31/2021] [Accepted: 11/26/2021] [Indexed: 06/15/2023]
Abstract
This paper reviews past and current trends in three-dimensional (3D) modeling and reconstruction of plants and trees. These topics have been studied in multiple research fields, including computer vision, graphics, plant phenotyping, and forestry; this paper therefore provides a cross-cutting review. Representations of plant shape and structure are summarized first, since every method for plant modeling and reconstruction is built on a shape/structure representation. The methods are then categorized into 1) creating non-existent plants (modeling) and 2) creating models from real-world plants (reconstruction). This paper also discusses the limitations of current methods and possible future directions.
Affiliation(s)
- Fumio Okura
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan