1
Peng X, Wang K, Zhang Z, Geng N, Zhang Z. A Point-Cloud Segmentation Network Based on SqueezeNet and Time Series for Plants. J Imaging 2023;9:258. PMID: 38132676; PMCID: PMC10743816; DOI: 10.3390/jimaging9120258.
Abstract
The phenotyping of plant growth enriches our understanding of intricate genetic characteristics, paving the way for advancements in modern breeding and precision agriculture. Within phenotyping, segmenting 3D point clouds of plant organs is the basis for extracting plant phenotypic parameters. In this study, we introduce a novel method for point-cloud downsampling that mitigates the challenges posed by sample imbalance. We then design a deep learning framework based on SqueezeNet for the segmentation of plant point clouds. In addition, we use the time series as input variables, which effectively improves the segmentation accuracy of the network. Building on the semantic segmentation, the MeanShift algorithm is employed to perform instance segmentation on the point-cloud data of crops. In semantic segmentation, the average Precision, Recall, F1-score, and IoU reached 99.35%, 99.26%, 99.30%, and 98.61% for maize, and 97.98%, 97.92%, 97.95%, and 95.98% for tomato. In instance segmentation, the accuracy reached 98.45% for maize and 96.12% for tomato. This research holds the potential to advance the fields of plant phenotypic extraction, ideotype selection, and precision agriculture.
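The instance-segmentation step described in this abstract clusters semantically labelled points into individual organs with MeanShift. A minimal sketch of that idea, using a hand-rolled flat-kernel mean shift in NumPy rather than the authors' implementation (the `bandwidth` value and the toy two-leaf point cloud are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50):
    """Flat-kernel mean shift: move each query point toward the mean of
    the original points within `bandwidth`, then merge converged modes."""
    shifted = points.copy()
    for _ in range(n_iter):
        for i in range(len(shifted)):
            near = points[np.linalg.norm(points - shifted[i], axis=1) < bandwidth]
            shifted[i] = near.mean(axis=0)
    # points that converged to the same mode get the same instance label
    labels = np.full(len(points), -1, dtype=int)
    modes = []
    for i, p in enumerate(shifted):
        for j, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels[i] = j
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels

# toy example: two well-separated "leaf" clusters of semantically
# labelled leaf points become two instances
rng = np.random.default_rng(0)
leaf_a = rng.normal([0.0, 0.0, 0.0], 0.1, (30, 3))
leaf_b = rng.normal([5.0, 0.0, 0.0], 0.1, (30, 3))
labels = mean_shift(np.vstack([leaf_a, leaf_b]), bandwidth=1.0)
```

In practice a library implementation (e.g. scikit-learn's `MeanShift`) would be used; the point is only that instance labels fall out of density modes with no need to fix the number of organs in advance.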
Affiliation(s)
- Nan Geng
- College of Information Engineering, Northwest A&F University, Yangling 712100, China; (X.P.)
2
Harandi N, Vandenberghe B, Vankerschaver J, Depuydt S, Van Messem A. How to make sense of 3D representations for plant phenotyping: a compendium of processing and analysis techniques. Plant Methods 2023;19:60. PMID: 37353846; DOI: 10.1186/s13007-023-01031-z.
Abstract
Computer vision technology is moving more and more towards a three-dimensional approach, and plant phenotyping is following this trend. However, despite its potential, the complexity of analysing 3D representations has been the main bottleneck hindering the wider deployment of 3D plant phenotyping. In this review, we provide an overview of the typical steps for processing and analysing 3D representations of plants, to offer potential users of 3D phenotyping a first gateway into its application, and to stimulate its further development. We focus on plant phenotyping applications where the goal is to measure characteristics of single plants or crop canopies on a small scale in research settings, as opposed to large-scale crop monitoring in the field.
Affiliation(s)
- Negin Harandi
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Joris Vankerschaver
- Center for Biosystems and Biotech Data Science, Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, South Korea
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Krijgslaan 281, S9, Ghent, Belgium
- Stephen Depuydt
- Erasmus Applied University of Sciences and Arts, Campus Kaai, Nijverheidskaai 170, Anderlecht, Belgium
- Arnout Van Messem
- Department of Mathematics, Université de Liège, Allée de la Découverte 12, Liège, Belgium.
3
Xin B, Sun J, Bartholomeus H, Kootstra G. 3D data-augmentation methods for semantic segmentation of tomato plant parts. Frontiers in Plant Science 2023;14:1045545. PMID: 37377799; PMCID: PMC10291624; DOI: 10.3389/fpls.2023.1045545.
Abstract
Introduction: 3D semantic segmentation of plant point clouds is an important step towards automatic plant phenotyping and crop modeling. Since traditional hand-designed methods for point-cloud processing face challenges in generalisation, current methods are based on deep neural networks that learn to perform the 3D segmentation from training data. However, these methods require a large annotated training set to perform well. For 3D semantic segmentation in particular, the collection of training data is highly labour-intensive and time-consuming. Data augmentation has been shown to improve training on small training sets, but it is unclear which data-augmentation methods are effective for 3D plant-part segmentation. Methods: Five novel data-augmentation methods (global cropping, brightness adjustment, leaf translation, leaf rotation, and leaf crossover) were proposed and compared to five existing methods (online downsampling, global jittering, global scaling, global rotation, and global translation). The methods were applied to PointNet++ for 3D semantic segmentation of the point clouds of three cultivars of tomato plants (Merlice, Brioso, and Gardener Delight). The point clouds were segmented into soil base, stick, stemwork, and other bio-structures. Results and discussion: Among the data-augmentation methods proposed in this paper, leaf crossover showed the most promising results, outperforming the existing methods. Leaf rotation (around the Z axis), leaf translation, and cropping also performed well on the 3D tomato plant point clouds, outperforming most of the existing methods apart from global jittering. The proposed 3D data-augmentation approaches significantly reduce the overfitting caused by limited training data. The improved plant-part segmentation further enables a more accurate reconstruction of the plant architecture.
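Two of the existing global augmentations compared in this abstract, rotation around the Z axis and jittering, are straightforward to sketch. This is a generic NumPy version, not the authors' code, and the `sigma`/`clip` defaults are common conventions rather than the paper's settings:

```python
import numpy as np

def global_rotation_z(points, max_deg=180.0, rng=None):
    """Rotate an (N, 3) point cloud by a random angle around the Z axis."""
    if rng is None:
        rng = np.random.default_rng()
    theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def global_jitter(points, sigma=0.01, clip=0.05, rng=None):
    """Add small clipped Gaussian noise to every point independently."""
    if rng is None:
        rng = np.random.default_rng()
    noise = np.clip(rng.normal(0.0, sigma, points.shape), -clip, clip)
    return points + noise
```

Rotation around Z leaves heights and horizontal radii unchanged, which is why it is a plausible augmentation for upright plants, while jittering simulates sensor noise without altering the plant's architecture.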
Affiliation(s)
- Bolai Xin
- Department of Plant Science, Wageningen University and Research, Wageningen, Netherlands
- Ji Sun
- Department of Plant Science, Wageningen University and Research, Wageningen, Netherlands
- Harm Bartholomeus
- Laboratory of Geo-Information Science and Remote Sensing, Wageningen University and Research, Wageningen, Netherlands
- Gert Kootstra
- Department of Plant Science, Wageningen University and Research, Wageningen, Netherlands
4
Daviet B, Fernandez R, Cabrera-Bosquet L, Pradal C, Fournier C. PhenoTrack3D: an automatic high-throughput phenotyping pipeline to track maize organs over time. Plant Methods 2022;18:130. PMID: 36482291; PMCID: PMC9730636; DOI: 10.1186/s13007-022-00961-4.
Abstract
BACKGROUND High-throughput phenotyping platforms allow the study of the form and function of a large number of genotypes subjected to different growing conditions (GxE). A number of image acquisition and processing pipelines have been developed to automate this process, for micro-plots in the field and for individual plants in controlled conditions. Capturing shoot development requires extracting from images both the evolution of the 3D plant architecture as a whole and a temporal tracking of the growth of its organs. RESULTS We propose PhenoTrack3D, a new pipeline to extract a 3D + t reconstruction of maize. It allows the study of plant architecture and individual organ development over time during the entire growth cycle. The method tracks the development of each organ from a time series of plants whose organs have already been segmented in 3D using existing methods, such as Phenomenal [Artzet et al. in BioRxiv 1:805739, 2019], which was chosen in this study. First, a novel stem detection method based on deep learning is used to precisely locate the point of separation between ligulated and growing leaves. Second, a new and original multiple sequence alignment algorithm was developed to perform the temporal tracking of ligulated leaves, which have a consistent geometry over time and an unambiguous topological position. Finally, growing leaves are back-tracked with a distance-based approach. This pipeline is validated on a challenging dataset of 60 maize hybrids imaged daily from emergence to maturity in the PhenoArch platform (ca. 250,000 images). The stem tip was precisely detected over time (RMSE < 2.1 cm). 97.7% of ligulated leaves and 85.3% of growing leaves were assigned to the correct rank after tracking, on 30 plants × 43 dates. The pipeline allowed the extraction of various development and architecture traits at organ level, with good overall correlation to manual observations, on random subsets of 10-355 plants.
CONCLUSIONS We developed a novel phenotyping method based on sequence alignment and deep learning. It allows the development of maize architecture to be characterised at organ level, automatically and at high throughput. It has been validated on hundreds of plants during the entire development cycle, showing its applicability to GxE analyses of large maize datasets.
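The sequence-alignment idea behind the leaf tracking described above generalises the classic pairwise Needleman-Wunsch dynamic program. A toy pairwise version for two days of leaf insertion heights follows; the scoring (similarity as negative height difference, unit gap penalty) and the example heights are illustrative choices, not PhenoTrack3D's actual multiple-alignment scoring:

```python
import numpy as np

def needleman_wunsch(a, b, gap=-1.0):
    """Globally align two sequences of leaf heights; returns matched
    (index-in-a, index-in-b) pairs. Similarity = -|height difference|."""
    def sim(x, y):
        return -abs(x - y)
    n, m = len(a), len(b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = np.arange(n + 1) * gap   # leading gaps in b
    score[0, :] = np.arange(m + 1) * gap   # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i, j] = max(score[i - 1, j - 1] + sim(a[i - 1], b[j - 1]),
                              score[i - 1, j] + gap,
                              score[i, j - 1] + gap)
    # traceback to recover which leaves were matched across days
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i, j] == score[i - 1, j - 1] + sim(a[i - 1], b[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif score[i, j] == score[i - 1, j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# day 2 has one new leaf at the top; existing leaves keep their ranks
pairs = needleman_wunsch([10.0, 20.0, 30.0], [11.0, 21.0, 31.0, 40.0])
# → [(0, 0), (1, 1), (2, 2)]
```

The appeal of alignment over nearest-neighbour matching is that it enforces order: a leaf can never be matched below a leaf it was above on the previous day.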
Affiliation(s)
- Benoit Daviet
- LEPSE, Univ Montpellier, INRAE, Institut Agro, Montpellier, France
- Romain Fernandez
- CIRAD, UMR AGAP Institut, 34398, Montpellier, France
- CIRAD, INRAE, UMR AGAP Institut, Univ Montpellier, Institut Agro, 34398, Montpellier, France
- Christophe Pradal
- CIRAD, UMR AGAP Institut, 34398, Montpellier, France.
- CIRAD, INRAE, UMR AGAP Institut, Univ Montpellier, Institut Agro, 34398, Montpellier, France.
- Inria & LIRMM, CNRS, Univ Montpellier, Montpellier, France.
5
Li D, Li J, Xiang S, Pan A. PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds of Plants. Plant Phenomics 2022;2022:9787643. PMID: 35693119; PMCID: PMC9157368; DOI: 10.34133/2022/9787643.
Abstract
Phenotyping of plant growth improves the understanding of complex genetic traits and eventually expedites the development of modern breeding and intelligent agriculture. In phenotyping, segmentation of 3D point clouds of plant organs such as leaves and stems contributes to automatic growth monitoring and reflects the extent of stress received by the plant. In this work, we first proposed Voxelized Farthest Point Sampling (VFPS), a novel point-cloud downsampling strategy, to prepare our plant dataset for the training of deep neural networks. Then, a deep learning network, PSegNet, was specially designed for segmenting point clouds of several plant species. The effectiveness of PSegNet originates from three new modules: the Double-Neighborhood Feature Extraction Block (DNFEB), the Double-Granularity Feature Fusion Module (DGFFM), and the Attention Module (AM). After training on the plant dataset prepared with VFPS, the network can simultaneously perform semantic segmentation and leaf instance segmentation for three plant species. Compared to several mainstream networks such as PointNet++, ASIS, SGPN, and PlantNet, PSegNet obtained the best segmentation results both quantitatively and qualitatively. In semantic segmentation, PSegNet achieved 95.23%, 93.85%, 94.52%, and 89.90% for the mean Prec, Rec, F1, and IoU, respectively. In instance segmentation, PSegNet achieved 88.13%, 79.28%, 83.35%, and 89.54% for the mPrec, mRec, mCov, and mWCov, respectively.
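The VFPS strategy named in this abstract combines voxelization with farthest point sampling (FPS). The FPS half is sketched below in plain NumPy; the voxel pre-partitioning that distinguishes VFPS is the paper's contribution and is not reproduced here:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: start from a random point, then repeatedly pick the
    point farthest from everything chosen so far. Returns k indices."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    # distance from every point to its nearest chosen point
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

# downsampling the 8 corners of a unit cube to 4 points keeps them spread out
cube = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
sel = farthest_point_sampling(cube, 4)
```

FPS preserves geometric coverage better than random sampling, which is why it is a common preprocessing step before networks like PointNet++; its O(nk) cost is the usual motivation for accelerations such as voxelization.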
Affiliation(s)
- Dawei Li
- State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
- Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
- Jinsheng Li
- College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
- Shiyu Xiang
- College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
- Anqi Pan
- Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
- College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
6
Okura F. 3D modeling and reconstruction of plants and trees: A cross-cutting review across computer graphics, vision, and plant phenotyping. Breeding Science 2022;72:31-47. PMID: 36045890; PMCID: PMC8987840; DOI: 10.1270/jsbbs.21074.
Abstract
This paper reviews past and current trends in three-dimensional (3D) modeling and reconstruction of plants and trees. These topics have been studied in multiple research fields, including computer vision, graphics, plant phenotyping, and forestry; this paper therefore provides a cross-cutting review. Representations of plant shape and structure are first summarized, since every method for plant modeling and reconstruction is based on a shape/structure representation. The methods are then categorized into 1) creating non-existent plants (modeling) and 2) creating models from real-world plants (reconstruction). This paper also discusses the limitations of current methods and possible future directions.
Affiliation(s)
- Fumio Okura
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan