1
Kienbaum L, Correa Abondano M, Blas R, Schmid K. DeepCob: precise and high-throughput analysis of maize cob geometry using deep learning with an application in genebank phenomics. Plant Methods 2021; 17:91. PMID: 34419093; PMCID: PMC8379755; DOI: 10.1186/s13007-021-00787-6. Received: 03/17/2021; Accepted: 07/30/2021.
Abstract
BACKGROUND Maize cobs are an important component of crop yield and exhibit high diversity in size, shape and color across native landraces and modern varieties. Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) have become available and have proven highly useful for high-throughput plant phenotyping. We compared classical image segmentation with deep learning methods for maize cob image segmentation and phenotyping, using a large image dataset of native maize landrace diversity from Peru.
RESULTS A comparison of three image analysis methods showed that a Mask R-CNN trained on a diverse set of maize cob images was clearly superior to classical image analysis using the Felzenszwalb-Huttenlocher algorithm and to a window-based CNN, owing to its robustness to image quality and its object segmentation accuracy ([Formula: see text]). We integrated Mask R-CNN into a high-throughput pipeline that segments both maize cobs and rulers in images and performs an automated quantitative analysis of eight phenotypic traits: diameter, length, ellipticity, asymmetry, aspect ratio and the average values of the red, green and blue color channels for cob color. Statistical analysis identified key training parameters for efficient iterative model updating. We also show that as few as 10-20 images suffice to update the initial Mask R-CNN model to process new types of cob images. To demonstrate an application of the pipeline, we analyzed phenotypic variation in 19,867 maize cobs extracted from 3449 images of 2484 accessions from the maize genebank of Peru, using multivariate clustering to identify phenotypically homogeneous and heterogeneous genebank accessions.
CONCLUSIONS The single Mask R-CNN model and the associated analysis pipeline are widely applicable tools for maize cob phenotyping in contexts such as genebank phenomics and plant breeding.
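The traits listed above are derived from the segmentation masks produced by Mask R-CNN. As a rough illustration of the idea (not the paper's actual code; the function name and the simplified trait definitions, e.g. diameter as the median row-wise mask width, are assumptions), geometric and color traits can be read off a binary mask and its RGB image like this:

```python
import numpy as np

def cob_traits(mask, rgb):
    """Toy geometric and color traits from a boolean cob mask and its RGB image."""
    ys, xs = np.nonzero(mask)
    length = int(ys.max() - ys.min() + 1)      # extent along the vertical axis
    widths = np.bincount(ys - ys.min())        # pixels per mask row
    diameter = float(np.median(widths[widths > 0]))  # robust to tip tapering
    aspect_ratio = length / diameter
    mean_rgb = rgb[mask].mean(axis=0)          # average R, G, B inside the mask
    return {"length": length, "diameter": diameter,
            "aspect_ratio": aspect_ratio, "mean_rgb": mean_rgb}

# toy example: a 30x10 rectangular "cob" in a 40x20 image
mask = np.zeros((40, 20), dtype=bool)
mask[5:35, 5:15] = True
rgb = np.zeros((40, 20, 3), dtype=float)
rgb[..., 0] = 0.8                              # reddish cob
traits = cob_traits(mask, rgb)
print(traits["length"], traits["diameter"])    # 30 10.0
```

The pixel-unit traits would then be converted to physical units using the ruler that the pipeline segments alongside the cobs.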
Affiliation(s)
- Lydia Kienbaum
- Institute of Plant Breeding, Seed Science and Population Genetics, University of Hohenheim, Stuttgart, Germany
- Miguel Correa Abondano
- Institute of Plant Breeding, Seed Science and Population Genetics, University of Hohenheim, Stuttgart, Germany
- Raul Blas
- Universidad Nacional Agraria La Molina (UNALM), Lima, Peru
- Karl Schmid
- Institute of Plant Breeding, Seed Science and Population Genetics, University of Hohenheim, Stuttgart, Germany
- Computational Science Lab, University of Hohenheim, Stuttgart, Germany
2
Jiang Y, Li C, Xu R, Sun S, Robertson JS, Paterson AH. DeepFlower: a deep learning-based approach to characterize flowering patterns of cotton plants in the field. Plant Methods 2020; 16:156. PMID: 33372635; PMCID: PMC7720604; DOI: 10.1186/s13007-020-00698-y. Received: 11/11/2019; Accepted: 11/24/2020.
Abstract
BACKGROUND Flowering is one of the most important processes in flowering plants such as cotton, marking the transition from vegetative to reproductive growth, and is of central importance to crop yield and adaptability. Conventionally, categorical scoring systems have been widely used to study flowering patterns, but they are laborious and subjective to apply. The goal of this study was to develop a deep learning-based approach to characterize flowering patterns of cotton plants, which flower progressively over several weeks with flowers distributed across much of the plant.
RESULTS A ground mobile system (GPhenoVision) was modified with a multi-view color imaging module to acquire images of a plant from four viewing angles at a time. A total of 116 plants from 23 genotypes were imaged over an approximately 2-month period with an average scanning interval of 2-3 days, yielding a dataset of 8666 images. A subset of 475 images was randomly selected and manually annotated to form datasets for training and for selecting the best object detection model. With the best model, a deep learning-based approach (DeepFlower) was developed to detect and count individual emerging blooms on a plant on a given date. DeepFlower was used to process all images to obtain bloom counts for individual plants over the flowering period, and the resulting counts were used to derive flowering curves and, from them, flowering characteristics. Regression analyses showed that DeepFlower could accurately (R2 = 0.88 and RMSE = 0.79) detect and count emerging blooms on cotton plants, and statistical analyses showed that the imaging-derived flowering characteristics were as effective as manual assessments for identifying differences among genetic categories or genotypes.
CONCLUSIONS The developed approach could thus be an effective and efficient tool to characterize flowering patterns of flowering plants (such as cotton) with complex canopy architecture.
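The step from per-image bloom detections to flowering curves is simple bookkeeping: sum the detector's bloom counts per plant per date, then accumulate them over the flowering period. A minimal sketch of that aggregation (the detection tuples, plant IDs and dates are invented placeholders, not data from the study):

```python
from collections import defaultdict

# hypothetical detector output: (plant_id, date, n_emerging_blooms)
detections = [
    ("p1", "2019-07-01", 2), ("p1", "2019-07-03", 5),
    ("p1", "2019-07-05", 3), ("p2", "2019-07-03", 4),
]

def flowering_curves(detections):
    """Cumulative bloom count per plant by date, i.e. a flowering curve."""
    counts = defaultdict(dict)
    for plant, date, n in detections:
        counts[plant][date] = counts[plant].get(date, 0) + n
    curves = {}
    for plant, by_date in counts.items():
        total, curve = 0, []
        for date in sorted(by_date):           # ISO dates sort chronologically
            total += by_date[date]
            curve.append((date, total))
        curves[plant] = curve
    return curves

curves = flowering_curves(detections)
print(curves["p1"][-1])   # ('2019-07-05', 10)
```

Characteristics such as onset date, peak flowering rate or total bloom count can then be read off each curve.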
Affiliation(s)
- Yu Jiang
- Horticulture Section, School of Integrative Plant Science, Cornell AgriTech, Cornell University, Geneva, NY, 14456, USA
- Phenomics and Plant Robotics Center/College of Engineering, The University of Georgia, Athens, GA, 30602, USA
- Changying Li
- Phenomics and Plant Robotics Center/College of Engineering, The University of Georgia, Athens, GA, 30602, USA
- College of Agricultural & Environmental Sciences, The University of Georgia, Athens, GA, 30602, USA
- Rui Xu
- Phenomics and Plant Robotics Center/College of Engineering, The University of Georgia, Athens, GA, 30602, USA
- Shangpeng Sun
- Phenomics and Plant Robotics Center/College of Engineering, The University of Georgia, Athens, GA, 30602, USA
- Jon S Robertson
- College of Agricultural & Environmental Sciences, The University of Georgia, Athens, GA, 30602, USA
- Andrew H Paterson
- College of Agricultural & Environmental Sciences, The University of Georgia, Athens, GA, 30602, USA
- Franklin College of Arts and Sciences, The University of Georgia, Athens, GA, 30602, USA
3
Lozano-Claros D, Meng X, Custovic E, Deng G, Berkowitz O, Whelan J, Lewsey MG. Developmental normalization of phenomics data generated by high throughput plant phenotyping systems. Plant Methods 2020; 16:111. PMID: 32817754; PMCID: PMC7424680; DOI: 10.1186/s13007-020-00653-x. Received: 05/18/2020; Accepted: 08/06/2020.
Abstract
BACKGROUND Sowing time is commonly used as the temporal reference for Arabidopsis thaliana (Arabidopsis) experiments in high-throughput plant phenotyping (HTPP) systems. This relies on the assumption that germination and seedling establishment are uniform across the population. However, individual seeds follow different developmental trajectories even under uniform environmental conditions, which inflates the variance of quantitative phenotyping approaches. We developed the Digital Adjustment of Plant Development (DAPD) normalization method. It normalizes time-series HTPP measurements against an early developmental stage in an automated manner: the timeline of each measurement series is shifted to a reference time, with the shift determined by cross-correlation at multiple time points of the time-series measurements, which may include rosette area, leaf size and leaf number.
RESULTS The DAPD method improved the accuracy of phenotyping measurements by decreasing the statistical dispersion of quantitative traits across a time series. We applied DAPD to evaluate the relative growth rate of Arabidopsis plants and demonstrated that it improves uniformity in measurements, permitting a more informative comparison between individuals. Applying DAPD decreased the variance of phenotyping measurements by up to 2.5-fold compared with sowing-time normalization. The DAPD method also identified more outliers than any central tendency technique applied to the non-normalized dataset.
CONCLUSIONS DAPD is an effective method to control for temporal differences in development within plant phenotyping datasets. In principle, it can be applied to HTPP data from any species/trait combination for which a relevant developmental scale can be defined.
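The core of a cross-correlation timeline shift can be sketched in a few lines: find the lag at which a plant's trait series best matches a reference series, then shift that plant's timeline by the lag. This is only an illustration of the principle, not the DAPD implementation (the function name and the synthetic Gaussian growth curves are invented for the example):

```python
import numpy as np

def best_lag(series, reference):
    """Lag (in sampling steps) that best aligns a plant's trait series
    with a reference series, via the peak of their cross-correlation."""
    xcorr = np.correlate(series, reference, mode="full")
    return int(np.argmax(xcorr)) - (len(reference) - 1)

# toy trait curves sampled at 30 time points: the second plant follows
# the same trajectory but establishes 3 sampling steps later
t = np.arange(30, dtype=float)
reference = np.exp(-0.5 * ((t - 10.0) / 3.0) ** 2)
late = np.exp(-0.5 * ((t - 13.0) / 3.0) ** 2)
lag = best_lag(late, reference)
print(lag)  # 3 -> shift the late plant's timeline back by 3 steps
```

DAPD combines such lags across multiple traits (e.g., rosette area and leaf number) rather than relying on a single series.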
Affiliation(s)
- Diego Lozano-Claros
- Department of Animal, Plant and Soil Science, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
- Department of Engineering, School of Engineering and Mathematical Sciences, La Trobe University, Melbourne, VIC 3086, Australia
- Xiangxiang Meng
- Department of Animal, Plant and Soil Science, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
- Australian Research Council Centre of Excellence in Plant Energy Biology, La Trobe University, Bundoora, VIC 3086, Australia
- Currently: Key Laboratory of Biofuels, Shandong Provincial Key Laboratory of Energy Genetics, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, Qingdao, 266101, China
- Eddie Custovic
- Department of Engineering, School of Engineering and Mathematical Sciences, La Trobe University, Melbourne, VIC 3086, Australia
- Guang Deng
- Department of Engineering, School of Engineering and Mathematical Sciences, La Trobe University, Melbourne, VIC 3086, Australia
- Oliver Berkowitz
- Department of Animal, Plant and Soil Science, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
- Australian Research Council Research Hub for Medicinal Agriculture, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
- Australian Research Council Centre of Excellence in Plant Energy Biology, La Trobe University, Bundoora, VIC 3086, Australia
- James Whelan
- Department of Animal, Plant and Soil Science, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
- Australian Research Council Research Hub for Medicinal Agriculture, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
- Australian Research Council Centre of Excellence in Plant Energy Biology, La Trobe University, Bundoora, VIC 3086, Australia
- Mathew G. Lewsey
- Department of Animal, Plant and Soil Science, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
- Australian Research Council Research Hub for Medicinal Agriculture, AgriBio Building, La Trobe University, Bundoora, VIC 3086, Australia
4
Henke M, Junker A, Neumann K, Altmann T, Gladilin E. Comparison and extension of three methods for automated registration of multimodal plant images. Plant Methods 2019; 15:44. PMID: 31168314; PMCID: PMC6487531; DOI: 10.1186/s13007-019-0426-8. Received: 09/15/2018; Accepted: 04/17/2019.
Abstract
With the introduction of high-throughput multisensory imaging platforms, the automation of multimodal image analysis has become a focus of quantitative plant research. Due to a number of natural and technical factors (e.g., inhomogeneous scene illumination, shadows and reflections), unsupervised identification of relevant plant structures (i.e., image segmentation) is a nontrivial task that often requires extensive human-machine interaction. Registration of multimodal plant images enables automated segmentation of 'difficult' image modalities, such as visible light or near-infrared images, using the segmentation results of modalities that exhibit higher contrast between plant and background regions (such as fluorescence images). Furthermore, registration of different image modalities is essential for assessing a consistent multiparametric plant phenotype in which, for example, chlorophyll and water content as well as disease- and/or stress-related pigmentation can be studied simultaneously at a local scale. To automatically register thousands of images, efficient algorithmic solutions for the unsupervised alignment of two structurally similar but, in general, nonidentical images are required. Different algorithmic approaches based on different image features have been proposed for establishing image correspondences. The particularity of plant image analysis, however, lies in the large variability of shapes and colors of different plants measured at different developmental stages from different views. While adult plant shoots typically have a unique structure, young shoots may have a nonspecific shape that can hardly be distinguished from background structures. Consequently, it is not clear a priori which image features and registration techniques are suitable for the alignment of diverse multimodal plant images. Furthermore, dynamically measured plants may exhibit nonuniform movements that require nonrigid registration techniques. Here, we investigate three common techniques for registering visible light and fluorescence images that rely on finding correspondences between (i) feature points, (ii) frequency-domain features and (iii) image intensity information. The performance of the registration methods is validated in terms of robustness and accuracy, measured by direct comparison with manually segmented images of different plants. Our experimental results show that all three techniques are sensitive to structural image distortions and require additional preprocessing steps, including structural enhancement and characteristic scale selection. To overcome the limitations of the conventional approaches, we develop an iterative algorithmic scheme that performs both rigid and slightly nonrigid registration of high-throughput plant images in a fully automated manner.
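Of the three families of techniques compared, the frequency-domain one is the easiest to sketch: classical phase correlation recovers a pure translation between two images from the phase of their cross-power spectrum. This is a generic textbook sketch, not the paper's implementation, and it handles only integer translations of same-size images:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) by which image b is translated
    relative to image a, via the normalized cross-power spectrum."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12             # keep phase only
    corr = np.fft.ifft2(F).real        # sharp peak at the translation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (5, -7), axis=(0, 1))   # b is a shifted by (5, -7)
print(phase_correlation_shift(a, b))   # (5, -7)
```

Feature-point and intensity-based methods can additionally estimate rotation, scaling and nonrigid deformations, which is why the abstract's iterative scheme goes beyond plain phase correlation.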
Affiliation(s)
- Michael Henke
- Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), OT Gatersleben, Corrensstrasse 3, 06466 Seeland, Germany
- Astrid Junker
- Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), OT Gatersleben, Corrensstrasse 3, 06466 Seeland, Germany
- Kerstin Neumann
- Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), OT Gatersleben, Corrensstrasse 3, 06466 Seeland, Germany
- Thomas Altmann
- Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), OT Gatersleben, Corrensstrasse 3, 06466 Seeland, Germany
- Evgeny Gladilin
- Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), OT Gatersleben, Corrensstrasse 3, 06466 Seeland, Germany