1. Peng X, Wang K, Zhang Z, Geng N, Zhang Z. A Point-Cloud Segmentation Network Based on SqueezeNet and Time Series for Plants. J Imaging 2023; 9:258. [PMID: 38132676; PMCID: PMC10743816; DOI: 10.3390/jimaging9120258]
Abstract
The phenotyping of plant growth enriches our understanding of intricate genetic characteristics, paving the way for advancements in modern breeding and precision agriculture. Within the domain of phenotyping, segmenting 3D point clouds of plant organs is the basis of extracting plant phenotypic parameters. In this study, we introduce a novel method for point-cloud downsampling that adeptly mitigates the challenges posed by sample imbalances. We then design a deep-learning framework for the segmentation of plant point clouds, founded on the principles of SqueezeNet. In addition, we use the time series as an input variable, which effectively improves the segmentation accuracy of the network. Building on the semantic segmentation, the MeanShift algorithm is employed to perform instance segmentation on the point-cloud data of crops. In semantic segmentation, the average Precision, Recall, F1-score, and IoU reached 99.35%, 99.26%, 99.30%, and 98.61% for maize, and 97.98%, 97.92%, 97.95%, and 95.98% for tomato. In instance segmentation, the accuracy reached 98.45% for maize and 96.12% for tomato. This research holds the potential to advance the fields of plant phenotypic extraction, ideotype selection, and precision agriculture.
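The two-stage idea in this abstract, semantic labels first and then clustering of point coordinates into organ instances, can be sketched with scikit-learn's MeanShift. This is a hypothetical minimal illustration on synthetic blobs, not the paper's network or data; the bandwidth value is an assumption that would need tuning to organ size.

```python
import numpy as np
from sklearn.cluster import MeanShift

# After semantic segmentation assigns a class (e.g. "leaf") to each point,
# points of that class are grouped into instances by clustering their 3D
# coordinates. Two synthetic "leaves": point blobs at different locations.
rng = np.random.default_rng(0)
leaf_a = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.05, size=(200, 3))
leaf_b = rng.normal(loc=[1.0, 1.0, 0.5], scale=0.05, size=(200, 3))
points = np.vstack([leaf_a, leaf_b])

# bandwidth controls the kernel radius of the mode-seeking procedure.
ms = MeanShift(bandwidth=0.3)
labels = ms.fit_predict(points)

n_instances = len(set(labels))  # each cluster label is one organ instance
```

With two well-separated blobs, MeanShift recovers two instances without being told the cluster count in advance, which is why it suits plants whose organ number varies.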
Affiliation(s)
- Nan Geng
- College of Information Engineering, Northwest A&F University, Yangling 712100, China; (X.P.)
2. Daviet B, Fernandez R, Cabrera-Bosquet L, Pradal C, Fournier C. PhenoTrack3D: an automatic high-throughput phenotyping pipeline to track maize organs over time. Plant Methods 2022; 18:130. [PMID: 36482291; PMCID: PMC9730636; DOI: 10.1186/s13007-022-00961-4]
Abstract
BACKGROUND High-throughput phenotyping platforms allow the study of the form and function of a large number of genotypes subjected to different growing conditions (GxE). A number of image acquisition and processing pipelines have been developed to automate this process, for micro-plots in the field and for individual plants in controlled conditions. Capturing shoot development requires extracting from images both the evolution of the 3D plant architecture as a whole and a temporal tracking of the growth of its organs. RESULTS We propose PhenoTrack3D, a new pipeline to extract a 3D + t reconstruction of maize. It allows the study of plant architecture and individual organ development over time during the entire growth cycle. The method tracks the development of each organ from a time series of plants whose organs have already been segmented in 3D using existing methods, such as Phenomenal [Artzet et al. in BioRxiv 1:805739, 2019], which was chosen in this study. First, a novel stem detection method based on deep learning is used to locate precisely the point of separation between ligulated and growing leaves. Second, a new multiple sequence alignment algorithm has been developed to perform the temporal tracking of ligulated leaves, which have a consistent geometry over time and an unambiguous topological position. Finally, growing leaves are back-tracked with a distance-based approach. This pipeline is validated on a challenging dataset of 60 maize hybrids imaged daily from emergence to maturity in the PhenoArch platform (ca. 250,000 images). The stem tip was precisely detected over time (RMSE < 2.1 cm). 97.7% of ligulated leaves and 85.3% of growing leaves were assigned to the correct rank after tracking, on 30 plants × 43 dates. The pipeline allowed the extraction of various development and architecture traits at the organ level, with good correlation to manual observations overall, on random subsets of 10-355 plants.
CONCLUSIONS We developed a novel phenotyping method based on sequence alignment and deep learning. It characterises the development of maize architecture at the organ level, automatically and at high throughput. It has been validated on hundreds of plants during the entire development cycle, showing its applicability to GxE analyses of large maize datasets.
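The leaf-tracking step can be illustrated with a toy pairwise alignment: a minimal Needleman-Wunsch sketch that matches leaves between two dates by their insertion heights. This is an assumed simplification (PhenoTrack3D aligns across all dates at once), and the heights and gap cost used here are hypothetical.

```python
# Global alignment of two leaf sequences: matching two leaves costs the
# difference of their insertion heights; skipping a leaf costs `gap`.
def align(a, b, gap=1.0):
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j - 1] + abs(a[i - 1] - b[j - 1]),  # match
                          D[i - 1][j] + gap,                           # leaf lost
                          D[i][j - 1] + gap)                           # new leaf
    # Backtrack to recover the matched leaf pairs (same rank across dates).
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if D[i][j] == D[i - 1][j - 1] + abs(a[i - 1] - b[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif D[i][j] == D[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# Insertion heights (cm) at date t and t+1; one new leaf appears on top.
pairs = align([10.0, 25.0, 40.0], [10.5, 25.5, 40.5, 55.0])
# pairs -> [(0, 0), (1, 1), (2, 2)]; the fourth leaf at t+1 is unmatched.
```

The unmatched fourth leaf is exactly the "new organ appeared" case that a rank-tracking pipeline must handle without shifting the ranks of existing leaves.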
Affiliation(s)
- Benoit Daviet
- LEPSE, Univ Montpellier, INRAE, Institut Agro, Montpellier, France
- Romain Fernandez
- CIRAD, UMR AGAP Institut, 34398, Montpellier, France
- CIRAD, INRAE, UMR AGAP Institut, Univ Montpellier, Institut Agro, 34398, Montpellier, France
- Christophe Pradal
- CIRAD, UMR AGAP Institut, 34398, Montpellier, France
- CIRAD, INRAE, UMR AGAP Institut, Univ Montpellier, Institut Agro, 34398, Montpellier, France
- Inria & LIRMM, CNRS, Univ Montpellier, Montpellier, France
3. Deb M, Garai A, Das A, Dhal KG. LS-Net: a convolutional neural network for leaf segmentation of rosette plants. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07479-9]
4. Bhagat S, Kokare M, Haswani V, Hambarde P, Kamble R. Eff-UNet++: A novel architecture for plant leaf segmentation and counting. Ecol Inform 2022. [DOI: 10.1016/j.ecoinf.2022.101583]
5. Automatic leaf segmentation and overlapping leaf separation using stereo vision. Array 2021. [DOI: 10.1016/j.array.2021.100099]
6. Kolhar S, Jagtap J. Convolutional neural network based encoder-decoder architectures for semantic segmentation of plants. Ecol Inform 2021. [DOI: 10.1016/j.ecoinf.2021.101373]
7. Dar ZA, Dar SA, Khan JA, Lone AA, Langyan S, Lone BA, Kanth RH, Iqbal A, Rane J, Wani SH, Alfarraj S, Alharbi SA, Brestic M, Ansari MJ. Identification for surrogate drought tolerance in maize inbred lines utilizing high-throughput phenomics approach. PLoS One 2021; 16:e0254318. [PMID: 34314420; PMCID: PMC8315520; DOI: 10.1371/journal.pone.0254318]
Abstract
Screening for drought tolerance requires precise techniques like phenomics, an emerging science aimed at non-destructive methods allowing large-scale screening of genotypes. Large-scale screening complements genomic efforts to identify genes relevant for crop improvement. Thirty maize inbred lines from various sources (exotic and indigenous) maintained at the Dryland Agriculture Research Station were used in the current study. In the automated plant transport and imaging system (LemnaTec Scanalyzer system for large plants), top- and side-view images were taken in the VIS (visible) and NIR (near-infrared) ranges of the light spectrum to capture phenes; images were also obtained with a thermal imager. All sensors were used to collect images for 11 days, starting one day after shifting the pots from the greenhouse. Image processing involved pre-processing and segmentation, followed by feature extraction. Different surrogate traits such as pixel area, plant aspect ratio, convex hull ratio, and calliper length were estimated. A strong association was found between canopy temperature and above-ground biomass under stress conditions. Promising lines identified through the different surrogate traits will be utilized in breeding programmes to develop mapping populations for traits of interest related to drought resilience, in terms of improved tissue water status, and to map genes/QTLs for drought traits.
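Two of the surrogate traits named above can be computed directly from a binary top-view plant mask. A minimal sketch with a hypothetical 8x8 mask follows; real pipelines first segment the mask from the VIS images, and the bounding-box aspect ratio used here is one common convention among several.

```python
import numpy as np

# Hypothetical binary top-view mask: True where plant pixels were segmented.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 1:7] = True          # a 4-row x 6-column plant blob

pixel_area = int(mask.sum())   # projected shoot area in pixels

ys, xs = np.nonzero(mask)
height = ys.max() - ys.min() + 1
width = xs.max() - xs.min() + 1
aspect_ratio = height / width  # bounding-box aspect ratio of the shoot
# pixel_area == 24, aspect_ratio == 4/6
```

Convex hull ratio and calliper length would follow the same pattern (hull area over pixel area, and maximum pairwise pixel distance, respectively) on the same mask.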
Affiliation(s)
- Zahoor A Dar
- Dryland Agricultural Research Station, Sher-e-Kashmir University of Agricultural Sciences & Technology-Kashmir, Rangreth Srinagar, Jammu and Kashmir, India
- Showket A Dar
- Department of Entomology, Sher-e-Kashmir University of Agricultural Sciences & Technology of Kashmir, Srinagar-Kargil, Ladakh, India
- Jameel A Khan
- Department of Biotechnology, University of Agricultural Sciences, Bangalore, India
- Ajaz A Lone
- Dryland Agricultural Research Station, Sher-e-Kashmir University of Agricultural Sciences & Technology-Kashmir, Rangreth Srinagar, Jammu and Kashmir, India
- Sapna Langyan
- ICAR-National Bureau for Plant Genetic Resources, New Delhi, India
- B A Lone
- Department of Agronomy, Sher-e-Kashmir University of Agricultural Sciences & Technology-Kashmir, Srinagar, Jammu and Kashmir, India
- R H Kanth
- Department of Agronomy, Sher-e-Kashmir University of Agricultural Sciences & Technology-Kashmir, Wadura Sopore, Jammu and Kashmir, India
- Asif Iqbal
- Department of Soil Science, Sher-e-Kashmir University of Agricultural Sciences & Technology-Kashmir, Srinagar, Jammu and Kashmir, India
- Jagdish Rane
- Department of Drought Science, ICAR-NIASM, Baramati, New Delhi, India
- Shabir H Wani
- MRCFCF, Sher-e-Kashmir University of Agricultural Sciences & Technology-Kashmir, Srinagar, Jammu and Kashmir, India
- Saleh Alfarraj
- Zoology Department, College of Science, King Saud University, Riyadh, Saudi Arabia
- Sulaiman Ali Alharbi
- Department of Botany & Microbiology, College of Science, King Saud University, Riyadh, Saudi Arabia
- Marian Brestic
- Department of Plant Physiology, Slovak University of Agriculture, Nitra, Slovakia
- Mohammad Javed Ansari
- Department of Botany, Hindu College Moradabad (Mahatma Jyotiba Phule Rohilkhand University Bareilly), Moradabad, India
8. G. JB, E.S. G. An hierarchical approach for automatic segmentation of leaf images with similar background using kernel smoothing based Gaussian process regression. Ecol Inform 2021. [DOI: 10.1016/j.ecoinf.2021.101323]
9. Li Y, Wen W, Guo X, Yu Z, Gu S, Yan H, Zhao C. High-throughput phenotyping analysis of maize at the seedling stage using end-to-end segmentation network. PLoS One 2021; 16:e0241528. [PMID: 33434222; PMCID: PMC7802938; DOI: 10.1371/journal.pone.0241528]
Abstract
Image processing technologies are available for high-throughput acquisition and analysis of phenotypes for crop populations, which is of great significance for crop growth monitoring, evaluation of seedling condition, and cultivation management. However, existing methods rely on empirical segmentation thresholds and thus can extract phenotypes with insufficient accuracy. Taking maize as an example crop, we propose a phenotype extraction approach from top-view images at the seedling stage. An end-to-end segmentation network, named PlantU-net, which uses a small amount of training data, was explored to realize automatic segmentation of top-view images of a maize population at the seedling stage. Morphological and color-related phenotypes were automatically extracted, including maize shoot coverage, circumscribed radius, aspect ratio, and plant azimuth plane angle. The results show that the approach can segment the shoots at the seedling stage from top-view images, obtained from either a UAV or a tractor-based high-throughput phenotyping platform. The average segmentation accuracy, recall rate, and F1 score are 0.96, 0.98, and 0.97, respectively. The extracted phenotypes, including maize shoot coverage, circumscribed radius, aspect ratio, and plant azimuth plane angle, are highly correlated with manual measurements (R2 = 0.96-0.99). This approach requires less training data and is thus easier to extend. It provides practical means for high-throughput phenotyping analysis of early growth stage crop populations.
Affiliation(s)
- Yinglun Li
- College of Resources and Environment, Jilin Agricultural University, Changchun, China
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Weiliang Wen
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Xinyu Guo
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Zetao Yu
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Shenghao Gu
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Haipeng Yan
- Beijing Shunxin Agricultural Science and Technology Co., Ltd, Beijing, China
- Chunjiang Zhao
- College of Resources and Environment, Jilin Agricultural University, Changchun, China
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
10. Das Choudhury S, Maturu S, Samal A, Stoerger V, Awada T. Leveraging Image Analysis to Compute 3D Plant Phenotypes Based on Voxel-Grid Plant Reconstruction. Front Plant Sci 2020; 11:521431. [PMID: 33362806; PMCID: PMC7755976; DOI: 10.3389/fpls.2020.521431]
Abstract
High throughput image-based plant phenotyping facilitates the extraction of morphological and biophysical traits of a large number of plants non-invasively in a relatively short time. It facilitates the computation of advanced phenotypes by considering the plant as a single object (holistic phenotypes) or its components, i.e., leaves and the stem (component phenotypes). The architectural complexity of plants increases over time due to variations in self-occlusions and phyllotaxy, i.e., the arrangement of leaves around the stem. One of the central challenges to computing phenotypes from 2-dimensional (2D) single-view images of plants, especially at the advanced vegetative stage in the presence of self-occluding leaves, is that the information captured in 2D images is incomplete, and hence, the computed phenotypes are inaccurate. We introduce a novel algorithm to compute 3-dimensional (3D) plant phenotypes from multiview images using voxel-grid reconstruction of the plant (3DPhenoMV). The paper also presents a novel method to reliably detect and separate the individual leaves and the stem from the 3D voxel-grid of the plant using voxel overlapping consistency checks and point cloud clustering techniques. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln 3D Plant Phenotyping Dataset (UNL-3DPPD). A generic taxonomy of 3D image-based plant phenotypes is also presented to promote 3D plant phenotyping research. A subset of these phenotypes is computed using computer vision algorithms, with discussion of their significance in the context of plant science.
The central contributions of the paper are (a) an algorithm for 3D voxel-grid reconstruction of maize plants at the advanced vegetative stages using images from multiple 2D views; (b) a generic taxonomy of 3D image-based plant phenotypes and a public benchmark dataset, i.e., UNL-3DPPD, to promote the development of 3D image-based plant phenotyping research; and (c) novel voxel overlapping consistency check and point cloud clustering techniques to detect and isolate individual leaves and stem of the maize plants to compute the component phenotypes. Detailed experimental analyses demonstrate the efficacy of the proposed method, and also show the potential of 3D phenotypes to explain the morphological characteristics of plants regulated by genetic and environmental interactions.
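The voxel-grid reconstruction idea, keeping a voxel only if it projects inside the plant silhouette in every 2D view, can be sketched with simple orthographic carving. This is an assumed simplification: 3DPhenoMV works from calibrated multi-view images, whereas the silhouettes below are hypothetical binary masks.

```python
import numpy as np

N = 32
# Hypothetical silhouettes: a disc in the top view (indexed [y, x]) and a
# half-height block in the side view (indexed [z, x]).
yy, xx = np.mgrid[0:N, 0:N]
top = (xx - N / 2) ** 2 + (yy - N / 2) ** 2 < (N / 4) ** 2
side = np.zeros((N, N), dtype=bool)
side[: N // 2, :] = True  # plant occupies the lower half in the side view

# Space carving: voxel (x, y, z) survives iff its projection is foreground
# in both views, i.e. top[y, x] AND side[z, x].
vox = top.T[:, :, None] & side.T[:, None, :]  # shape (N, N, N), vox[x, y, z]
```

Leaf/stem separation would then operate on the surviving voxels, e.g. by the overlap-consistency and clustering steps the paper describes.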
Affiliation(s)
- Sruti Das Choudhury
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
- Srikanth Maturu
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
- Ashok Samal
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
- Vincent Stoerger
- Agricultural Research Division, University of Nebraska-Lincoln, Lincoln, NE, United States
- Tala Awada
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Agricultural Research Division, University of Nebraska-Lincoln, Lincoln, NE, United States
11. Yu JG, Li Y, Gao C, Gao H, Xia GS, Yu ZL, Li Y. Exemplar-Based Recursive Instance Segmentation With Application to Plant Image Analysis. IEEE Trans Image Process 2019; 29:389-404. [PMID: 31329554; DOI: 10.1109/tip.2019.2923571]
Abstract
Instance segmentation is a challenging computer vision problem which lies at the intersection of object detection and semantic segmentation. Motivated by plant image analysis in the context of plant phenotyping, a recently emerging application field of computer vision, this paper presents the Exemplar-Based Recursive Instance Segmentation (ERIS) framework. A three-layer probabilistic model is first introduced to jointly represent hypotheses, voting elements, instance labels and their connections. Afterwards, a recursive optimization algorithm is developed to infer the maximum a posteriori (MAP) solution, which handles one instance at a time by alternating among the three steps of detection, segmentation and update. The proposed ERIS framework departs from previous works mainly in two respects. First, it is exemplar-based and model-free, and can achieve instance-level segmentation of a specific object class given only a handful of (typically fewer than 10) annotated exemplars. This merit enables its use when no massive manually-labeled data are available for training strong classification models, as required by most existing methods. Second, instead of attempting to infer the solution in a single shot, which suffers from extremely high computational complexity, our recursive optimization strategy allows for reasonably efficient MAP inference in the full hypothesis space. The ERIS framework is substantialized for the specific application of plant leaf segmentation in this work. Experiments are conducted on public benchmarks to demonstrate the superiority of our method in both effectiveness and efficiency in comparison with the state of the art.
12. Das Choudhury S, Samal A, Awada T. Leveraging Image Analysis for High-Throughput Plant Phenotyping. Front Plant Sci 2019; 10:508. [PMID: 31068958; PMCID: PMC6491831; DOI: 10.3389/fpls.2019.00508]
Abstract
The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant's phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to sensing and quantifying plant traits non-destructively by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially by considering their individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to the translation of advances in phenotyping technology into genetic insights, due to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants; (3) a brief discussion on publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.
Affiliation(s)
- Sruti Das Choudhury
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
- Ashok Samal
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
- Tala Awada
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Agricultural Research Division, University of Nebraska-Lincoln, Lincoln, NE, United States
13. Li D, Cao Y, Tang XS, Yan S, Cai X. Leaf Segmentation on Dense Plant Point Clouds with Facet Region Growing. Sensors 2018; 18:3625. [PMID: 30366434; PMCID: PMC6263610; DOI: 10.3390/s18113625]
Abstract
Leaves account for the largest proportion of organ area for most kinds of plants and comprise the main part of the photosynthetically active material in a plant. Observation of individual leaves can help to recognize their growth status and measure complex phenotypic traits. Current image-based leaf segmentation methods are restricted to a narrow range of species and are vulnerable to canopy occlusion. In this work, we propose an individual leaf segmentation approach for dense plant point clouds using facet over-segmentation and facet region growing. The approach can be divided into three steps: (1) point cloud pre-processing, (2) facet over-segmentation, and (3) facet region growing for individual leaf segmentation. The experimental results show that the proposed method is effective and efficient in segmenting individual leaves from 3D point clouds of greenhouse ornamentals such as Epipremnum aureum, Monstera deliciosa, and Calathea makoyana, with average precision and recall both above 90%. The results also reveal the wide applicability of the proposed methodology to point clouds scanned from different kinds of 3D imaging systems, such as stereo vision and Kinect v2. Moreover, our method is potentially applicable in a broad range of applications that aim at segmenting regular surfaces and objects from a point cloud.
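The region-growing step can be illustrated with a toy version that grows a region over raw points by normal similarity. This is an assumed simplification: the paper grows regions over planar facets produced by over-segmentation, and the radius and angle thresholds below are hypothetical.

```python
import numpy as np

def region_grow(points, normals, seed, radius=0.25, angle_deg=15.0):
    """Grow one region from `seed`: accept unlabelled neighbours within
    `radius` whose normals differ by less than `angle_deg` degrees."""
    cos_t = np.cos(np.radians(angle_deg))
    labels = -np.ones(len(points), dtype=int)
    labels[seed] = 0
    region, stack = [], [seed]
    while stack:
        i = stack.pop()
        region.append(i)
        d = np.linalg.norm(points - points[i], axis=1)
        for j in np.where((d < radius) & (labels < 0))[0]:
            if abs(normals[j] @ normals[i]) > cos_t:  # similar orientation
                labels[j] = 0
                stack.append(j)
    return np.array(region)

# Two flat synthetic "leaves": one in the z=0 plane, one vertical and far away.
rng = np.random.default_rng(1)
a = np.c_[rng.uniform(0, 1, (100, 2)), np.zeros(100)]                 # normal (0,0,1)
b = np.c_[rng.uniform(0, 1, 100), np.full(100, 2.0), rng.uniform(0, 1, 100)]
pts = np.vstack([a, b])
nrm = np.vstack([np.tile([0.0, 0.0, 1.0], (100, 1)),
                 np.tile([0.0, 1.0, 0.0], (100, 1))])

leaf = region_grow(pts, nrm, seed=0)  # grows only within the first leaf
```

Because the second leaf lies beyond the neighbour radius and has a different normal, growth stops at the first leaf's boundary, which is the behaviour that separates touching but differently oriented leaves.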
Affiliation(s)
- Dawei Li
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
- Yan Cao
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Xue-Song Tang
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
- Siyuan Yan
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Xin Cai
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China