1. Deep learning for Chilean native flora classification: a comparative analysis. Frontiers in Plant Science 2023; 14:1211490. PMID: 37767291; PMCID: PMC10520280; DOI: 10.3389/fpls.2023.1211490.
Abstract
The limited availability of information on Chilean native flora has resulted in a lack of knowledge among the general public, and the classification of these plants poses challenges without extensive expertise. This study evaluates the performance of several Deep Learning (DL) models, namely InceptionV3, VGG19, ResNet152, and MobileNetV2, in classifying images of Chilean native flora. The models were pre-trained on ImageNet. A dataset of 500 images for each of 10 classes of native Chilean flowers was curated, for a total of 5000 images. The DL models were applied to this dataset, and their performance was compared on accuracy and other relevant metrics. The findings highlight the potential of DL models to accurately classify images of Chilean native flora. The results contribute to a better understanding of these plant species and foster awareness among the general public. Further improvements and applications of DL in ecology and biodiversity research are discussed.
2. A Novel Computer Vision Model for Medicinal Plant Identification Using Log-Gabor Filters and Deep Learning Algorithms. Computational Intelligence and Neuroscience 2022; 2022:1189509. PMID: 36203732; PMCID: PMC9532088; DOI: 10.1155/2022/1189509.
Abstract
Computer vision is the science that enables computers and machines to see and perceive image content on a semantic level. It combines concepts, techniques, and ideas from various fields such as digital image processing, pattern matching, artificial intelligence, and computer graphics. A computer vision system is designed to model the human visual system as closely as possible on a functional basis. Deep learning, and in particular the biologically inspired Convolutional Neural Networks (CNNs), has contributed significantly to computer vision studies. This research develops a computer vision system that combines CNNs and handcrafted Log-Gabor filters in an ensemble to identify medicinal plants from their leaf textural features. The system was tested on a dataset developed from the Centre of Plant Medicine Research, Ghana (MyDataset), consisting of forty-nine (49) plant species. Using transfer learning, ten pretrained networks, AlexNet, GoogLeNet, DenseNet201, InceptionV3, MobileNetV2, ResNet18, ResNet50, ResNet101, VGG16, and VGG19, were used as feature extractors. Averaged across six supervised learning algorithms, DenseNet201 achieved the best outcome with 87% accuracy and GoogLeNet the worst with 79%. The proposed model (OTAMNet), created by fusing a Log-Gabor layer into the transition layers of the DenseNet201 architecture, achieved 98% accuracy on MyDataset. OTAMNet was also tested on other benchmark datasets, reaching 99% on Flavia, 100% on Swedish Leaf, 99% on MD2020, and 97% on the Folio dataset. A false-positive rate of less than 0.1% was achieved in all cases.
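The abstract does not give the exact Log-Gabor formulation used in OTAMNet; as a hedged illustration, a common radial log-Gabor transfer function (function and parameter names here are my own, not the paper's) can be built and applied in the frequency domain roughly like this:

```python
import numpy as np

def log_gabor(size, wavelength, sigma_ratio=0.55):
    """Radial log-Gabor transfer function on a size x size frequency grid."""
    f0 = 1.0 / wavelength                         # centre frequency
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                            # dodge log(0) at the DC term
    lg = np.exp(-(np.log(radius / f0) ** 2) /
                (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0                                # log-Gabor filters have no DC response
    return lg

def filter_image(image, lg):
    """Apply the filter by pointwise multiplication in the frequency domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * lg))
```

The filter response (or its magnitude) would then serve as a leaf-texture feature map used alongside the CNN features.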
3. Plant recognition by AI: Deep neural nets, transformers, and kNN in deep embeddings. Frontiers in Plant Science 2022; 13:787527. PMID: 36237508; PMCID: PMC9551576; DOI: 10.3389/fpls.2022.787527.
Abstract
The article reviews and benchmarks machine learning methods for automatic image-based plant species recognition and proposes a novel retrieval-based method for recognition by nearest neighbor classification in a deep embedding space. The image retrieval method relies on a model trained via the Recall@k surrogate loss. State-of-the-art approaches to image classification, based on Convolutional Neural Networks (CNN) and Vision Transformers (ViT), are benchmarked and compared with the proposed image retrieval-based method. The impact of performance-enhancing techniques, e.g., class prior adaptation, image augmentations, learning rate scheduling, and loss functions, is studied. The evaluation is carried out on the PlantCLEF 2017, ExpertLifeCLEF 2018, and iNaturalist 2018 datasets, the largest publicly available datasets for plant recognition. The evaluation of CNN and ViT classifiers shows a gradual improvement in classification accuracy. The current state-of-the-art Vision Transformer model, ViT-Large/16, achieves 91.15% and 83.54% accuracy on the PlantCLEF 2017 and ExpertLifeCLEF 2018 test sets, respectively, reducing the error rate of the best CNN model (ResNeSt-269e) by 22.91% and 28.34%. In addition, the performance-enhancing techniques increased the accuracy of ViT-Base/32 by 3.72% on ExpertLifeCLEF 2018 and by 4.67% on PlantCLEF 2017. The retrieval approach achieved superior performance in all measured scenarios, with accuracy margins of 0.28%, 4.13%, and 10.25% on ExpertLifeCLEF 2018, PlantCLEF 2017, and iNat2018-Plantae, respectively.
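At inference time, the retrieval-based method reduces to k-nearest-neighbour classification in the deep embedding space. A minimal NumPy sketch (cosine similarity with a majority vote; the names and the toy 2-D embeddings are illustrative, not the paper's Recall@k-trained model):

```python
import numpy as np

def knn_classify(query, gallery, labels, k=3):
    """Classify a query embedding by majority vote over its k nearest
    gallery embeddings under cosine similarity."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = g @ q                          # cosine similarity to every gallery item
    nearest = np.argsort(-sims)[:k]       # indices of the k most similar items
    classes, counts = np.unique(labels[nearest], return_counts=True)
    return classes[np.argmax(counts)]
```

In the paper's setting, `gallery` would hold the embeddings of all labelled training images and `query` the embedding of a test photo.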
4. Review of plant leaf recognition. Artificial Intelligence Review 2022. DOI: 10.1007/s10462-022-10278-2.
5. Deep learning approaches and interventions for futuristic engineering in agriculture. Neural Computing and Applications 2022. DOI: 10.1007/s00521-022-07744-x.
6. Exploring Soybean Flower and Pod Variation Patterns During Reproductive Period Based on Fusion Deep Learning. Frontiers in Plant Science 2022; 13:922030. PMID: 35909768; PMCID: PMC9326440; DOI: 10.3389/fpls.2022.922030.
Abstract
Soybean flower and pod drop are important factors in soybean yield, and using computer vision techniques to obtain flower and pod phenotypes in bulk, quickly, and accurately is key to studying the soybean flower and pod drop rate (PDR). This paper compared a variety of deep learning algorithms for identifying and counting soybean flowers and pods and found that the Faster R-CNN model performed best. The Faster R-CNN model was then further improved and optimized based on the characteristics of soybean flowers and pods, raising the accuracy of identifying flowers and pods to 94.36% and 91%, respectively. Afterward, a fusion model for soybean flower and pod recognition and counting was proposed based on the Faster R-CNN model; the coefficient of determination (R²) between the fusion model's counts of soybean flowers and pods and manual counts reached 0.965 and 0.98, respectively. These results show that the fusion model is a robust recognition and counting algorithm that can reduce labor intensity and improve efficiency. Its application will greatly facilitate the study of the variable patterns of soybean flowers and pods during the reproductive period. Finally, based on the fusion model, we explored the variable patterns of soybean flowers and pods during the reproductive period, their spatial distribution patterns, and flower and pod drop patterns.
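The agreement metric reported here, the coefficient of determination R² between automated and manual counts, is a short computation; a small sketch over hypothetical per-plot count arrays (not the paper's data):

```python
import numpy as np

def r_squared(manual, predicted):
    """Coefficient of determination R^2 of predicted counts against manual counts."""
    manual = np.asarray(manual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((manual - predicted) ** 2)       # residual sum of squares
    ss_tot = np.sum((manual - manual.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

An R² near 1, as reported for the fusion model, means the automated counts track the manual counts almost exactly.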
7. Automatic Fungi Recognition: Deep Learning Meets Mycology. Sensors 2022; 22:633. PMID: 35062595; PMCID: PMC8779018; DOI: 10.3390/s22020633.
Abstract
The article presents an AI-based fungi species recognition system for a citizen-science community. The system's real-time identification tool, FungiVision, with a mobile application front-end, led to increased public interest in fungi, quadrupling the number of citizens collecting data. FungiVision, deployed with a human in the loop, reaches nearly 93% accuracy. Using the collected data, we developed a novel fine-grained classification dataset, Danish Fungi 2020 (DF20), with several unique characteristics: species-level labels, a small number of errors, and rich observation metadata. The dataset enables testing the ability to improve classification using metadata, e.g., time, location, habitat, and substrate; facilitates classifier calibration testing; and allows studying the impact of device settings on classification performance. The continual flow of labelled data supports improvements to the online recognition system. Finally, we present a novel method for the fungi recognition service, based on a Vision Transformer architecture. Trained on DF20 and exploiting the available metadata, it achieves a recognition error 46.75% lower than the current system's. By providing a stream of labeled data in one direction, and an accuracy increase in the other, the collaboration creates a virtuous cycle helping both communities.
9. Tree trunk texture classification using multi-scale statistical macro binary patterns and CNN. Applied Soft Computing 2022. DOI: 10.1016/j.asoc.2022.108473.
10. Automated feature-specific tree species identification from natural images using deep semi-supervised learning. Ecological Informatics 2021. DOI: 10.1016/j.ecoinf.2021.101475.
11. Barley Variety Identification by iPhone Images and Deep Learning. Journal of the American Society of Brewing Chemists 2021. DOI: 10.1080/03610470.2021.1958602.
12. Plant image identification application demonstrates high accuracy in Northern Europe. AoB Plants 2021; 13:plab050. PMID: 34457230; PMCID: PMC8387968; DOI: 10.1093/aobpla/plab050.
Abstract
Automated image-based plant identification has developed rapidly and is already used in research and nature management. However, extensive studies are needed on how accurately automatic plant identification works and which characteristics of observations and study species influence the results. We investigated the accuracy of the Flora Incognita application, a research-based tool for automated plant image identification. Our study was conducted in Estonia, Northern Europe. We examined photos from the Estonian national curated biodiversity observations database that had originally been taken without any intention of automated identification (1496 photos, 542 species). Flora Incognita was also tested directly in field conditions in various habitats, taking images of plant organs as guided by the application (998 observations, 1703 photos, 280 species). Identification accuracy was compared among species characteristics: plant family, growth form, life form, habitat type, and regional frequency. We also analysed image characteristics (plant organs, background, number of species in focus) and the number of training images available per species for developing the automated identification algorithm. Flora Incognita correctly identified 79.6% of species from database images; in field conditions, species identification accuracy reached 85.3%. Overall, the correct genus was found for 89% and the correct family for 95% of species. Accuracy varied among plant families, life forms, and growth forms. Rare and common species, and species from different habitats, were identified with equal accuracy. Images with reproductive organs or with only the target species in focus were identified with greater success. The number of training images per species was positively correlated with identification success. Even though Flora Incognita already achieves a high accuracy, allowing its use in research and practice, our results can guide further improvements of this application and of automated plant identification in general.
13. Convolutional Neural Networks to Estimate Dry Matter Yield in a Guineagrass Breeding Program Using UAV Remote Sensing. Sensors 2021; 21:3971. PMID: 34207543; PMCID: PMC8227058; DOI: 10.3390/s21123971.
Abstract
Forage dry matter is the main source of nutrients in the diet of ruminant animals, so this trait is evaluated in most forage breeding programs with the objective of increasing yield. Novel solutions combining unmanned aerial vehicles (UAVs) and computer vision are crucial to increase the efficiency of forage breeding programs and to support high-throughput phenotyping (HTP) aimed at estimating parameters correlated with important traits. The main goal of this study was to propose a convolutional neural network (CNN) approach using UAV RGB imagery to estimate dry matter yield traits in a guineagrass breeding program. An experiment comprising 330 plots of full-sib families and checks, conducted at Embrapa Beef Cattle, Brazil, was used. The image dataset was composed of images obtained with an RGB sensor embedded in a Phantom 4 PRO. The traits leaf dry matter yield (LDMY) and total dry matter yield (TDMY) were obtained by conventional agronomic methodology and considered the ground-truth data. Different CNN architectures were analyzed, such as AlexNet, ResNeXt50, DarkNet53, and two networks recently proposed for related tasks, MaCNN and LF-CNN. Pretrained AlexNet and ResNeXt50 architectures were also studied. Ten-fold cross-validation was used for training and testing the model. Estimates of DMY traits by each CNN architecture were treated as new HTP traits and compared with the real traits. The Pearson correlation coefficient r between real and HTP traits ranged from 0.62 to 0.79 for LDMY and from 0.60 to 0.76 for TDMY; the root mean square error (RMSE) ranged from 286.24 to 366.93 kg·ha⁻¹ for LDMY and from 413.07 to 506.56 kg·ha⁻¹ for TDMY. All the CNNs generated heritable HTP traits, except LF-CNN for LDMY and AlexNet for TDMY. Genetic correlations between real and HTP traits were high but varied according to the CNN architecture. The HTP trait from the pretrained ResNeXt50 achieved the best results for indirect selection regardless of the dry matter trait. This demonstrates that CNNs with remote sensing data are highly promising for HTP of dry matter yield traits in forage breeding programs.
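The two agreement statistics used in this study, Pearson's r and the root mean square error between real and HTP traits, are straightforward to reproduce; a sketch over hypothetical yield vectors (illustrative values, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two trait vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def rmse(x, y):
    """Root mean square error between two trait vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sqrt(np.mean((x - y) ** 2))
```

Here `x` would hold the ground-truth yields in kg·ha⁻¹ and `y` the CNN-estimated HTP trait for the same plots.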
14. A multi-division convolutional neural network-based plant identification system. PeerJ Computer Science 2021; 7:e572. PMID: 34141894; PMCID: PMC8176547; DOI: 10.7717/peerj-cs.572.
Abstract
BACKGROUND Plants have an important place in the life of all living things. Today, many plant species are at risk of extinction due to climate change and its environmental impact. Researchers have therefore conducted various studies aimed at protecting the diversity of the planet's plant life. Research in this area generally aims to determine plant species and diseases, predominantly from plant images. Advances in deep learning techniques have produced very successful results in this field and are widely used to identify plant species. METHODS In this paper, a Multi-Division Convolutional Neural Network (MD-CNN)-based plant recognition system was developed to address an agricultural problem related to the classification of plant species. In the proposed system, plant images are divided into equal n×n-sized pieces, and deep features are extracted for each piece using a Convolutional Neural Network (CNN). From each piece's deep features, effective features are selected using the Principal Component Analysis (PCA) algorithm. Finally, the selected features are combined and classified using the Support Vector Machine (SVM) method. RESULTS To test the performance of the proposed deep-feature-based system, eight plant datasets were used: Flavia, Swedish, ICL, Foliage, Folio, Flower17, Flower102, and LeafSnap. In these experiments, 100% accuracy was achieved on the Flavia, Swedish, and Folio datasets, while the ICL, Foliage, Flower17, Flower102, and LeafSnap datasets yielded 99.77%, 99.93%, 97.87%, 98.03%, and 94.38%, respectively.
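The multi-division step, splitting each image into equal n×n pieces and reducing each piece's features with PCA, can be sketched as follows (NumPy only; a real MD-CNN would extract CNN features per tile before PCA, and all names here are illustrative):

```python
import numpy as np

def split_into_tiles(image, n):
    """Divide an image into n x n equal tiles, row-major order."""
    h, w = image.shape[:2]
    th, tw = h // n, w // n
    return [image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(n) for c in range(n)]

def pca_reduce(features, k):
    """Project a (samples, dims) feature matrix onto its top-k principal components."""
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T
```

The per-tile reduced features would then be concatenated and passed to the SVM classifier.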
15. Central Attention and a Dual Path Convolutional Neural Network in Real-World Tree Species Recognition. International Journal of Environmental Research and Public Health 2021; 18:961. PMID: 33499249; PMCID: PMC7908595; DOI: 10.3390/ijerph18030961.
Abstract
Identifying plants is not only a job for professionals; it is also useful, or even essential, for plant lovers and the general public. Although deep learning approaches to plant recognition, driven by the success of convolutional neural networks (CNNs), are promising, their performance still falls short of the requirements of in-field scenarios. First, we propose a central attention concept that focuses on the target rather than the background of the image for tree species recognition, preventing model training from being confused by background clutter. We established a dual-path CNN deep learning framework in which the central attention model and an InceptionV3-based CNN model automatically extract features; the two models are then learned together with a shared classification layer. Experimental results confirm the effectiveness of the proposed approach, which outperformed each single path alone as well as existing methods in the whole plant recognition system. Additionally, we created our own tree image database, in which each photo contains rich information on the entire tree rather than an individual plant organ. Lastly, we developed a prototype online/offline tree species identification system on a consumer mobile platform that identifies tree species not only by image recognition but also by real-time remote detection and classification.
17. PANOMICS meets germplasm. Plant Biotechnology Journal 2020; 18:1507-1525. PMID: 32163658; PMCID: PMC7292548; DOI: 10.1111/pbi.13372.
Abstract
Genotyping-by-sequencing has enabled genomic selection approaches to improve yield, stress resistance and nutritional value. More and more resource studies are emerging that provide 1000 or more genotypes and millions of SNPs for a single species, covering hitherto inaccessible intraspecific genetic variation. The larger the databases grow, the better the statistical approaches for genomic selection that will become available. However, there are clear limitations on both the statistical and the biological side. Intraspecific genetic variation can explain a high proportion of the phenotypes, but a large part of phenotypic plasticity also stems from environmentally driven transcriptional, post-transcriptional, translational, post-translational, epigenetic and metabolic regulation. Moreover, regulation of the same gene can have different phenotypic outputs in different environments. Consequently, to explain and understand environment-dependent phenotypic plasticity from the available genotype variation, we have to integrate the analysis of further molecular levels reflecting the complete information flow from gene to metabolism to phenotype. Interestingly, metabolomics platforms are already more cost-effective than NGS platforms and are decisive for the prediction of nutritional value or stress resistance. Here, we propose three fundamental pillars for future breeding strategies in the framework of Green Systems Biology: (i) combining genomic selection with environment-dependent PANOMICS analysis and deep learning to improve prediction accuracy for marker-dependent trait performance; (ii) PANOMICS resolution at subtissue, cellular and subcellular level to provide information about the fundamental functions of selected markers; (iii) combining PANOMICS with genome editing and speed breeding tools to accelerate and enhance large-scale functional validation of trait-specific precision breeding.
19. Deep learning for image-based large-flowered chrysanthemum cultivar recognition. Plant Methods 2019; 15:146. PMID: 31827578; PMCID: PMC6892201; DOI: 10.1186/s13007-019-0532-7.
Abstract
BACKGROUND Cultivar recognition is basic work in flower production, research, and commercial application. Chinese large-flowered chrysanthemum (Chrysanthemum × morifolium Ramat.) is remarkable for its high ornamental value and rich cultural heritage. However, the complicated capitulum structure, diverse floret types and numerous cultivars hinder chrysanthemum cultivar recognition. Here, we explore how deep learning methods can be applied to chrysanthemum cultivar recognition. RESULTS We propose deep learning models with two networks, VGG16 and ResNet50, to recognize large-flowered chrysanthemum. Dataset A, comprising 14,000 images of 103 cultivars, and dataset B, comprising 197 images from different years, were collected. Dataset A was used to train the networks and determine the calibration accuracy (Top-5 rate above 98%), and dataset B was used to evaluate the generalization performance of the models (Top-5 rate above 78%). Moreover, gradient-weighted class activation mapping (Grad-CAM) visualization and feature clustering analysis were used to explore how the deep learning model recognizes chrysanthemum cultivars. CONCLUSION Applying deep learning to cultivar recognition is a breakthrough in horticultural science, offering strong recognition performance and high recognition speed. Inflorescence edge areas, disc floret areas, inflorescence colour and inflorescence shape may well be the key factors in the model's decision-making process, and these are also critical in human decision-making.
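Grad-CAM itself is a short computation once the activations of the last convolutional layer and the gradients of the class score with respect to them are available; a framework-free NumPy sketch (the input arrays stand in for what a real VGG16/ResNet50 backward pass would supply):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Gradient-weighted Class Activation Map for one image.

    activations: (C, H, W) feature maps of the last conv layer
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps
    """
    weights = gradients.mean(axis=(1, 2))              # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                               # normalise to [0, 1] for overlay
    return cam
```

Upsampled to the input resolution and overlaid on the photo, the map highlights regions, such as inflorescence edges or disc florets, that drove the prediction.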
20. Deep Learning Techniques for Grape Plant Species Identification in Natural Images. Sensors 2019; 19:4850. PMID: 31703313; PMCID: PMC6891615; DOI: 10.3390/s19224850.
Abstract
Frequently, vineyards in the Douro Region have multiple grape varieties per parcel and even per row. An automatic algorithm for grape variety identification is proposed as an integrated software component that can be applied, for example, to a robotic harvesting system. Its development, however, faces several issues and constraints: images captured in the natural environment, a low volume of images, high similarity among images of different grape varieties, leaf senescence, and significant changes in grapevine leaf and bunch images across harvest seasons, mainly due to adverse climatic conditions, diseases, and the presence of pesticides. In this paper, the performance of transfer learning and fine-tuning techniques based on the AlexNet architecture was evaluated for the identification of grape varieties. Two natural vineyard image datasets were captured in different geographical locations and harvest seasons. To generate different datasets for training and classification, several image processing methods were used, including a proposed four-corners-in-one image warping algorithm. An AlexNet-based transfer learning scheme, trained on the image dataset pre-processed with the four-corners-in-one method, achieved a test accuracy of 77.30%. Applying this classifier model to the popular Flavia leaf dataset, an accuracy of 89.75% was reached. The results obtained by the proposed approach are promising and encouraging for helping Douro wine growers with the automatic task of identifying grape varieties.
21. Computer vision-based phenotyping for improvement of plant productivity: a machine learning perspective. GigaScience 2019; 8:giy153. PMID: 30520975; PMCID: PMC6312910; DOI: 10.1093/gigascience/giy153.
Abstract
Employing computer vision to extract useful information from images and videos is becoming a key technique for identifying phenotypic changes in plants. Here, we review the emerging aspects of computer vision for automated plant phenotyping. Recent advances in image analysis empowered by machine learning-based techniques, including convolutional neural network-based modeling, have expanded their application to assist high-throughput plant phenotyping. Combinatorial use of multiple sensors to acquire various spectra has allowed us to noninvasively obtain a series of datasets, including those related to the development and physiological responses of plants throughout their life. Automated phenotyping platforms accelerate the elucidation of gene functions associated with traits in model plants under controlled conditions. Remote sensing techniques with image collection platforms, such as unmanned vehicles and tractors, are also emerging for large-scale field phenotyping for crop breeding and precision agriculture. Computer vision-based phenotyping will play significant roles in both the nowcasting and forecasting of plant traits through modeling of genotype/phenotype relationships.
23
|
Deep Learning for Plant Stress Phenotyping: Trends and Future Perspectives. TRENDS IN PLANT SCIENCE 2018; 23:883-898. [PMID: 30104148 DOI: 10.1016/j.tplants.2018.07.004] [Citation(s) in RCA: 181] [Impact Index Per Article: 30.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2018] [Revised: 06/17/2018] [Accepted: 07/11/2018] [Indexed: 05/18/2023]
Abstract
Deep learning (DL), a subset of machine learning approaches, has emerged as a versatile tool to assimilate large amounts of heterogeneous data and provide reliable predictions of complex and uncertain phenomena. These tools are increasingly being used by the plant science community to make sense of the large datasets now regularly collected via high-throughput phenotyping and genotyping. We review recent work where DL principles have been utilized for digital image-based plant stress phenotyping. We provide a comparative assessment of DL tools against other existing techniques, with respect to decision accuracy, data size requirement, and applicability in various scenarios. Finally, we outline several avenues of research leveraging current and future DL tools in plant science.
24. Estimation of vegetation indices for high-throughput phenotyping of wheat using aerial imaging. Plant Methods 2018; 14:20. PMID: 29563961; PMCID: PMC5851000; DOI: 10.1186/s13007-018-0287-6.
Abstract
BACKGROUND Unmanned aerial vehicles offer precision agriculture an opportunity to monitor agricultural land efficiently. A vegetation index (VI) derived from an aerially observed multispectral image (MSI) can quantify crop health, moisture, and nutrient content. However, due to the high cost of multispectral sensors, alternative low-cost solutions have lately received great interest. We present a novel method for model-based estimation of a VI using RGB color images. The non-linear spatio-spectral relationship between the RGB image of vegetation and the index computed from its corresponding MSI is learned through deep neural networks. The learned models can be used to estimate the VI of a crop segment. RESULTS Analysis of images obtained in wheat breeding trials shows that the aerially observed VI was highly correlated with the ground-measured VI. In addition, VI estimates based on RGB images were highly correlated with the VI deduced from MSIs. Spatial, spectral, and temporal information in the images all contributed to the estimation of the VI. Both intra-variety and inter-variety differences were preserved by the estimated VI. However, VI estimates were reliable only until just before the significant appearance of senescence. CONCLUSION The proposed approach validates that a VI can be accurately estimated using deep neural networks. The results show that RGB images contain sufficient information for VI estimation and demonstrate that low-cost VI measurement is possible with standard RGB cameras.
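A representative VI that such a model would be trained to reproduce is the NDVI, computed from the MSI's near-infrared and red bands (the formula is standard; the epsilon guard against division by zero is my addition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalised Difference Vegetation Index from NIR and red reflectance bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

A deep network, as in the paper, would then regress this per-pixel target from the co-registered RGB image alone.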