1. Research in methodologies for modelling the oral cavity. Biomed Phys Eng Express 2024; 10:032001. PMID: 38350128; DOI: 10.1088/2057-1976/ad28cc.
Abstract
The paper aims to explore the current state of understanding surrounding in silico oral modelling. This involves exploring methodologies, technologies and approaches pertaining to the modelling of the whole oral cavity: both internally and externally visible structures that may be relevant or appropriate to oral actions. Such a model could be referred to as a 'complete model', which includes consideration of a full set of facial features (i.e. not only the mouth) as well as synergistic stimuli such as audio and facial thermal data. 3D modelling technologies capable of accurately and efficiently capturing a complete representation of the mouth for an individual have broad applications in the study of oral actions, due to their cost-effectiveness and time efficiency. This review delves into the field of clinical phonetics to classify oral actions pertaining to both speech and non-speech movements, identifying how the various vocal organs play a role in the articulatory and masticatory processes. Vitally, it provides a summation of 12 articulatory recording methods, forming a tool to be used by researchers in identifying which method of recording is appropriate for their work. After addressing the cost and resource-intensive limitations of existing methods, a new system of modelling is proposed that leverages external-to-internal correlation modelling techniques to create more efficient models of the oral cavity. The vision is that the outcomes will be applicable to a broad spectrum of oral functions related to physiology, health and wellbeing, including speech, oral processing of foods and dental health. Applications may range from speech correction to designing foods for the ageing population, while in the dental field information about a patient's oral actions could form part of a personalised dental treatment plan.

2. Semantic Segmentation of Spontaneous Intracerebral Hemorrhage, Intraventricular Hemorrhage, and Associated Edema on CT Images Using Deep Learning. Radiol Artif Intell 2022; 4:e220096. PMID: 36523645; PMCID: PMC9745441; DOI: 10.1148/ryai.220096.
Abstract
This study evaluated deep learning algorithms for semantic segmentation and quantification of intracerebral hemorrhage (ICH), perihematomal edema (PHE), and intraventricular hemorrhage (IVH) on noncontrast CT scans of patients with spontaneous ICH. Models were assessed on 1732 annotated baseline noncontrast CT scans obtained from the Tranexamic Acid for Hyperacute Primary Intracerebral Haemorrhage (ie, TICH-2) international multicenter trial (ISRCTN93732214), and different loss functions using a three-dimensional no-new-U-Net (nnU-Net) were examined to address class imbalance (30% of participants with IVH in dataset). On the test cohort (n = 174, 10% of dataset), the top-performing models achieved median Dice similarity coefficients of 0.92 (IQR, 0.89-0.94), 0.66 (0.58-0.71), and 1.00 (0.87-1.00), respectively, for ICH, PHE, and IVH segmentation. U-Net-based networks showed comparable, satisfactory performance on ICH and PHE segmentation (P > .05), but all nnU-Net variants achieved higher accuracy than the Brain Lesion Analysis and Segmentation Tool for CT (BLAST-CT) and DeepLabv3+ for all labels (P < .05). The Focal model showed improved performance in IVH segmentation compared with the Tversky, two-dimensional nnU-Net, U-Net, BLAST-CT, and DeepLabv3+ models (P < .05). Focal achieved concordance values of 0.98, 0.88, and 0.99 for ICH, PHE, and IVH volumes, respectively. The mean volumetric differences between the ground truth and prediction were 0.32 mL (95% CI: -8.35, 9.00), 1.14 mL (-9.53, 11.8), and 0.06 mL (-1.71, 1.84), respectively. In conclusion, U-Net-based networks provide accurate segmentation on CT images of spontaneous ICH, and Focal loss can address class imbalance. International Clinical Trials Registry Platform (ICTRP) no. ISRCTN93732214. Supplemental material is available for this article. © RSNA, 2022. Keywords: Head/Neck, Brain/Brain Stem, Hemorrhage, Segmentation, Quantification, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms.
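
For readers unfamiliar with the two quantities driving these results, the sketch below shows how a Dice similarity coefficient and a binary focal loss can be computed on toy 3D masks; the array shapes, parameter values and helper names are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def dice(pred_mask, target_mask):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred_mask, target_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + target_mask.sum())

def focal_loss(probs, target, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights easy voxels so that rare classes
    (such as IVH) contribute more to the gradient."""
    probs = np.clip(probs, eps, 1 - eps)
    pt = np.where(target == 1, probs, 1 - probs)
    weight = np.where(target == 1, alpha, 1 - alpha)
    return float(np.mean(-weight * (1 - pt) ** gamma * np.log(pt)))

probs = np.random.rand(64, 64, 64)            # toy predicted probabilities
target = np.random.rand(64, 64, 64) > 0.97    # toy sparse ground-truth mask
print(dice(probs > 0.5, target), focal_loss(probs, target.astype(int)))
```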

3. Domain Adaptation of Synthetic Images for Wheat Head Detection. Plants (Basel) 2021; 10:2633. PMID: 34961104; PMCID: PMC8708756; DOI: 10.3390/plants10122633.
Abstract
Wheat head detection is a core computer vision problem related to plant phenotyping that in recent years has seen increased interest as large-scale datasets have been made available for use in research. In deep learning problems with limited training data, synthetic data have been shown to improve performance by increasing the number of training examples available, but have had limited effectiveness due to domain shift. To overcome this, adversarial approaches such as Generative Adversarial Networks (GANs) have been proposed to better align the distribution of synthetic data with that of real images through domain augmentation. In this paper, we examine the impact of performing wheat head detection on the Global Wheat Head Challenge dataset using synthetic data to supplement the original dataset. Through our experimentation, we demonstrate the challenges of performing domain augmentation where the target domain is large and diverse. We then present a novel approach to improving scores through using heatmap regression as a support network, and clustering to combat the high variation of the target domain.

4. GANana: Unsupervised Domain Adaptation for Volumetric Regression of Fruit. Plant Phenomics 2021; 2021:9874597. PMID: 34708214; PMCID: PMC8520669; DOI: 10.34133/2021/9874597.
Abstract
3D reconstruction of fruit is important as a key component of fruit grading and an important part of many size estimation pipelines. Like many computer vision challenges, the 3D reconstruction task suffers from a lack of readily available training data in most domains, with methods typically depending on large datasets of high-quality image-model pairs. In this paper, we propose an unsupervised domain-adaptation approach to 3D reconstruction where labelled images only exist in our source synthetic domain, and training is supplemented with different unlabelled datasets from the target real domain. We approach the problem of 3D reconstruction using volumetric regression and produce a training set of 25,000 pairs of images and volumes using hand-crafted 3D models of bananas rendered in a 3D modelling environment (Blender). Each image is then enhanced by a GAN to more closely match the domain of real photographs, and a volumetric consistency loss is introduced to improve the performance of 3D reconstruction on real images. Our solution harnesses the cost benefits of synthetic data while still maintaining good performance on real-world images. We focus this work on the task of 3D banana reconstruction from a single image, representing a common task in plant phenotyping, but this approach is general and may be adapted to any 3D reconstruction task including other plant species and organs.
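
A minimal sketch of what a volumetric consistency term of this kind could look like is shown below: the GAN-translated synthetic image is required to regress to the same ground-truth voxel grid as the image it came from. The network shapes, loss choice and variable names are illustrative assumptions only, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

D = 16  # toy voxel grid resolution

G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())    # synthetic -> "realistic" image translator
V = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, D**3))  # image -> flattened voxel occupancy logits

x_syn = torch.randn(2, 3, 32, 32)               # rendered synthetic banana images
y_vox = torch.randint(0, 2, (2, D**3)).float()  # ground-truth occupancy from the 3D models

# Volumetric consistency: after translation towards the real domain, the image
# must still reconstruct the volume it was rendered from.
loss_vol = nn.BCEWithLogitsLoss()(V(G(x_syn)), y_vox)
loss_vol.backward()  # in practice this term would be added to the usual GAN losses
```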

5. Active Vision and Surface Reconstruction for 3D Plant Shoot Modelling. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2020; 17:1907-1917. PMID: 31027044; DOI: 10.1109/tcbb.2019.2896908.
Abstract
Plant phenotyping is the quantitative description of a plant's physiological, biochemical, and anatomical status, which can be used in trait selection and helps to provide mechanisms to link underlying genetics with yield. Here, an active vision-based pipeline is presented which aims to contribute to reducing the bottleneck associated with phenotyping of architectural traits. The pipeline provides a fully automated approach to photometric data acquisition and the recovery of three-dimensional (3D) models of plants without requiring botanical expertise, whilst ensuring a non-intrusive and non-destructive approach. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC), consisting of a camera-mounted robot arm, a combined software interface and a novel surface reconstruction algorithm, is proposed. This pipeline provides a robust, flexible, and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm can reduce noise and provides a promising and extendable framework for high-throughput phenotyping, improving on current state-of-the-art methods. Furthermore, the pipeline can be applied to any plant species or form due to the application of an active vision framework combined with the automatic selection of key parameters for surface reconstruction.

6. Volumetric Segmentation of Cell Cycle Markers in Confocal Images Using Machine Learning and Deep Learning. Frontiers in Plant Science 2020; 11:1275. PMID: 32983190; PMCID: PMC7483761; DOI: 10.3389/fpls.2020.01275.
Abstract
Understanding plant growth processes is important for many aspects of biology and food security. Automating the observation of plant development, a process referred to as plant phenotyping, is increasingly important in the plant sciences, and is often a bottleneck. Automated tools are required to analyze the data in microscopy images depicting plant growth, either locating or counting regions of cellular features in images. In this paper, we present to the plant community an introduction to and exploration of two machine learning approaches to address the problem of marker localization in confocal microscopy. First, a comparative study is conducted on the classification accuracy of common conventional machine learning algorithms, as a means to highlight challenges with these methods. Second, a 3D (volumetric) deep learning approach is developed and presented, including consideration of appropriate loss functions and training data. A qualitative and quantitative analysis of all the results produced is performed. Evaluation of all approaches is performed on an unseen time-series sequence comprising several individual 3D volumes, capturing plant growth. The comparative analysis shows that the deep learning approach produces more accurate and robust results than traditional machine learning. To accompany the paper, we are releasing the 4D point annotation tool used to generate the annotations, in the form of a plugin for the popular ImageJ (FIJI) software. Network models and example datasets will also be available online.

7. Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields. Plant Methods 2020; 16:29. PMID: 32165909; PMCID: PMC7059384; DOI: 10.1186/s13007-020-00570-z.
Abstract
BACKGROUND: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. RESULTS: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively, with 6.48 ms inference time per image (800 × 1200) on an NVIDIA Titan X GPU. CONCLUSION: The developed model has the potential to be deployed on an embedded mobile platform like the Jetson TX for online weed detection and management due to its high-speed inference. We recommend using synthetic images and empirical field images together during training to improve the performance of models.
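
As a brief illustration of the anchor-box step described above, the sketch below clusters bounding-box widths and heights with k-means to obtain six anchors; the box sizes here are randomly generated stand-ins, and plain Euclidean k-means is assumed (YOLO implementations often cluster with a 1 - IoU distance instead).

```python
import numpy as np
from sklearn.cluster import KMeans

# Bounding-box sizes (width, height in pixels) from the training annotations;
# random values stand in for the real C. sepium / sugar beet boxes here.
rng = np.random.default_rng(1)
box_sizes = rng.uniform(20, 300, size=(2723, 2))

# Six anchors for a tiny-YOLOv3-style detector (two scales x three anchors).
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(box_sizes)
anchors = sorted(kmeans.cluster_centers_.round().astype(int).tolist())
print(anchors)  # anchor (width, height) pairs, smallest first
```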

8. Towards infield, live plant phenotyping using a reduced-parameter CNN. Machine Vision and Applications 2019; 31:2. PMID: 31894176; PMCID: PMC6917635; DOI: 10.1007/s00138-019-01051-7.
Abstract
There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of infield deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on some millions of model parameters and generate very large weight matrices, thus making them difficult to deploy infield on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable infield and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the number of model parameters and the size of the weight matrices of these very deep CNN-based models. Using the combined method reduced the weight matrices by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
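
The SVD part of such a compression scheme can be sketched in a few lines: a single large weight matrix is replaced by the product of two thin factors. The layer size and rank below are arbitrary illustrative choices, not the values used in the paper.

```python
import numpy as np

# A dense layer weight matrix from a (hypothetical) very deep segmentation CNN.
W = np.random.randn(4096, 4096)

# Truncated SVD: W is approximated by (U_k * s_k) @ Vt_k, i.e. two thin layers.
k = 100
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_a = U[:, :k] * s[:k]   # shape (4096, k)
W_b = Vt[:k, :]          # shape (k, 4096)

original = W.size
compressed = W_a.size + W_b.size
print(f"parameters: {original} -> {compressed} "
      f"({100 * (1 - compressed / original):.1f}% reduction)")
```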

9. Convolutional Neural Net-Based Cassava Storage Root Counting Using Real and Synthetic Images. Frontiers in Plant Science 2019; 10:1516. PMID: 31850020; PMCID: PMC6888701; DOI: 10.3389/fpls.2019.01516.
Abstract
Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state-of-the-art in many image processing and analysis tasks, there are currently a limited number of Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low-cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images. Our system first predicts age group ('young' and 'old' roots; pertinent to our image capture regime) in a given image, and then, based on this prediction, selects an appropriate model to predict the number of storage roots. We achieve 91% accuracy on predicting ages of storage roots, and 86% and 71% overall percentage agreement on counting 'old' and 'young' storage roots, respectively. Thus we are able to demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.
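
To make the phrase 'direct image-to-count prediction' concrete, the sketch below defines a toy CNN that regresses a scalar count straight from an image and trains it with a mean-squared-error loss on mixed real and GAN-synthesised examples; the architecture, sizes and names are assumptions for illustration, not the model from the paper.

```python
import torch
import torch.nn as nn

class CountRegressor(nn.Module):
    """Toy direct image-to-count CNN."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single scalar: the storage-root count

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = CountRegressor()
images = torch.randn(4, 3, 128, 128)         # stand-ins for real + synthetic root images
counts = torch.tensor([3.0, 5.0, 2.0, 7.0])  # annotated storage-root counts
loss = nn.MSELoss()(model(images), counts)   # regress directly to the count
loss.backward()
```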

10. A low-cost aeroponic phenotyping system for storage root development: unravelling the below-ground secrets of cassava (Manihot esculenta). Plant Methods 2019; 15:131. PMID: 31728153; PMCID: PMC6842211; DOI: 10.1186/s13007-019-0517-6.
Abstract
BACKGROUND: Root and tuber crops are becoming increasingly important as a source of carbohydrates, second only to cereals. Despite their commercial impact, there are significant knowledge gaps about the environmental and inherent regulation of storage root (SR) differentiation, due in part to the innate problems of studying storage roots and the lack of a suitable model system for monitoring storage root growth. The research presented here aimed to develop a reliable, low-cost, effective system that enables the study of the factors influencing cassava storage root initiation and development. RESULTS: We explored simple, low-cost systems for the study of storage root biology. An aeroponics system described here is ideal for real-time monitoring of storage root development (SRD), and this was further validated using hormone studies. Our aeroponics-based auxin studies revealed that storage root initiation and development are adaptive responses, which are significantly enhanced by exogenous auxin supply. Field and histological experiments were also conducted to confirm the auxin effect found in the aeroponics system. We also developed a simple digital imaging platform to quantify storage root growth and development traits. Correlation analysis confirmed that image-based estimation can be a surrogate for manual root phenotyping for several key traits. CONCLUSIONS: The aeroponic system developed from this study is an effective tool for examining the root architecture of cassava during early SRD. The aeroponic system also provided novel insights into storage root formation by activating the auxin-dependent proliferation of secondary xylem parenchyma cells to induce the initial root thickening and bulking. The developed system can be of direct benefit to molecular biologists, breeders, and physiologists, allowing them to screen germplasm for root traits that correlate with improved economic traits.

11. RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures. Gigascience 2019; 8:giz123. PMID: 31702012; PMCID: PMC6839032; DOI: 10.1093/gigascience/giz123.
Abstract
BACKGROUND: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images presents a significant computer vision challenge. Root images contain complicated structures, variations in size, background, occlusion, clutter and variation in lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy, with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images. CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.

12. A convolutional neural network for fast upsampling of undersampled tomograms in X-ray CT time-series using a representative highly sampled tomogram. Journal of Synchrotron Radiation 2019; 26:839-853. PMID: 31074449; PMCID: PMC6510199; DOI: 10.1107/s1600577519003448.
Abstract
X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated due to the high rate of data collection required to capture at sufficient time-resolution events of interest in a time-series, compelling researchers to perform data collection with a low number of projections for each tomogram in order to achieve the desired 'frame rate'. It is common practice to collect a representative tomogram with many projections before or after the time-critical portion of the experiment, without detrimentally affecting the time-series, to aid the analysis process. In this paper these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. A super-resolution approach is proposed based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample. This is done using learned behaviour from a dataset containing a high number of projections, taken of the same sample and occurring at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling process subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as downstream it enables easier segmentation of the tomograms in areas of interest, for example. The method itself comprises a convolutional neural network which through training learns an end-to-end mapping between sinograms with a low and a high number of projections. Since datasets can differ greatly between experiments, this approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, accompanying real-world experimental datasets have been released in the form of two 80 GB tomograms depicting a metallic pin that undergoes corrosion from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
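
For orientation, upscaling the θ-axis of a sinogram simply means increasing the number of projection rows; the snippet below shows the kind of plain linear interpolation baseline that the learned UDNN mapping is reported to outperform. The sinogram size and upscaling factor are arbitrary illustrative values.

```python
import numpy as np
from scipy.ndimage import zoom

# Toy sinogram: 50 projections (theta-axis) x 256 detector pixels, standing in
# for one undersampled tomogram from a rapidly captured time series.
sparse_sinogram = np.random.rand(50, 256)

# Interpolation baseline: stretch the theta-axis by 4x (50 -> 200 projections).
# A UDNN-style network would instead learn this mapping from the representative,
# highly sampled tomogram collected before or after the time-critical period.
upscaled = zoom(sparse_sinogram, (4, 1), order=1)

print(sparse_sinogram.shape, "->", upscaled.shape)
```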

13. Root branching toward water involves posttranslational modification of transcription factor ARF7. Science 2018; 362:1407-1410. PMID: 30573626; DOI: 10.1126/science.aau3956.
Abstract
Plants adapt to heterogeneous soil conditions by altering their root architecture. For example, roots branch when in contact with water by using the hydropatterning response. We report that hydropatterning is dependent on auxin response factor ARF7. This transcription factor induces asymmetric expression of its target gene LBD16 in lateral root founder cells. This differential expression pattern is regulated by posttranslational modification of ARF7 with the small ubiquitin-like modifier (SUMO) protein. SUMOylation negatively regulates ARF7 DNA binding activity. ARF7 SUMOylation is required to recruit the Aux/IAA (indole-3-acetic acid) repressor protein IAA3. Blocking ARF7 SUMOylation disrupts IAA3 recruitment and hydropatterning. We conclude that SUMO-dependent regulation of auxin response controls root branching pattern in response to water availability.

14. Plant Phenotyping: An Active Vision Cell for Three-Dimensional Plant Shoot Reconstruction. Plant Physiology 2018; 178:524-534. PMID: 30097468; PMCID: PMC6181042; DOI: 10.1104/pp.18.00664.
Abstract
Three-dimensional (3D) computer-generated models of plants are urgently needed to support both phenotyping and simulation-based studies such as photosynthesis modeling. However, the construction of accurate 3D plant models is challenging, as plants are complex objects with an intricate leaf structure, often consisting of thin and highly reflective surfaces that vary in shape and size, forming dense, complex, crowded scenes. We address these issues within an image-based method by taking an active vision approach to image acquisition, one that investigates the scene in order to capture images intelligently. Rather than use the same camera positions for all plants, our technique is to acquire the images needed to reconstruct the target plant, tuning camera placement to match the plant's individual structure. Our method also combines volumetric- and surface-based reconstruction methods and determines the necessary images based on the analysis of voxel clusters. We describe a fully automatic plant modeling/phenotyping cell (or module) comprising a six-axis robot and a high-precision turntable. By using a standard color camera, we overcome the difficulties associated with laser-based plant reconstruction methods. The 3D models produced are compared with those obtained from fixed cameras and evaluated by comparison with data obtained by x-ray microcomputed tomography across different plant structures. Our results show that our method is successful in improving the accuracy and quality of data obtained from a variety of plant types.

15. Erratum to: Deep machine learning provides state-of-the-art performance in image-based plant phenotyping. Gigascience 2018; 7:5057043. PMID: 30053289; PMCID: PMC6055546; DOI: 10.1093/gigascience/giy042.

16. Cellular Patterning of Arabidopsis Roots Under Low Phosphate Conditions. Frontiers in Plant Science 2018; 9:735. PMID: 29922313; PMCID: PMC5996075; DOI: 10.3389/fpls.2018.00735.
Abstract
Phosphorus is a crucial macronutrient for plants, playing a critical role in many cellular signaling and energy cycling processes. In light of this, phosphorus acquisition efficiency is an important target trait for crop improvement, but it also provides an ecological adaptation for growth of plants in low-nutrient environments. Increased root hair density has been shown to improve phosphorus uptake and plant health in a number of species. In several plant families, including Brassicaceae, root hair bearing cells are positioned on the epidermis according to their position in relation to cortex cells, with hair cells positioned in the cleft between two underlying cortex cells. Thus the number of cortex cells determines the number of epidermal cells in the root hair position. Previous research has associated phosphorus-limiting conditions with an increase in the number of cortex cell files in Arabidopsis thaliana roots, but it has not investigated the spatial or temporal domains in which these extra divisions occur, or explored the consequences for root hair formation. In this study, we use 3D reconstructions of root meristems to demonstrate that the radial anticlinal cell divisions seen under low phosphate are exclusive to the cortex. When grown on media containing replete levels of phosphorus, A. thaliana plants almost invariably show eight cortex cells; however, when grown in phosphate-limited conditions, seedlings develop up to 16 cortex cells (with 10-14 being the most typical). This results in a significant increase in the number of epidermal cells at hair-forming positions. These radial anticlinal divisions occur within the initial cells and can be seen within 24 h of transfer of plants to low-phosphorus conditions. We show that these changes in the underlying cortical cells feed into epidermal patterning by altering the regular spacing of root hairs.

17. Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress. Plant Methods 2017; 13:80. PMID: 29051772; PMCID: PMC5634902; DOI: 10.1186/s13007-017-0233-z.
Abstract
This review explores how imaging techniques are being developed with a focus on deployment for crop monitoring methods. Imaging applications are discussed in relation to both field and glasshouse-based plants, and techniques are sectioned into 'healthy and diseased plant classification' with an emphasis on classification accuracy, early detection of stress, and disease severity. A central focus of the review is the use of hyperspectral imaging and how this is being utilised to find additional information about plant health, and the ability to predict onset of disease. A summary of techniques used to detect biotic and abiotic stress in plants is presented, including the level of accuracy associated with each method.

18. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping. Gigascience 2017; 6:1-10. PMID: 29020747; PMCID: PMC5632296; DOI: 10.1093/gigascience/gix083.
Abstract
In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of these datasets, which are now often captured robotically, precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets.

19. Volume Segmentation and Analysis of Biological Materials Using SuRVoS (Super-region Volume Segmentation) Workbench. J Vis Exp 2017. PMID: 28872144; PMCID: PMC5614362; DOI: 10.3791/56162.
Abstract
Segmentation is the process of isolating specific regions or objects within an imaged volume, so that further study can be undertaken on these areas of interest. When considering the analysis of complex biological systems, the segmentation of three-dimensional image data is a time-consuming and labor-intensive step. With the increased availability of many imaging modalities and with automated data collection schemes, this poses an increased challenge for the modern experimental biologist to move from data to knowledge. This publication describes the use of SuRVoS Workbench, a program designed to address these issues by providing methods to semi-automatically segment complex biological volumetric data. Three datasets of differing magnification and imaging modalities are presented here, each highlighting different strategies of segmenting with SuRVoS. Phase contrast X-ray tomography (microCT) of the fruiting body of a plant is used to demonstrate segmentation using model training, cryo electron tomography (cryoET) of human platelets is used to demonstrate segmentation using super- and megavoxels, and cryo soft X-ray tomography (cryoSXT) of a mammalian cell line is used to demonstrate the label splitting tools. Strategies and parameters for each datatype are also presented. By blending a selection of semi-automatic processes into a single interactive tool, SuRVoS provides several benefits. Overall time to segment volumetric data is reduced by a factor of five when compared to manual segmentation, a mainstay in many image processing fields. This is a significant saving when full manual segmentation can take weeks of effort. Additionally, subjectivity is addressed through the use of computationally identified boundaries, and by splitting complex collections of objects by their calculated properties rather than on a case-by-case basis.

20. The Microphenotron: a robotic miniaturized plant phenotyping platform with diverse applications in chemical biology. Plant Methods 2017; 13:10. PMID: 28265297; PMCID: PMC5333401; DOI: 10.1186/s13007-017-0158-6.
Abstract
BACKGROUND: Chemical genetics provides a powerful alternative to conventional genetics for understanding gene function. However, its application to plants has been limited by the lack of a technology that allows detailed phenotyping of whole-seedling development in the context of a high-throughput chemical screen. We have therefore sought to develop an automated micro-phenotyping platform that would allow both root and shoot development to be monitored under conditions where the phenotypic effects of large numbers of small molecules can be assessed. RESULTS: The 'Microphenotron' platform uses 96-well microtitre plates to deliver chemical treatments to seedlings of Arabidopsis thaliana L. and is based around four components: (a) the 'Phytostrip', a novel seedling growth device that enables chemical treatments to be combined with the automated capture of images of developing roots and shoots; (b) an illuminated robotic platform that uses a commercially available robotic manipulator to capture images of developing shoots and roots; (c) software to control the sequence of robotic movements and integrate these with the image capture process; (d) purpose-made image analysis software for automated extraction of quantitative phenotypic data. Imaging of each plate (representing 80 separate assays) takes 4 min and can easily be performed daily for time-course studies. As currently configured, the Microphenotron has a capacity of 54 microtitre plates in a growth room footprint of 2.1 m², giving a potential throughput of up to 4320 chemical treatments in a typical 10-day experiment. The Microphenotron has been validated by using it to screen a collection of 800 natural compounds for qualitative effects on root development and to perform a quantitative analysis of the effects of a range of concentrations of nitrate and ammonium on seedling development. CONCLUSIONS: The Microphenotron is an automated screening platform that for the first time is able to combine large numbers of individual chemical treatments with a detailed analysis of whole-seedling development, and particularly root system development. The Microphenotron should provide a powerful new tool for chemical genetics and for wider chemical biology applications, including the development of natural and synthetic chemical products for improved agricultural sustainability.

21. SuRVoS: Super-Region Volume Segmentation workbench. J Struct Biol 2017; 198:43-53. PMID: 28246039; PMCID: PMC5405849; DOI: 10.1016/j.jsb.2017.02.007.
Abstract
Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, makes the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.
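
The 'Super-Region' idea of grouping voxels into larger units before interactive labelling can be illustrated with an off-the-shelf supervoxel algorithm; the snippet below uses SLIC from scikit-image on a toy volume, which is only an analogy for the hierarchical partitioning implemented in SuRVoS (parameter values are arbitrary, and older scikit-image releases use multichannel=False instead of channel_axis=None).

```python
import numpy as np
from skimage.segmentation import slic

# Toy grayscale volume standing in for a noisy cryoET or cryoSXT dataset.
volume = np.random.rand(64, 64, 64)

# Partition the volume into ~500 supervoxels, the kind of first "super-region"
# layer that sparse user annotations are then propagated over.
supervoxels = slic(volume, n_segments=500, compactness=0.1, channel_axis=None)

print(supervoxels.shape, supervoxels.max())  # label volume and number of regions
```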

22. AutoRoot: open-source software employing a novel image analysis approach to support fully-automated plant phenotyping. Plant Methods 2017; 13:12. PMID: 28286542; PMCID: PMC5341458; DOI: 10.1186/s13007-017-0161-y.
Abstract
BACKGROUND: Computer-based phenotyping of plants has risen in importance in recent years. Whilst much software has been written to aid phenotyping using image analysis, to date the vast majority has been only semi-automatic. However, such interaction is not desirable in high-throughput approaches. Here, we present a system designed to analyse plant images in a completely automated manner, allowing genuine high-throughput measurement of root traits. To do this we introduce a new set of proxy traits. RESULTS: We test the system on a new, automated image capture system, the Microphenotron, which is able to image many thousands of roots per hour. A simple experiment is presented, treating the plants with differing chemical conditions to produce different phenotypes. The automated imaging setup and the new software tool were used to measure proxy traits in each well. A correlation matrix was calculated across automated and manual measures, as a validation. Some particular proxy measures are very highly correlated with the manual measures (e.g. proxy length to manual length, r² > 0.9). This suggests that while the automated measures are not directly equivalent to classic manual measures, they can be used to indicate phenotypic differences (hence the term, proxy). In addition, the raw discriminative power of the new proxy traits was examined. Principal component analysis was performed across all proxy measures over two phenotypically different groups of plants. Many of the proxy traits can be used to separate the data in the two conditions. CONCLUSION: The new proxy traits proposed tend to correlate well with equivalent manual measures, where these exist. Additionally, the new measures display strong discriminative power. It is suggested that for particular phenotypic differences, different traits will be relevant, and not all will have meaningful manual equivalent measures. However, approaches such as PCA can be used to interrogate the resulting data to identify differences between datasets. Select images can then be carefully manually inspected if the nature of the precise differences is required. We suggest such flexible measurement approaches are necessary for fully automated, high-throughput systems such as the Microphenotron.
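
The PCA step described above can be reproduced in a few lines; in this sketch the trait matrix is simulated rather than measured, purely to show how wells from two chemical conditions can separate along the leading components.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows = wells (plants), columns = automated proxy traits; simulated values
# stand in for the real Microphenotron measurements.
rng = np.random.default_rng(0)
control = rng.normal(loc=1.0, scale=0.1, size=(40, 6))
treated = rng.normal(loc=1.4, scale=0.1, size=(40, 6))
traits = np.vstack([control, treated])

pca = PCA(n_components=2)
scores = pca.fit_transform(traits)

# The two conditions separate along the first principal component.
print(pca.explained_variance_ratio_)
print(scores[:40, 0].mean(), scores[40:, 0].mean())
```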

23. Approaches to three-dimensional reconstruction of plant shoot topology and geometry. Functional Plant Biology 2016; 44:62-75. PMID: 32480547; DOI: 10.1071/fp16167.
Abstract
There are currently 805 million people classified as chronically undernourished, and yet the world's population is still increasing. At the same time, global warming is causing more frequent and severe flooding and drought, thus destroying crops and reducing the amount of land available for agriculture. Recent studies show that without crop climate adaptation, crop productivity will deteriorate. With access to 3D models of real plants it is possible to acquire detailed morphological and gross developmental data that can be used to study their ecophysiology, leading to an increase in crop yield and stability across hostile and changing environments. Here we review approaches to the reconstruction of 3D models of plant shoots from image data, consider current applications in plant and crop science, and identify remaining challenges. We conclude that although phenotyping is receiving an increasing amount of attention, particularly from computer vision researchers, and numerous vision approaches have been proposed, it still remains a highly interactive process. An automated system capable of producing 3D models of plants would significantly aid phenotyping practice, increasing accuracy and repeatability of measurements.

24. Quantification of fluorescent reporters in plant cells. Methods Mol Biol 2015; 1242:123-131. PMID: 25408449; DOI: 10.1007/978-1-4939-1902-4_11.
Abstract
Fluorescent reporters are powerful tools for plant research. Many studies require accurate determination of fluorescence intensity and localization. Here, we describe protocols for the quantification of fluorescence intensity in plant cells from confocal laser scanning microscope images using semiautomated software and image analysis techniques.
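
As a concrete (and deliberately simplified) illustration of what quantification of fluorescence intensity from a confocal image can involve, the snippet below measures background-corrected mean and integrated intensity inside a segmented region; the image, mask and background estimate are toy assumptions, and real protocols differ in how segmentation and background correction are done.

```python
import numpy as np

# Toy confocal slice: 'img' is the reporter channel, 'mask' a segmented cell/nucleus.
img = np.random.poisson(50, size=(512, 512)).astype(float)
mask = np.zeros((512, 512), dtype=bool)
mask[200:300, 200:300] = True

background = np.median(img[~mask])              # background estimated outside the region
mean_intensity = img[mask].mean() - background  # background-corrected mean intensity
integrated = (img[mask] - background).sum()     # integrated density over the region

print(mean_intensity, integrated)
```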

25. Automated recovery of three-dimensional models of plant shoots from multiple color images. Plant Physiology 2014; 166:1688-1698. PMID: 25332504; PMCID: PMC4256878; DOI: 10.1104/pp.114.248971.
Abstract
Increased adoption of the systems approach to biological research has focused attention on the use of quantitative models of biological objects. This includes a need for realistic three-dimensional (3D) representations of plant shoots for quantification and modeling. Previous limitations in single-view or multiple-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level-set method, optimizing the model based on image information, curvature constraints, and the position of neighboring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed and, as such, is applicable to a wide variety of plant species and topologies and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on data sets of wheat (Triticum aestivum) and rice (Oryza sativa) plants as well as a unique virtual data set that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modeling applications in a format that can be imported in the majority of 3D graphics and software packages.
Collapse
|
28
|
Mechanical modelling quantifies the functional importance of outer tissue layers during root elongation and bending. THE NEW PHYTOLOGIST 2014; 202:1212-1222. [PMID: 24641449 PMCID: PMC4286105 DOI: 10.1111/nph.12764] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2013] [Accepted: 02/02/2014] [Indexed: 05/18/2023]
Abstract
Root elongation and bending require the coordinated expansion of multiple cells of different types. These processes are regulated by the action of hormones that can target distinct cell layers. We use a mathematical model to characterise the influence of the biomechanical properties of individual cell walls on the properties of the whole tissue. Taking a simple constitutive model at the cell scale which characterises cell walls via yield and extensibility parameters, we derive the analogous tissue-level model to describe elongation and bending. To accurately parameterise the model, we take detailed measurements of cell turgor, cell geometries and wall thicknesses. The model demonstrates how cell properties and shapes contribute to tissue-level extensibility and yield. Exploiting the highly organised structure of the elongation zone (EZ) of the Arabidopsis root, we quantify the contributions of different cell layers, using the measured parameters. We show how distributions of material and geometric properties across the root cross-section contribute to the generation of curvature, and relate the angle of a gravitropic bend to the magnitude and duration of asymmetric wall softening. We quantify the geometric factors which lead to the predominant contribution of the outer cell files in driving root elongation and bending.
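The yield-and-extensibility description referred to here is commonly written as a Lockhart-type constitutive relation; the form below is a generic sketch of that relation, not necessarily the authors' exact formulation.

```latex
% Generic Lockhart-type constitutive relation (a sketch, not the paper's exact model):
% relative elongation rate of a cell wall as a function of turgor pressure P,
% yield threshold Y and extensibility \phi.
\[
  \frac{1}{l}\frac{\mathrm{d}l}{\mathrm{d}t} =
  \begin{cases}
    \phi\,(P - Y), & P > Y,\\[4pt]
    0, & P \le Y.
  \end{cases}
\]
```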
Collapse
|
29
|
Systems analysis of auxin transport in the Arabidopsis root apex. THE PLANT CELL 2014; 26:862-75. [PMID: 24632533 PMCID: PMC4001398 DOI: 10.1105/tpc.113.119495] [Citation(s) in RCA: 148] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/09/2013] [Revised: 01/06/2014] [Accepted: 02/14/2014] [Indexed: 05/17/2023]
Abstract
Auxin is a key regulator of plant growth and development. Within the root tip, auxin distribution plays a crucial role in specifying developmental zones and coordinating tropic responses. Determining how the organ-scale auxin pattern is regulated at the cellular scale is essential to understanding how these processes are controlled. In this study, we developed an auxin transport model based on actual root cell geometries and carrier subcellular localizations. We tested model predictions using the DII-VENUS auxin sensor in conjunction with state-of-the-art segmentation tools. Our study revealed that auxin efflux carriers alone cannot create the pattern of auxin distribution at the root tip and that AUX1/LAX influx carriers are also required. We observed that AUX1 in lateral root cap (LRC) and elongating epidermal cells greatly enhances auxin's shootward flux, with this flux being predominantly through the LRC, entering the epidermal cells only as they enter the elongation zone. We conclude that the nonpolar AUX1/LAX influx carriers control which tissues have high auxin levels, whereas the polar PIN carriers control the direction of auxin transport within these tissues.
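As a schematic only, carrier-based auxin transport models of this kind balance influx- and efflux-mediated fluxes across each membrane; the simplified form below omits features such as pH-dependent partitioning, background permeability and the measured cell geometries used in the actual study.

```latex
% Schematic membrane flux for a carrier-based auxin transport model (simplified sketch).
% a_wall, a_cell: auxin concentrations; P_AUX1, P_PIN: effective carrier permeabilities.
\[
  J_{\text{wall}\rightarrow\text{cell}} \approx
  P_{\mathrm{AUX1}}\, a_{\text{wall}} \;-\; P_{\mathrm{PIN}}\, a_{\text{cell}},
\]
% so net accumulation in a cell of volume V with membrane segments of area A follows
\[
  V \frac{\mathrm{d}a_{\text{cell}}}{\mathrm{d}t} =
  \sum_{\text{membranes}} A\, J_{\text{wall}\rightarrow\text{cell}}.
\]
```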
Collapse
|
30
|
|
31
|
Sequential induction of auxin efflux and influx carriers regulates lateral root emergence. Mol Syst Biol 2013; 9:699. [PMID: 24150423 PMCID: PMC3817398 DOI: 10.1038/msb.2013.43] [Citation(s) in RCA: 91] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2013] [Accepted: 08/06/2013] [Indexed: 12/15/2022] Open
Abstract
Emergence of a new lateral root primordium through the outer layers of the parental root requires the sequential auxin-mediated induction of two auxin transporters. This positive feedback regulatory loop coordinates patterned gene expression in outer tissues.
The emergence of lateral roots through several tissues requires the precise regulation of gene expression in overlaying cells to trigger cell separation. Auxin derived from new lateral root primordia induces a positive feedback loop in the outer tissues by promoting the expression of the auxin influx transporter LAX3. A mathematical model based on realistic 3D geometries predicted the involvement of an auxin efflux carrier that was later identified to be PIN3. The model also revealed that PIN3 must be expressed before LAX3 to ensure a 'robust' pattern of LAX3 induction in just two overlaying cortical cell files, thereby delimiting cell separation.
In Arabidopsis, lateral roots originate from pericycle cells deep within the primary root. New lateral root primordia (LRP) have to emerge through several overlaying tissues. Here, we report that auxin produced in new LRP is transported towards the outer tissues where it triggers cell separation by inducing both the auxin influx carrier LAX3 and cell-wall enzymes. LAX3 is expressed in just two cell files overlaying new LRP. To understand how this striking pattern of LAX3 expression is regulated, we developed a mathematical model that captures the network regulating its expression and auxin transport within realistic three-dimensional cell and tissue geometries. Our model revealed that, for the LAX3 spatial expression to be robust to natural variations in root tissue geometry, an efflux carrier is required—later identified to be PIN3. To prevent LAX3 from being transiently expressed in multiple cell files, PIN3 and LAX3 must be induced consecutively, which we later demonstrated to be the case. Our study exemplifies how mathematical models can be used to direct experiments to elucidate complex developmental processes.
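The feedback structure described here can be caricatured as a small ODE system; the sketch below is not the paper's 3D tissue-scale model, and all parameter values are illustrative assumptions.

```python
# Caricature ODE sketch (not the paper's 3D tissue model): auxin (A) in an
# overlaying cell induces the efflux carrier PIN3 and, more slowly, the influx
# carrier LAX3; LAX3 in turn feeds back on auxin accumulation.
# All parameter values are illustrative assumptions.
from scipy.integrate import solve_ivp

def rhs(t, y, A_source=1.0):
    A, PIN3, LAX3 = y
    hill = A**2 / (0.5**2 + A**2)                 # auxin-dependent induction
    dA = 0.5 * LAX3 * A_source - 0.3 * PIN3 * A - 0.05 * A
    dPIN3 = 1.0 * hill - 0.2 * PIN3               # induced quickly
    dLAX3 = 0.2 * hill - 0.05 * LAX3              # induced more slowly
    return [dA, dPIN3, dLAX3]

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0, 0.1], max_step=0.5)
print("final (A, PIN3, LAX3):", sol.y[:, -1])
```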
Collapse
|
32
|
RootNav: navigating images of complex root architectures. PLANT PHYSIOLOGY 2013; 162:1802-14. [PMID: 23766367 PMCID: PMC3729762 DOI: 10.1104/pp.113.221531] [Citation(s) in RCA: 123] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2013] [Accepted: 06/02/2013] [Indexed: 05/18/2023]
Abstract
We present a novel image analysis tool that allows the semiautomated quantification of complex root system architectures in a range of plant species grown and imaged in a variety of ways. The automatic component of RootNav takes a top-down approach, utilizing the powerful expectation maximization classification algorithm to examine regions of the input image, calculating the likelihood that given pixels correspond to roots. This information is used as the basis for an optimization approach to root detection and quantification, which effectively fits a root model to the image data. The resulting user experience is akin to defining routes on a motorist's satellite navigation system: RootNav makes an initial optimized estimate of paths from the seed point to root apices, and the user is able to easily and intuitively refine the results using a visual approach. The proposed method is evaluated on winter wheat (Triticum aestivum) images (and demonstrated on Arabidopsis [Arabidopsis thaliana], Brassica napus, and rice [Oryza sativa]), and results are compared with manual analysis. Four exemplar traits are calculated and show clear illustrative differences between some of the wheat accessions. RootNav, however, provides the structural information needed to support extraction of a wider variety of biologically relevant measures. A separate viewer tool is provided to recover a rich set of architectural traits from RootNav's core representation.
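As an illustration of the pixel-classification step described (not RootNav's actual implementation), an EM-fitted Gaussian mixture can assign each pixel a probability of belonging to root material; the file name and the assumption that root pixels form the brighter component are hypothetical.

```python
# Illustrative sketch only (RootNav's implementation differs): use an
# EM-fitted two-component Gaussian mixture on pixel intensities to estimate
# the probability that each pixel belongs to root material.
import numpy as np
from skimage import io
from sklearn.mixture import GaussianMixture

img = io.imread("wheat_root_plate.png", as_gray=True)      # hypothetical scan
pixels = img.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
root_component = int(np.argmax(gmm.means_))                 # brighter class assumed to be root
p_root = gmm.predict_proba(pixels)[:, root_component].reshape(img.shape)
# p_root could then seed a path-optimisation step from seed point to root tips.
```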
Collapse
|
33
|
What lies beneath: underlying assumptions in bioimage analysis. TRENDS IN PLANT SCIENCE 2012; 17:688-692. [PMID: 22902890 DOI: 10.1016/j.tplants.2012.07.003] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2012] [Revised: 07/17/2012] [Accepted: 07/25/2012] [Indexed: 05/27/2023]
Abstract
The need for plant image analysis tools is established and has led to a steadily expanding literature and set of software tools. This is encouraging, but raises a question: how does a plant scientist with no detailed knowledge or experience of image analysis methods choose the right tool(s) for the task at hand, or satisfy themselves that a suggested approach is appropriate? We believe that too great an emphasis is currently being placed on low-level mechanisms and software environments. In this opinion article we propose that a renewed focus on the core theories and algorithms used, and in particular the assumptions upon which they rely, will better equip plant scientists to evaluate the available resources.
Collapse
|
34
|
Recovering the dynamics of root growth and development using novel image acquisition and analysis methods. Philos Trans R Soc Lond B Biol Sci 2012. [DOI: 10.1098/rstb.2012.0226] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
|
35
|
Recovering the dynamics of root growth and development using novel image acquisition and analysis methods. Philos Trans R Soc Lond B Biol Sci 2012; 367:1517-24. [PMID: 22527394 PMCID: PMC3321695 DOI: 10.1098/rstb.2011.0291] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development are highly plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed that support large-scale, high-throughput experimental work and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.
Collapse
|
36
|
CellSeT: novel software to extract and analyze structured networks of plant cells from confocal images. THE PLANT CELL 2012; 24:1353-61. [PMID: 22474181 PMCID: PMC3398551 DOI: 10.1105/tpc.112.096289] [Citation(s) in RCA: 62] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2012] [Revised: 02/23/2012] [Accepted: 03/20/2012] [Indexed: 05/19/2023]
Abstract
Many cell-scale and tissue-scale measurements in the life sciences are now quantified from confocal microscope images, but extracting and analyzing large-scale confocal image data sets represents a major bottleneck for researchers. To aid this process, CellSeT software has been developed, which utilizes tissue-scale structure to help segment individual cells. We provide examples of how the CellSeT software can be used to quantify fluorescence of hormone-responsive nuclear reporters, determine membrane protein polarity, extract cell and tissue geometry for use in later modeling, and take many additional biologically relevant measures using an extensible plug-in toolset. Application of CellSeT promises to remove subjectivity from the resulting data sets and facilitate higher-throughput, quantitative approaches to plant cell research.
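As a minimal sketch of one such measurement (not CellSeT itself, which is a dedicated GUI tool), a membrane-polarity index can be computed as the ratio of mean reporter intensity on two opposite membrane faces; the image and masks below are placeholder inputs.

```python
# Minimal sketch (not CellSeT): given per-cell masks for two opposite membrane
# faces, report a simple polarity index for a membrane-localised reporter.
# The image and masks are hypothetical placeholder inputs.
import numpy as np

def polarity_index(intensity, apical_mask, basal_mask):
    """Ratio of mean reporter intensity on the apical vs basal membrane face."""
    return intensity[apical_mask].mean() / intensity[basal_mask].mean()

rng = np.random.default_rng(0)
img = rng.random((64, 64))                       # placeholder confocal slice
apical = np.zeros_like(img, dtype=bool); apical[10, 20:40] = True
basal = np.zeros_like(img, dtype=bool); basal[30, 20:40] = True
print("polarity index:", polarity_index(img, apical, basal))
```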
Collapse
|
37
|
Identifying biological landmarks using a novel cell measuring image analysis tool: Cell-o-Tape. PLANT METHODS 2012; 8:7. [PMID: 22385537 PMCID: PMC3359173 DOI: 10.1186/1746-4811-8-7] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2011] [Accepted: 03/02/2012] [Indexed: 05/18/2023]
Abstract
BACKGROUND The ability to quantify the geometry of plant organs at the cellular scale can provide novel insights into their structural organization. Hitherto, manual measurement methods have provided only low-throughput, subjective solutions, and quantitative measurements are often neglected in favour of a simple cell count. RESULTS We present a tool to count and measure individual neighbouring cells along a defined file in confocal laser scanning microscope images. The tool allows the user to extract this generic information in a flexible and intuitive manner, and builds on the raw data to detect a significant change in cell length along the file. This facility can be used, for example, to provide an estimate of the position of the transition into the elongation zone of an Arabidopsis root, traditionally a location sensitive to the subjectivity of the experimenter. CONCLUSIONS Cell-o-Tape is shown to locate cell walls with a high degree of accuracy and to estimate the location of the transition feature point in good agreement with human experts. The tool is an open-source ImageJ/Fiji macro and is available online.
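As a sketch of the underlying idea only (Cell-o-Tape itself is an ImageJ/Fiji macro), the transition point along a cell file can be estimated by finding the split in the sequence of cell lengths that minimises the summed within-segment variance; the cell lengths below are made up.

```python
# Sketch only (not the Cell-o-Tape macro): locate the point along a cell file
# where cell lengths change abruptly, by choosing the split that minimises the
# summed within-segment variance. The lengths below are invented values.
import numpy as np

def transition_index(lengths):
    lengths = np.asarray(lengths, dtype=float)
    best_i, best_cost = None, np.inf
    for i in range(2, len(lengths) - 1):          # require at least two cells per segment
        cost = lengths[:i].var() * i + lengths[i:].var() * (len(lengths) - i)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

meristem = [8, 9, 8.5, 9.2, 8.8, 9.1]             # short meristematic cells (hypothetical, um)
elongation = [15, 22, 30, 41, 55]                  # rapidly elongating cells
print("transition at cell index:", transition_index(meristem + elongation))
```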
Collapse
|
38
|
Abstract
SUMMARY The original RootTrace tool has proved successful in measuring primary root lengths across time series image data. Biologists have shown interest in using the tool to address further problems, namely counting lateral roots to use as parameters in screening studies, and measuring highly curved roots. To address this, the software has been extended to count emerged lateral roots, and the tracking model extended so that strongly curved and agravitropic roots can now be recovered. Here, we describe the novel image analysis algorithms and user interface implemented within the RootTrace framework to handle such situations and evaluate the results. AVAILABILITY The software is open source and available from http://sourceforge.net/projects/roottrace.
Collapse
|
39
|
|
40
|
|
41
|
|
42
|
|
43
|
The Angular Distribution of Long-Range Alpha-Particles from the Bombardment of Boron-11 by Protons. ACTA ACUST UNITED AC 2002. [DOI: 10.1088/0370-1298/65/9/306] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
|
44
|
|
45
|
Abstract
MMPI scores for items on the Harris and Lingoes (1955, 1968) subscales HY2 (need for affection) and PA2 (naivete) were compared among pedophiles (n = 50), non-sexually deviant psychiatric patients (n = 25), and a general control group (n = 50). Results indicated that pedophiles did not demonstrate a "naive need for affection," but, rather, a cynical need for affection. Scores on the Pedophilia (PE) Scale (Toobert, Bartelme, & Jones, 1959) also were compared, and no significant differences were found between the pedophiles and either control group, although a difference was found between the scores of the psychiatric control group and those of the general sample. Selected items from scale 4 (Psychopathic Deviate) and scale 9 (Hypomania) also produced no significant differences among groups.
Collapse
|
46
|
The suffering of dying doctors. West J Med 1992; 156:437. [PMID: 1574901 PMCID: PMC1003298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|
47
|
Bias in sex abuse evaluation. J Am Acad Child Adolesc Psychiatry 1989; 28:455-6. [PMID: 2738015 DOI: 10.1097/00004583-198905000-00029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
48
|
|
49
|
Variability of ejection fraction results using commercial cardiac phantoms. Nucl Med Commun 1988; 9:527-32. [PMID: 3173911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Variable ejection fraction results were obtained using two commercial phantoms with three different computer systems. The reasons for the results and their implications are discussed.
Collapse
|
50
|
The effect of dietary modification and hyperglycaemia on gastric emptying and gastric inhibitory polypeptide (GIP) secretion. Br J Nutr 1988; 60:29-37. [PMID: 3408703 DOI: 10.1079/bjn19880073] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
1. Five healthy volunteers whose usual fat and energy intakes were moderately high (fat intake 155 (SE 11) g/d; energy intake 13683 (SE 909) kJ/d) were given on two separate occasions (a) 96 g fat and (b) 96 g fat and intravenous (IV) glucose (250 g glucose/l; 100 ml followed by a 2 ml/min infusion for 180 min). 2. Subjects continued on a low-fat diet for 35 d (fat intake 25 (SE 4) g/d; energy intake 6976 (SE 539) kJ/d) and the tests repeated. 3. The gastric inhibitory polypeptide (GIP) response to oral fat was significantly attenuated by IV glucose whilst subjects were consuming their normal diets and the GIP response to fat alone was significantly diminished during the low-fat diet. Post-prandial plasma triglycerides, light scattering indices (LSI; an index of post-prandial chylomicronaemia) and paracetamol levels paralleled the integrated GIP responses on both normal and low-fat diets. 4. The study of oral fat with or without glucose was repeated on seven further volunteers consuming their usual diet, substituting 10 MBq 99Tcm-labelled tin colloid for the paracetamol to investigate the rate of gastric emptying by radionuclide imaging. 5. Plasma GIP, insulin, triglyceride and LSI levels were similar to those found in the first study. IV glucose almost doubled the gastric emptying time of the oral fat load (half emptying time (t1/2) 148 (SE 11) min after fat alone and 224 (SE 18) min after fat and IV glucose). Post-prandial plasma motilin levels were significantly depressed by IV glucose.(ABSTRACT TRUNCATED AT 250 WORDS)
Collapse
|