1. Liu Z, Liu F, Zeng Q, Yin X, Yang Y. Estimation of drinking water volume of laboratory animals based on image processing. Sci Rep 2023; 13:8602. [PMID: 37236974] [DOI: 10.1038/s41598-023-34460-w]
Abstract
This paper describes an image processing-based technique for measuring the volume of residual water in a laboratory mouse's drinking water bottle. A camera captures an image of the bottle, which is then processed to calculate the volume of water it contains. First, the GrabCut method separates foreground from background so that the background does not influence feature extraction. The Canny operator then detects the edges of the water bottle and of the liquid surface, and the probabilistic (cumulative probability) Hough transform identifies the bottle-edge and liquid-surface line segments in the edge image. Finally, a spatial coordinate system is constructed, the length of each line segment on the bottle is computed using plane analytic geometry, and the water volume is calculated from these lengths. By comparing image processing time, the number of pixels on the liquid level, and other indexes, the optimal illuminance and water bottle color were determined. Experimental results show that the average deviation rate of this method is below 5%, a significant improvement in accuracy and efficiency over traditional manual measurement.
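The pipeline above maps onto standard OpenCV building blocks. The following is a minimal illustrative sketch, not the authors' code: GrabCut for foreground extraction, Canny for edges, and the probabilistic Hough transform for line segments. The file name, bounding rectangle, and thresholds are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("bottle.jpg")                      # assumed input image
mask = np.zeros(img.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
rect = (50, 20, 300, 500)                           # assumed bottle bounding box (x, y, w, h)

# 1) GrabCut separates the bottle (foreground) from the background
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
bottle = img * fg[:, :, None]

# 2) Canny edge detection on the isolated bottle
edges = cv2.Canny(cv2.cvtColor(bottle, cv2.COLOR_BGR2GRAY), 50, 150)

# 3) Probabilistic Hough transform returns candidate line segments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60, minLineLength=40, maxLineGap=10)

# 4) A near-horizontal segment inside the bottle is taken as the liquid surface;
#    its height relative to the bottle edges gives the liquid column length,
#    from which volume follows via the bottle's cross-sectional geometry.
if lines is not None:
    horizontal = [l[0] for l in lines if abs(l[0][3] - l[0][1]) < 5]
    print(f"{len(horizontal)} candidate liquid-surface segments")
```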
Affiliation(s)
- Zhihai Liu: College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- Feiyi Liu: College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, 266590, China
- Qingliang Zeng: College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, 266590, China; College of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Xiang Yin: College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- Yang Yang: College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, 266590, China
2. Blum ME, Stewart KM, Cox M, Shoemaker KT, Bennett JR, Sullivan BW, Wakeling BF, Bleich VC. Variation in diet of desert bighorn sheep around parturition: Tradeoffs associated with parturition. Front Ecol Evol 2023. [DOI: 10.3389/fevo.2022.1071771]
Abstract
Selection of forage and habitats is driven by the nutritional needs of individuals. Immediately following parturition, some species may sacrifice the nutritional quality of forage for the mother in favor of the safety of offspring (a risk-averse strategy). We studied diet quality and forage selection by bighorn sheep before and after parturition to determine how the nutritional demands of rearing offspring influenced forage acquisition, using desert bighorn sheep, Ovis canadensis nelsoni, to investigate that potential tradeoff. We captured and radio-collared female bighorn sheep from 2016 to 2018 and used vaginal implant transmitters (VITs) in pregnant females to identify parturition and to capture and radio-collar neonates for monitoring survival of young. We collected fecal samples throughout the breeding season and across the year to characterize diet quality and composition during those periods. We determined diet quality and composition for pre-parturient females, females provisioning offspring, females that lost offspring, and non-pregnant individuals using fecal nitrogen and DNA metabarcoding analyses. Additionally, we compared the diet quality and composition of offspring and adult females during the spring as well as the summer and winter months. Our results indicated differences in diet quality between individuals provisioning offspring and those whose offspring had died: females provisioning dependent young had lower-quality diets than those that lost their offspring. Diet composition among those groups was also markedly different; females that had lost an offspring had a more diverse diet than did females with dependent young. Diet quality differed among seasons, with offspring and adult females having higher-quality diets during the spring months and quality decreasing as the year progressed. Diet diversity was similar across seasons, although the spring months tended to be most diverse. Our results support tradeoffs associated with risk-averse strategies made by adult females around parturition. Nutritional quality of forage was linked to provisioning status, indicating that females were trading diet quality for the safety of offspring, whereas females whose offspring had died selected high-quality forages. These results help explain the habitat selection observed in mountain ungulates around parturition and provide further insight into the evolutionary processes and adaptive significance exhibited by these specialized artiodactyls.
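The study compares diet diversity between groups from DNA metabarcoding data; the abstract does not state which diversity metric was used, so the sketch below uses the Shannon index purely as an illustrative measure, with made-up read counts.

```python
import math

def shannon_diversity(read_counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over taxa with nonzero reads."""
    total = sum(read_counts.values())
    return -sum((n / total) * math.log(n / total) for n in read_counts.values() if n > 0)

# hypothetical per-taxon read counts for two groups of females
provisioning = {"grass_A": 900, "shrub_B": 80, "forb_C": 20}
lost_offspring = {"grass_A": 400, "shrub_B": 300, "forb_C": 200, "forb_D": 100}

print(round(shannon_diversity(provisioning), 2))     # lower diversity
print(round(shannon_diversity(lost_offspring), 2))   # higher diversity
```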
3. Khan A, Asim W, Ulhaq A, Robinson RW. A deep semantic vegetation health monitoring platform for citizen science imaging data. PLoS One 2022; 17:e0270625. [PMID: 35895741] [PMCID: PMC9328533] [DOI: 10.1371/journal.pone.0270625]
Abstract
Automated monitoring of vegetation health in a landscape is often based on calculating values of various vegetation indices over a period of time. However, such approaches tend to estimate vegetational change inaccurately because index values rely heavily on vegetation colour attributes and on the availability of multi-spectral bands. Colour attributes are sensitive to seasonal variations and to the imaging device, leading to false or inaccurate change detection, and multi-spectral imagery is a strong assumption in a citizen science project. In this article, we build upon our previous work on the Semantic Vegetation Index (SVI) and expand it into a semantic vegetation health monitoring platform for large landscapes. Unlike our previous work, we use RGB images of the Australian landscape in a quarterly series spanning six years (2015-2020). The SVI is based on deep semantic segmentation and is integrated with a citizen science project (Fluker Post) for automated environmental monitoring, which has collected thousands of vegetation images shared by visitors at around 168 points across Australian regions over six years. This paper first uses a deep learning-based semantic segmentation model to classify vegetation in repeated photographs. A semantic vegetation index is then calculated and plotted as a time series to reflect seasonal variations and environmental impacts. The results show variational trends of vegetation cover for each year, and the semantic segmentation model performed well in calculating vegetation cover from semantic pixels (overall accuracy = 97.7%). This work addresses a number of problems related to changes in viewpoint, scale, zoom, and season in order to normalise RGB image data collected from different imaging devices.
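Assuming the per-image index reduces to the fraction of pixels the segmentation model labels as vegetation (one reading of "vegetation cover based on semantic pixels", not a definition taken from the paper), a minimal sketch of the calculation and its time series is:

```python
import numpy as np

VEGETATION_CLASSES = {1}  # assumed label id(s) for vegetation in the segmentation output

def semantic_vegetation_index(label_mask: np.ndarray) -> float:
    """SVI for one segmented photo: vegetation pixels / total pixels."""
    veg = np.isin(label_mask, list(VEGETATION_CLASSES)).sum()
    return veg / label_mask.size

# quarterly series of segmentation masks for one monitoring point (placeholders)
masks = [np.random.randint(0, 2, (512, 512)) for _ in range(24)]
svi_series = [semantic_vegetation_index(m) for m in masks]
print([round(v, 3) for v in svi_series[:4]])   # plot this series to see seasonal trends
```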
Affiliation(s)
- Asim Khan: The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia
- Warda Asim: The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia
- Anwaar Ulhaq: The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia; School of Computing and Mathematics, Charles Sturt University, Port Macquarie, NSW, Australia
- Randall W. Robinson: The Institute for Sustainable Industries and Liveable Cities (ISILC), College of Engineering and Science, Victoria University, Melbourne, Australia
4. Multiple UAV Flights across the Growing Season Can Characterize Fine Scale Phenological Heterogeneity within and among Vegetation Functional Groups. Remote Sens 2022. [DOI: 10.3390/rs14051290]
Abstract
Grasslands and shrublands exhibit pronounced spatial and temporal variability in structure and function, with differences in phenology that can be difficult to observe. Unpiloted aerial vehicles (UAVs) can measure vegetation spectral patterns relatively cheaply and repeatably at fine spatial resolution. We tested the ability of UAVs to measure phenological variability within vegetation functional groups and to improve classification accuracy at two sites in Montana, U.S.A., using four flight frequencies during the growing season. Classification accuracy based on reference data increased by 5–10% between a single flight and scenarios including all conducted flights: accuracy increased from 50.6 to 61.4% at the drier site, and from 59.0 to 64.4% at the more mesic, densely vegetated site. Peak green-up varied by 2–4 weeks within the scenes, and sparse vegetation classes had only a short detectable window of active photosynthesis; therefore, a single flight could not capture all vegetation that was active across the growing season. The multi-temporal analyses identified differences in the seasonal timing of green-up and senescence within herbaceous and sagebrush classes. Multiple UAV measurements can identify fine-scale phenological variability in complex mixed grass/shrub vegetation.
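A hedged sketch of the multi-temporal idea: per-pixel features from several flight dates are stacked before classification, so classes that green up at different times become separable. The data, class count, and random-forest classifier below are placeholders, not the study's actual workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

n_pixels, n_dates, n_bands = 2000, 4, 5          # e.g. 4 flights, 5 spectral bands
X = np.random.rand(n_pixels, n_dates * n_bands)  # placeholder stacked per-pixel features
y = np.random.randint(0, 6, n_pixels)            # placeholder functional-group labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# overall accuracy against held-out reference data, as reported in the paper
print("overall accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```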
5. Assessing Forest Phenology: A Multi-Scale Comparison of Near-Surface (UAV, Spectral Reflectance Sensor, PhenoCam) and Satellite (MODIS, Sentinel-2) Remote Sensing. Remote Sens 2021. [DOI: 10.3390/rs13081597]
Abstract
The monitoring of forest phenology with near-surface sensors such as Unmanned Aerial Vehicles (UAVs), PhenoCams, and Spectral Reflectance Sensors (SRS), rather than satellite sensors alone, has recently gained significant attention in remote sensing and vegetation phenology. However, exploring different aspects of forest phenology from these sensors and comparing their vegetation index (VI) time series remains a challenge. Accordingly, this research explores the potential of near-surface sensors to track the temporal dynamics of phenology, cross-compares their results against satellite observations (MODIS, Sentinel-2), and validates satellite-derived phenology. Time series of the Normalized Difference Vegetation Index (NDVI), Green Chromatic Coordinate (GCC), and Normalized Difference of Green and Red (VIgreen) indices were extracted from both near-surface and satellite sensor platforms. Regression analysis between the NDVI time series from different sensors showed high Pearson's correlation coefficients (r > 0.75). Despite the good correlations, there was a remarkable offset and significant differences in slope during the green-up and senescence periods. SRS showed the most distinctive NDVI profile and differed from the other sensors. PhenoCam GCC tracked canopy green-up better than the other indices, with a well-defined start, end, and peak of season, and was most closely correlated (r > 0.93) with the satellites, while SRS-based VIgreen showed the weakest correlation (r = 0.58) with Sentinel-2. Phenophase transition dates were estimated and validated against visual inspection of the PhenoCam data. The start of spring (SOS) and end of spring (EOS) could be predicted with an accuracy of <3 days with GCC, while these metrics from VIgreen and NDVI showed a slightly higher bias of 3–10 days. The observed agreement between UAV and satellite NDVI, and between PhenoCam and satellite GCC, suggests that it is feasible to use PhenoCams and UAVs for satellite data validation and upscaling. Thus, a combination of these near-surface vegetation metrics is promising for a holistic understanding of vegetation phenology from a canopy perspective and could serve as a good foundation for analysing the interoperability of different sensors for vegetation dynamics and change analysis.
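For reference, the three indices compared above have standard definitions: NDVI = (NIR - Red)/(NIR + Red), GCC = Green/(Red + Green + Blue), and VIgreen = (Green - Red)/(Green + Red). The sketch below computes them and a cross-sensor Pearson correlation; the band arrays and example values are placeholders, not data from the study.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def gcc(red, green, blue):
    return green / (red + green + blue)

def vigreen(green, red):
    return (green - red) / (green + red)

# cross-sensor comparison as in the paper: Pearson's r between two NDVI time series
phenocam_ndvi = np.array([0.32, 0.45, 0.61, 0.72, 0.68, 0.41])   # placeholder
sentinel_ndvi = np.array([0.30, 0.48, 0.65, 0.70, 0.66, 0.38])   # placeholder
r = np.corrcoef(phenocam_ndvi, sentinel_ndvi)[0, 1]
print(f"Pearson r = {r:.2f}")
```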
6. Aasen H, Kirchgessner N, Walter A, Liebisch F. PhenoCams for Field Phenotyping: Using Very High Temporal Resolution Digital Repeated Photography to Investigate Interactions of Growth, Phenology, and Harvest Traits. Front Plant Sci 2020; 11:593. [PMID: 32625216] [PMCID: PMC7314959] [DOI: 10.3389/fpls.2020.00593]
Abstract
Understanding the interaction of plant growth with environmental conditions is crucial to increasing the resilience of current cropping systems to a changing climate. Here, we investigate PhenoCams as a high-throughput approach for field phenotyping experiments, assessing the growth dynamics of many different genotypes simultaneously at high temporal (daily) resolution. First, we develop a method that extracts a daily phenological signal normalized for the different viewing geometries of the pixels within the images. Second, we investigate the extraction of the in-season traits of early vigor, leaf area index (LAI), and senescence dynamics from images of a soybean (Glycine max) field phenotyping experiment and show that it is possible to rate early vigor and senescence dynamics and to track LAI development between LAI 1 and 4.5. Third, we identify the start of green-up, green peak, senescence peak, and end of senescence in the phenological signal. Fourth, we extract the timing of these points and show how this information can be used to assess the impact of phenology on harvest traits (yield, thousand kernel weight, and oil content). The results demonstrate that PhenoCams can track growth dynamics and fill the gap of high temporal monitoring in field phenotyping experiments.
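A minimal sketch of extracting phenophase timing from a daily canopy signal (here a synthetic GCC curve), assuming a 7-day moving average and a 10%-of-amplitude threshold for the start of green-up; these conventions are common in PhenoCam processing but are assumptions here, not the paper's exact method.

```python
import numpy as np

# synthetic daily GCC: green-up around day 60, senescence around day 120
days = np.arange(1, 151)
gcc = 0.32 + 0.10 / (1 + np.exp(-(days - 60) / 6)) - 0.10 / (1 + np.exp(-(days - 120) / 6))

# 7-day moving average; 'valid' avoids edge artefacts, so align the day axis
kernel = np.ones(7) / 7
gcc_smooth = np.convolve(gcc, kernel, mode="valid")
days_s = days[3:-3]

# phenophase points: first day above 10% of the seasonal amplitude, and the green peak
amp = gcc_smooth.max() - gcc_smooth.min()
start_of_greenup = days_s[np.argmax(gcc_smooth > gcc_smooth.min() + 0.1 * amp)]
green_peak = days_s[np.argmax(gcc_smooth)]
print("start of green-up (day):", start_of_greenup, "| green peak (day):", green_peak)
```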
Affiliation(s)
- Helge Aasen: Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zürich, Zurich, Switzerland
- Norbert Kirchgessner: Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zürich, Zurich, Switzerland
- Achim Walter: Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zürich, Zurich, Switzerland
- Frank Liebisch: Group of Crop Science, Department of Environmental Systems Science, Institute of Agricultural Sciences, ETH Zürich, Zurich, Switzerland; Water Protection and Substance Flows, Research Division Agroecology and Environment, Agroscope, Zurich, Switzerland
7. Comparison of Landsat and Land-Based Phenology Camera Normalized Difference Vegetation Index (NDVI) for Dominant Plant Communities in the Great Basin. Sensors 2019; 19:1139. [PMID: 30845746] [PMCID: PMC6427513] [DOI: 10.3390/s19051139]
Abstract
Plant phenology is important for ecological interactions: the timing and development of green leaves, plant maturity, and senescence affect the biophysical interactions of plants with the environment. In this study we explored the agreement between land-based camera and satellite-based phenology metrics for quantifying plant phenology and phenophase dates in five plant community types characteristic of the semi-arid cold desert region of the Great Basin, using three years of data. We calculated the Normalized Difference Vegetation Index (NDVI) for both land-based cameras (i.e., phenocams) and Landsat imagery. NDVI from camera images was calculated from a standard RGB (red, green, and blue) image and a near-infrared (NIR) plus RGB image: the red digital number (DN) and the NIR DN were extracted from images taken a few seconds apart. Landsat has a spatial resolution of 30 m, while phenocam data can be analyzed at the single-pixel level at the scale of cm2 or as area-averaged regions at scales up to 1 km2. For this study, phenocam regions of interest were used that approximated the scale of at least one Landsat pixel. In the tall-statured pinyon and juniper woodland sites, phenocam and Landsat NDVI did not agree, even after National Agricultural Imagery Program (NAIP) imagery was used to account for the fractional coverage of pinyon and juniper versus interspace in the phenocam data; Landsat NDVI appeared to be dominated by the signal from the interspace and was insensitive to subtle changes in the pinyon and juniper tree canopy. However, for short-statured sagebrush shrub and meadow communities, there was good agreement between phenocam and Landsat NDVI, as reflected in high Pearson's correlation coefficients (r > 0.75). Because phenocams acquire images daily, versus the 16-day return interval of Landsat, phenocam data were more useful for determining important phenophase dates: start of season, peak of season, and end of season. More specific species-level information can be obtained with the high temporal resolution of phenocams, but only for a limited number of sites, while Landsat provides a multi-decadal history and spatial coverage that is unmatched by other platforms. The agreement between Landsat and phenocam NDVI for short-statured plant communities of the Great Basin shows promise for monitoring landscape- and regional-level plant phenology across large areas and time periods, with phenocams providing a more comprehensive understanding of plant phenology at finer spatial scales and Landsat extending the historical record of observations.
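A small sketch of the camera NDVI calculation described above, pairing the red digital number from the RGB frame with the NIR digital number from the NIR-enabled frame taken seconds later. The file names, the channel holding the NIR signal, and the region of interest are assumptions.

```python
import numpy as np
from PIL import Image

# two frames acquired a few seconds apart: a standard RGB image and an NIR-enabled image
rgb = np.asarray(Image.open("site_rgb.jpg"), dtype=np.float64)
nir_rgb = np.asarray(Image.open("site_nir.jpg"), dtype=np.float64)

# region of interest approximating at least one Landsat pixel (assumed coordinates)
roi = (slice(200, 400), slice(100, 500))
red_dn = rgb[..., 0][roi]        # red digital numbers from the RGB frame
nir_dn = nir_rgb[..., 0][roi]    # NIR digital numbers; channel index is an assumption

ndvi = (nir_dn - red_dn) / (nir_dn + red_dn + 1e-9)
print("mean camera NDVI over ROI:", round(float(ndvi.mean()), 3))
```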
8. Bayr U, Puschmann O. Automatic detection of woody vegetation in repeat landscape photographs using a convolutional neural network. Ecol Inform 2019. [DOI: 10.1016/j.ecoinf.2019.01.012]
9. Optimizing the Timing of Unmanned Aerial Vehicle Image Acquisition for Applied Mapping of Woody Vegetation Species Using Feature Selection. Remote Sens 2017. [DOI: 10.3390/rs9111130]
10. Phenocams Bridge the Gap between Field and Satellite Observations in an Arid Grassland Ecosystem. Remote Sens 2017. [DOI: 10.3390/rs9101071]