1
Klingström T, Zonabend König E, Zwane AA. Beyond the hype: using AI, big data, wearable devices, and the internet of things for high-throughput livestock phenotyping. Brief Funct Genomics 2025; 24:elae032. PMID: 39158344; PMCID: PMC11735752; DOI: 10.1093/bfgp/elae032.
Abstract
Phenotyping of animals is a routine task in agriculture that can provide large datasets for the functional annotation of genomes. Using the livestock farming sector to study complex traits enables genetics researchers to benefit fully from the digital transformation of society, as economies of scale substantially reduce the cost of phenotyping animals on farms. In the agricultural sector, genomics has transitioned towards a model of 'genomics without the genes', as a large proportion of the genetic variation in animals can be modelled using the infinitesimal model for genomic breeding evaluations. Combined with third-generation sequencing, which is creating pan-genomes for livestock, the digital infrastructure for trait collection and precision farming provides a unique opportunity for high-throughput phenotyping and the study of complex traits in a controlled environment. The emphasis on cost-efficient data collection means that mobile phones and computers have become ubiquitous for large-scale data collection, while the majority of recorded traits can still be recorded manually with limited training or tools. This is especially valuable in low- and middle-income countries and in settings where indigenous breeds are kept on farms that preserve more traditional farming methods. Digitalization is therefore an important enabler of high-throughput phenotyping both for smaller livestock herds with limited technology investments and for large-scale commercial operations. It is demanding for individual researchers to keep up with the opportunities created by the rapid advances in digitalization for livestock farming, whether or not they specialize in livestock. This review provides an overview of the current status of key enabling technologies for precision livestock farming applicable to the functional annotation of genomes.
Affiliation(s)
- Tomas Klingström
- Department of Animal Biosciences, Swedish University of Agricultural Sciences, Uppsala, Sweden
- Avhashoni Agnes Zwane
- Department of Biochemistry, Genetics and Microbiology, University of Pretoria, Pretoria, South Africa
2
Reza MN, Lee KH, Habineza E, Samsuzzaman, Kyoung H, Choi YK, Kim G, Chung SO. RGB-based machine vision for enhanced pig disease symptoms monitoring and health management: a review. J Anim Sci Technol 2025; 67:17-42. PMID: 39974778; PMCID: PMC11833201; DOI: 10.5187/jast.2024.e111.
Abstract
The growing demands of sustainable, efficient, and welfare-conscious pig husbandry have necessitated the adoption of advanced technologies. Among these, RGB imaging and machine vision technology may offer a promising solution for early disease detection and proactive disease management in advanced pig husbandry practices. This review explores innovative applications for monitoring disease symptoms by assessing features that directly or indirectly indicate disease risk, as well as for tracking body weight and overall health. Machine vision and image processing algorithms enable the real-time detection of subtle changes in pig appearance and behavior that may signify potential health issues. Key indicators include skin lesions, inflammation, ocular and nasal discharge, and deviations in posture and gait, each of which can be detected non-invasively using RGB cameras. Moreover, when integrated with thermal imaging, RGB systems can detect fever, a reliable indicator of infection, while behavioral monitoring systems can track abnormal posture, reduced activity, and altered feeding and drinking habits, which are often precursors to illness. The technology also facilitates the analysis of respiratory symptoms such as coughing or sneezing (enabling early identification of respiratory diseases, one of the most significant challenges in pig farming) and the assessment of fecal consistency and color (providing valuable insights into digestive health). Early detection of disease or poor health supports proactive interventions, reducing mortality and improving treatment outcomes. Beyond direct symptom monitoring, RGB imaging and machine vision can indirectly assess disease risk by monitoring body weight, feeding behavior, and environmental factors such as overcrowding and temperature. However, further research is needed to refine the accuracy and robustness of algorithms in diverse farming environments. Ultimately, integrating RGB-based machine vision into existing farm management systems could provide continuous, automated surveillance, generating real-time alerts and actionable insights; these can support data-driven disease prevention strategies, reducing the need for mass medication and the development of antimicrobial resistance.
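The "altered feeding habits as a precursor to illness" idea above can be sketched as a rolling-baseline deviation check. This is an illustrative toy, not a method from the review; the 7-day window and 20% drop threshold are assumed tuning values.

```python
# Flag days on which an animal's feeding time drops well below its own
# recent baseline -- a crude stand-in for the behavioral early-warning
# systems the review describes.

def flag_feeding_anomalies(daily_minutes, window=7, drop_fraction=0.2):
    """Return indices of days where feeding time falls more than
    `drop_fraction` below the mean of the preceding `window` days.
    Both parameters are illustrative assumptions."""
    alerts = []
    for day in range(window, len(daily_minutes)):
        baseline = sum(daily_minutes[day - window:day]) / window
        if daily_minutes[day] < (1.0 - drop_fraction) * baseline:
            alerts.append(day)
    return alerts

# Ten days of stable feeding, then a sharp drop on day 10.
minutes = [62, 58, 60, 61, 59, 63, 60, 62, 61, 60, 35]
print(flag_feeding_anomalies(minutes))  # -> [10]
```

In practice such alerts would be one input among many (posture, activity, thermal readings) rather than a diagnosis on their own.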
Affiliation(s)
- Md Nasim Reza
- Department of Agricultural Machinery Engineering, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Kyu-Ho Lee
- Department of Agricultural Machinery Engineering, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Eliezel Habineza
- Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Samsuzzaman
- Department of Agricultural Machinery Engineering, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Hyunjin Kyoung
- Division of Animal and Dairy Science, Chungnam National University, Daejeon 34134, Korea
- Gookhwan Kim
- National Institute of Agricultural Sciences, Rural Development Administration, Jeonju 54875, Korea
- Sun-Ok Chung
- Department of Agricultural Machinery Engineering, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
3
Pacheco HA, Hernandez RO, Chen SY, Neave HW, Pempek JA, Brito LF. Invited review: Phenotyping strategies and genetic background of dairy cattle behavior in intensive production systems-From trait definition to genomic selection. J Dairy Sci 2025; 108:6-32. PMID: 39389298; DOI: 10.3168/jds.2024-24953.
Abstract
Understanding and assessing dairy cattle behavior is critical for developing sustainable breeding programs and management practices. The behavior of individual animals can provide valuable information on their health and welfare status, improve reproductive management, and predict efficiency traits such as feed efficiency and milking efficiency. Routine genetic evaluations of animal behavior traits can contribute to optimizing breeding and management strategies for dairy cattle but require the identification of traits that capture the most important biological processes involved in behavioral responses. These traits should be heritable, repeatable, and measured in noninvasive and cost-effective ways in many individuals from the breeding populations or related reference populations. Although behavior traits are heritable in dairy cattle populations, they are highly polygenic, with no known major genes influencing their phenotypic expression. Genetically selecting dairy cattle based on their behavior can be advantageous because of their relationship with other key traits such as animal health, welfare, and productive efficiency, as well as animal and handler safety. Trait definition and longitudinal data collection are still key challenges for breeding for behavioral responses in dairy cattle. However, the more recent developments and adoption of precision technologies on dairy farms provide avenues for more objective phenotyping and genetic selection of behavior traits. Furthermore, there is still a need to standardize phenotyping protocols for existing traits and develop guidelines for recording novel behavioral traits and integrating multiple data sources. This review gives an overview of the most common indicators of dairy cattle behavior, summarizes the main methods used for analyzing animal behavior in commercial settings, describes the genetic and genomic background of previously defined behavioral traits, and discusses strategies for breeding and improving behavior traits, coupled with future opportunities for genetic selection for improved behavioral responses.
Affiliation(s)
- Hendyel A Pacheco
- Department of Animal Sciences, Purdue University, West Lafayette, IN 47907
- Rick O Hernandez
- Department of Animal Sciences, Purdue University, West Lafayette, IN 47907
- Shi-Yi Chen
- Department of Animal Sciences, Purdue University, West Lafayette, IN 47907; Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, Sichuan 611130, China
- Heather W Neave
- Department of Animal Sciences, Purdue University, West Lafayette, IN 47907
- Jessica A Pempek
- USDA-ARS, Livestock Behavior Research Unit, West Lafayette, IN 47907
- Luiz F Brito
- Department of Animal Sciences, Purdue University, West Lafayette, IN 47907
4
Alapati R, Renslo B, Jackson L, Moradi H, Oliver JR, Chowdhury M, Vyas T, Nieves AB, Lawrence A, Wagoner SF, Rouse D, Larsen CG, Wang G, Bur AM. Predicting Therapeutic Response to Hypoglossal Nerve Stimulation Using Deep Learning. Laryngoscope 2024; 134:5210-5216. PMID: 38934474; PMCID: PMC11563902; DOI: 10.1002/lary.31609.
Abstract
OBJECTIVES To develop and validate machine learning (ML) and deep learning (DL) models using drug-induced sleep endoscopy (DISE) images to predict the therapeutic efficacy of hypoglossal nerve stimulator (HGNS) implantation. METHODS Patients who underwent DISE and subsequent HGNS implantation at a tertiary care referral center were included. Six DL models and five ML algorithms were trained on images of the base of tongue (BOT) and velopharynx (VP) from patients classified as responders or non-responders by Sher's criteria (≥50% reduction in apnea-hypopnea index (AHI) and AHI < 15 events/h). Precision, recall, F1 score, and overall accuracy were evaluated as measures of performance. RESULTS In total, 25,040 images from 127 patients were included, of which 16,515 (69.3%) were from responders and 8,262 (30.7%) from non-responders. Models trained on the VP dataset had greater overall accuracy than those trained on BOT images alone or on combined VP and BOT image sets, suggesting that VP images contain discriminative features for identifying therapeutic efficacy. The VGG-16 DL model had the best overall performance on the VP image set, with high training accuracy (0.833), F1 score (0.78), and recall (0.883). Among ML models, the logistic regression model had the greatest accuracy (0.685) and F1 score (0.813). CONCLUSION Deep neural networks have potential to predict HGNS therapeutic efficacy from DISE images, facilitating better patient selection for implantation. Multi-institutional data and image sets will allow the development of generalizable predictive models. LEVEL OF EVIDENCE NA. Laryngoscope, 134:5210-5216, 2024.
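The performance measures quoted in this abstract follow the standard confusion-matrix definitions. A minimal sketch (the counts below are invented for illustration, not taken from the study):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)            # of predicted responders, how many were right
    recall = tp / (tp + fn)               # of true responders, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts for a responder/non-responder classifier.
p, r, f1, acc = classification_metrics(tp=80, fp=20, fn=10, tn=40)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f} accuracy={acc:.2f}")
```

Because the dataset here is imbalanced (about 69% responders), F1 and per-class recall are more informative than raw accuracy, which a majority-class predictor could inflate.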
Affiliation(s)
- Rahul Alapati
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
- Bryan Renslo
- Thomas Jefferson University, Department of Otolaryngology-Head and Neck Surgery, Philadelphia, PA, USA
- Laura Jackson
- University of Kansas School of Medicine, Kansas City, KS, USA
- Hanna Moradi
- University of Kansas School of Medicine, Kansas City, KS, USA
- Jamie R. Oliver
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
- Tejas Vyas
- Toronto Metropolitan University, Toronto, ON, Canada
- Antonio Bon Nieves
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
- Amelia Lawrence
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
- Sarah F. Wagoner
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
- David Rouse
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
- Christopher G. Larsen
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
- Ganghui Wang
- Toronto Metropolitan University, Toronto, ON, Canada
- Andrés M. Bur
- University of Kansas, Department of Otolaryngology-Head and Neck Surgery, Kansas City, KS, USA
5
Choudhary RK, Sunil Kumar BV, Mukhopadhyay CS, Kashyap N, Sharma V, Singh N, Salajegheh Tazerji S, Kalantari R, Hajipour P, Malik YS. Animal Wellness: The Power of Multiomics and Integrative Strategies: Multiomics in Improving Animal Health. Vet Med Int 2024; 2024:4125118. PMID: 39484643; PMCID: PMC11527549; DOI: 10.1155/2024/4125118.
Abstract
The livestock industry faces significant challenges, with disease outbreaks being a particularly devastating issue. These diseases can disrupt the food supply chain and the livelihoods of those involved in the sector. To address this, there is a growing need to enhance the health and well-being of livestock animals, ultimately improving their performance while minimizing their environmental impact. To tackle the considerable challenge posed by disease epidemics, multiomics approaches offer an excellent opportunity for scientists, breeders, and policymakers to gain a comprehensive understanding of animal biology, pathogens, and their genetic makeup. This understanding is crucial for enhancing the health of livestock animals. Multiomic approaches, including phenomics, genomics, epigenomics, metabolomics, proteomics, transcriptomics, microbiomics, and metaproteomics, are widely employed to assess and enhance animal health. High-throughput phenotypic data collection allows for the measurement of various fitness traits, both discrete and continuous, which, when mathematically combined, define the overall health and resilience of animals, including their ability to withstand diseases. Omics methods are routinely used to identify genes involved in host-pathogen interactions, assess fitness traits, and pinpoint animals with disease resistance. Genome-wide association studies (GWAS) help identify the genetic factors associated with health status, heat stress tolerance, disease resistance, and other health-related characteristics, including the estimation of breeding value. Furthermore, the interaction between hosts and pathogens, as observed through the assessment of host gut microbiota, plays a crucial role in shaping animal health and, consequently, animal performance. Integrating and analyzing heterogeneous datasets to gain deeper insights into biological systems is a challenging task that necessitates innovative tools. Initiatives like MiBiOmics, which facilitate the visualization, analysis, integration, and exploration of multiomics data, are expected to improve prediction accuracy and identify robust biomarkers linked to animal health. In this review, we discuss the details of multiomics concerning the health and well-being of livestock animals.
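At its core, the GWAS scan mentioned above tests each marker for association with the phenotype. A minimal single-SNP sketch under the additive model, with toy genotypes and phenotypes (real analyses use mixed models, covariates, and multiple-testing correction):

```python
# Additive single-SNP association: regress phenotype on allele count
# (0/1/2). A full GWAS repeats this for every marker and corrects the
# resulting p-values for multiple testing. Toy data, not from the review.

def additive_effect(genotypes, phenotypes):
    """Least-squares slope of phenotype on allele count."""
    n = len(genotypes)
    g_mean = sum(genotypes) / n
    y_mean = sum(phenotypes) / n
    cov = sum((g - g_mean) * (y - y_mean)
              for g, y in zip(genotypes, phenotypes))
    var = sum((g - g_mean) ** 2 for g in genotypes)
    return cov / var

genotypes = [0, 0, 1, 1, 1, 2, 2, 2, 0, 2]                     # allele counts
phenotype = [1.0, 1.2, 1.8, 2.1, 1.9, 2.9, 3.1, 2.8, 0.9, 3.0]  # trait values
print(f"estimated effect per allele: {additive_effect(genotypes, phenotype):.2f}")
```

The slope is the estimated effect of each additional copy of the allele on the trait; markers with large, statistically significant effects become candidate loci for the health traits discussed above.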
Affiliation(s)
- Ratan Kumar Choudhary
- Department of Bioinformatics, Animal Stem Cells Laboratory, College of Animal Biotechnology, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana 141004, Punjab, India
- Sunil Kumar B. V.
- Department of Animal Biotechnology, Proteomics & Metabolomics Lab, College of Animal Biotechnology, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana 141004, Punjab, India
- Chandra Sekhar Mukhopadhyay
- Department of Bioinformatics, Genomics Lab, College of Animal Biotechnology, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana 141004, Punjab, India
- Neeraj Kashyap
- Department of Bioinformatics, Genomics Lab, College of Animal Biotechnology, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana 141004, Punjab, India
- Vishal Sharma
- Department of Animal Biotechnology, Reproductive Biotechnology Lab, College of Animal Biotechnology, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana 141004, Punjab, India
- Nisha Singh
- Department of Bioinformatics, College of Animal Biotechnology, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana 141004, Punjab, India
- Sina Salajegheh Tazerji
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Roozbeh Kalantari
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Pouneh Hajipour
- Department of Avian Diseases, Faculty of Veterinary Medicine, University of Tehran, Tehran, Iran
- Department of Clinical Science, Faculty of Veterinary Medicine, Shahid Bahonar University of Kerman, Kerman, Iran
- Yashpal Singh Malik
- Department of Microbial and Environmental Biotechnology, College of Animal Biotechnology, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana 141004, Punjab, India
6
Guzman M, Geuther BQ, Sabnis GS, Kumar V. Highly accurate and precise determination of mouse mass using computer vision. Patterns (N Y) 2024; 5:101039. PMID: 39568644; PMCID: PMC11573914; DOI: 10.1016/j.patter.2024.101039.
Abstract
Changes in body mass are key indicators of health in humans and animals and are routinely monitored in animal husbandry and preclinical studies. In rodent studies, the current method of manually weighing the animal on a balance causes at least two issues. First, directly handling the animal induces stress, possibly confounding studies. Second, these data are static, limiting continuous assessment and obscuring rapid changes. A non-invasive, continuous method of monitoring animal mass would have utility in multiple biomedical research areas. We combine computer vision with statistical modeling to demonstrate the feasibility of determining mouse body mass by using video data. Our methods determine mass with a 4.8% error across genetically diverse mouse strains with varied coat colors and masses. This error is low enough to replace manual weighing in most mouse studies. We conclude that visually determining rodent mass enables non-invasive, continuous monitoring, improving preclinical studies and animal welfare.
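The core idea above can be reduced to a toy: predict mass from the animal's segmented silhouette area with a linear fit. The published work combines richer video features with statistical modeling; all numbers below are synthetic, and the single-feature linear model is a deliberate simplification.

```python
# Predict body mass from per-frame segmentation area with a simple
# linear fit -- a stripped-down stand-in for the statistical models
# described in the paper. All numbers are synthetic.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
    return a, y_mean - a * x_mean

areas = [5200.0, 6100.0, 6900.0, 7800.0, 8700.0, 9500.0]  # px^2 per mouse
masses = [18.0, 21.0, 24.5, 27.0, 31.0, 33.5]             # scale mass, g

a, b = fit_line(areas, masses)
predicted = a * 7200.0 + b
print(f"predicted mass for a 7200 px^2 silhouette: {predicted:.1f} g")
```

In the video setting, averaging such per-frame predictions over many frames and postures is what drives the error down to the few-percent range reported above.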
Affiliation(s)
- Malachy Guzman
- The Jackson Laboratory, Bar Harbor, ME, USA
- Carleton College, Northfield, MN, USA
- Vivek Kumar
- The Jackson Laboratory, Bar Harbor, ME, USA
- School of Graduate Biomedical Sciences, Tufts University School of Medicine, Boston, MA, USA
- Graduate School of Biomedical Sciences and Engineering, University of Maine, Orono, ME, USA
7
Aravamuthan S, Cernek P, Anklam K, Döpfer D. Comparative analysis of computer vision algorithms for the real-time detection of digital dermatitis in dairy cows. Prev Vet Med 2024; 229:106235. PMID: 38833805; DOI: 10.1016/j.prevetmed.2024.106235.
Abstract
Digital dermatitis (DD) is a bovine claw disease responsible for ulcerative lesions on the planar aspect of the hoof. DD is associated with massive herd outbreaks of lameness and influences cattle welfare and production. Early detection of DD can lead to prompt treatment and decrease lameness. Computer vision (CV) provides a unique opportunity to improve early detection. The study aims to train and compare applications for the real-time detection of DD in dairy cows. Nine CV models were trained for detection and scoring, compared using performance metrics and inference time, and the best model was automated for real-time detection using images and video. Images were collected from commercial dairy farms while facing the interdigital space on the plantar surface of the foot. Images were scored for M-stages of DD by a trained investigator using the M-stage DD classification system, with distinct labels for hyperkeratosis (H) and proliferation (P). Two sets of images were compiled: the first dataset (Dataset 1) containing 1,177 M0/M4H and 1,050 M2/M2P images, and the second dataset (Dataset 2) containing 240 M0, 17 M2, 51 M2P, 114 M4H, and 108 M4P images. Models were trained to detect and score DD lesions and compared for precision, recall, and mean average precision (mAP), in addition to inference time in frames per second (FPS). Seven of the nine CV models performed well compared to the ground truth of labeled images using Dataset 1: Faster R-CNN, Cascade R-CNN, YOLOv3, Tiny YOLOv3, YOLOv4, Tiny YOLOv4, and YOLOv5s achieved an mAP between 0.964 and 0.998, whereas the other two models, SSD and SSD Lite, yielded an mAP of 0.371 and 0.387, respectively. Overall, YOLOv4, Tiny YOLOv4, and YOLOv5s outperformed all other models with almost perfect precision, perfect recall, and a higher mAP. Tiny YOLOv4 outperformed all other models with respect to inference time at 333 FPS, followed by YOLOv5s at 133 FPS and YOLOv4 at 65 FPS. YOLOv4 and Tiny YOLOv4 performed better than YOLOv5s compared to the ground truth using Dataset 2, yielding similar mAPs of 0.896 and 0.895, respectively; however, Tiny YOLOv4 achieved both higher precision and recall than YOLOv4. Finally, Tiny YOLOv4 was able to detect DD lesions on a commercial dairy farm with high performance and speed. The proposed CV tool can be used for early detection and prompt treatment of DD in dairy cows. This result is a step towards applying CV algorithms to veterinary medicine and implementing real-time DD detection on dairy farms.
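Detection metrics like the mAP values above rest on matching predicted boxes to ground truth via intersection-over-union (IoU). A minimal sketch of that matching criterion (the boxes and the 0.5 threshold are illustrative, not from the study):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction usually counts as a true positive when IoU >= 0.5.
pred, truth = (50, 50, 150, 150), (60, 60, 160, 160)
print(f"IoU = {iou(pred, truth):.2f}")
```

mAP then averages precision over recall levels (and classes) using these IoU-based true/false-positive decisions, which is why it rewards both accurate localization and complete detection.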
Affiliation(s)
- Srikanth Aravamuthan
- Department of Medical Science, School of Veterinary Medicine, University of Wisconsin-Madison, 2015 Linden Drive, Madison, WI 53706, United States
- Preston Cernek
- Department of Medical Science, School of Veterinary Medicine, University of Wisconsin-Madison, 2015 Linden Drive, Madison, WI 53706, United States
- Kelly Anklam
- Department of Medical Science, School of Veterinary Medicine, University of Wisconsin-Madison, 2015 Linden Drive, Madison, WI 53706, United States
- Dörte Döpfer
- Department of Medical Science, School of Veterinary Medicine, University of Wisconsin-Madison, 2015 Linden Drive, Madison, WI 53706, United States
8
Ma W, Sun Y, Qi X, Xue X, Chang K, Xu Z, Li M, Wang R, Meng R, Li Q. Computer-Vision-Based Sensing Technologies for Livestock Body Dimension Measurement: A Survey. Sensors (Basel) 2024; 24:1504. PMID: 38475040; DOI: 10.3390/s24051504.
Abstract
Livestock's live body dimensions are a pivotal indicator of economic output. Manual measurement is labor-intensive and time-consuming, and it often elicits stress responses in the livestock. With the advancement of computer technology, techniques for measuring livestock live body dimensions have progressed rapidly, yielding significant research achievements. This paper presents a comprehensive review of recent advancements in livestock live body dimension measurement, emphasizing the crucial role of computer-vision-based sensors. The discussion covers three main aspects: sensing data acquisition, sensing data processing, and sensing data analysis. The common techniques, measurement procedures, and current research status of live body dimension measurement are introduced, along with a comparative analysis of their respective merits and drawbacks. Livestock data acquisition is the initial phase of live body dimension measurement, in which sensors are employed as data collection equipment to obtain information conducive to precise measurements. The acquired data then undergo processing, leveraging techniques such as 3D vision, computer graphics, image processing, and deep learning to calculate the measurements accurately. Lastly, this paper addresses the existing challenges in livestock live body dimension measurement in the livestock industry, highlights the potential contributions of computer-vision-based sensors, and predicts development trends in high-throughput live body dimension measurement techniques for livestock.
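The basic geometric step behind image-based body measurement is converting a pixel distance between anatomical keypoints into real-world units via a calibration factor. A minimal 2D sketch, with hypothetical keypoints and calibration values (real systems described in such surveys typically use depth cameras or 3D reconstruction to avoid perspective error):

```python
import math

# Calibration: a reference marker of known physical size in the image
# gives a cm-per-pixel conversion factor (hypothetical values).
marker_length_cm = 50.0
marker_length_px = 200.0
cm_per_px = marker_length_cm / marker_length_px   # 0.25 cm/px

# Hypothetical keypoints (px) for withers and tail base in a side view,
# e.g. from a pose-estimation model.
withers = (120.0, 310.0)
tail_base = (680.0, 330.0)

body_length_cm = math.dist(withers, tail_base) * cm_per_px
print(f"estimated body length: {body_length_cm:.1f} cm")
```

This flat calibration assumes the animal lies in the calibration plane; the depth-camera and 3D-vision pipelines the survey covers exist precisely to remove that assumption.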
Affiliation(s)
- Weihong Ma
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Yi Sun
- College of Information Engineering, Northwest A&F University, Xianyang 712199, China
- Xiangyu Qi
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Xianglong Xue
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Kaixuan Chang
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Zhankang Xu
- College of Information Engineering, Northwest A&F University, Xianyang 712199, China
- Mingyu Li
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Rong Wang
- College of Information Engineering, Northwest A&F University, Xianyang 712199, China
- Rui Meng
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Qifeng Li
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
9
Mora M, Piles M, David I, Rosa GJM. Integrating computer vision algorithms and RFID system for identification and tracking of group-housed animals: an example with pigs. J Anim Sci 2024; 102:skae174. PMID: 38908015; PMCID: PMC11245691; DOI: 10.1093/jas/skae174.
Abstract
Precision livestock farming aims to individually and automatically monitor animal activity to ensure animals' health, well-being, and productivity. Computer vision has emerged as a promising tool for this purpose. However, accurately tracking individuals using imaging remains challenging, especially in group housing, where animals may have similar appearances. Close interaction or crowding among animals can lead to the loss or swapping of animal IDs, compromising tracking accuracy. To address this challenge, we implemented a framework combining a tracking-by-detection method with a radio frequency identification (RFID) system. We tested this approach using twelve pigs in a single pen as an illustrative example. Three of the pigs had distinctive natural coat markings, enabling their visual identification within the group. The remaining pigs either shared similar coat color patterns or were entirely white, making them visually indistinguishable from each other. We employed the latest version of the You Only Look Once algorithm (YOLOv8) and BoT-SORT for detection and tracking, respectively. YOLOv8 was fine-tuned with a dataset of 3,600 images to detect and classify different pig classes, achieving a mean average precision of 99% across all classes. The fine-tuned YOLOv8 model and the BoT-SORT tracker were then applied to a 166.7-min video comprising 100,018 frames. Results showed that pigs with distinguishable coat color markings could be tracked 91% of the time on average. For pigs with similar coat color, the RFID system was used to identify individual animals when they entered the feeding station, and this RFID identification was linked to the image trajectory of each pig, both backward and forward. The two pigs with similar markings could be tracked for an average of 48.6 min, while the seven white pigs could be tracked for an average of 59.1 min. In all cases, the tracking time assigned to each pig matched the ground truth 90% of the time or more. Thus, our proposed framework enabled reliable tracking of group-housed pigs for extended periods, offering a promising alternative to the independent use of image or RFID approaches alone. This approach represents a significant step forward in combining multiple devices for animal identification, tracking, and traceability, particularly when homogeneous animals are kept in groups.
Affiliation(s)
- Mónica Mora
- Institute of Agrifood Research and Technology (IRTA) – Animal Breeding and Genetics, Barcelona 08140, Spain
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI 53706, USA
- Miriam Piles
- Institute of Agrifood Research and Technology (IRTA) – Animal Breeding and Genetics, Barcelona 08140, Spain
- Ingrid David
- GenPhySE, Université de Toulouse, INRAE, ENVT, Castanet Tolosan 31326, France
- Guilherme J M Rosa
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI 53706, USA
10
Guzman M, Geuther B, Sabnis G, Kumar V. Highly Accurate and Precise Determination of Mouse Mass Using Computer Vision. bioRxiv [Preprint] 2023:2023.12.30.573718. PMID: 38318203; PMCID: PMC10843158; DOI: 10.1101/2023.12.30.573718.
Abstract
Changes in body mass are a key indicator of health and disease in humans and model organisms. Animal body mass is routinely monitored in husbandry and preclinical studies. In rodent studies, the current best method requires manually weighing the animal on a balance, which has at least two consequences. First, direct handling of the animal induces stress and can have confounding effects on studies. Second, the acquired mass is static and not amenable to continuous assessment, so rapid mass changes can be missed. A noninvasive and continuous method of monitoring animal mass would have utility in multiple areas of biomedical research. Here, we test the feasibility of determining mouse body mass using video data. We combine computer vision methods with statistical modeling to demonstrate the feasibility of our approach. Our methods determine mouse mass with 4.8% error across highly genetically diverse mouse strains with varied coat colors and masses. This error is low enough to replace manual weighing with image-based assessment in most mouse studies. We conclude that visual determination of rodent mass using video enables noninvasive and continuous monitoring that can improve animal welfare and preclinical studies.
11
Elliott KC, Werkheiser I. A Framework for Transparency in Precision Livestock Farming. Animals (Basel) 2023; 13:3358. [PMID: 37958113 PMCID: PMC10648797 DOI: 10.3390/ani13213358] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Revised: 10/20/2023] [Accepted: 10/25/2023] [Indexed: 11/15/2023] Open
Abstract
As precision livestock farming (PLF) technologies emerge, it is important to consider their social and ethical dimensions. Reviews of PLF have highlighted the importance of considering ethical issues related to privacy, security, and welfare. However, little attention has been paid to ethical issues related to transparency regarding these technologies. This paper proposes a framework for developing responsible transparency in the context of PLF. It examines the kinds of information that could be ethically important to disclose about these technologies, the different audiences that might care about this information, the challenges involved in achieving transparency for these audiences, and some promising strategies for addressing these challenges. For example, with respect to the information to be disclosed, efforts to foster transparency could focus on: (1) information about the goals and priorities of those developing PLF systems; (2) details about how the systems operate; (3) information about implicit values that could be embedded in the systems; and/or (4) characteristics of the machine learning algorithms often incorporated into these systems. In many cases, this information is likely to be difficult to obtain or communicate meaningfully to relevant audiences (e.g., farmers, consumers, industry, and/or regulators). Some of the potential steps for addressing these challenges include fostering collaborations between the developers and users of PLF systems, developing techniques for identifying and disclosing important forms of information, and pursuing forms of PLF that can be responsibly employed with less transparency. Given the complexity of transparency and its ethical and practical importance, a framework for developing and evaluating transparency will be an important element of ongoing PLF research.
Affiliation(s)
- Kevin C. Elliott
  - Lyman Briggs College, Department of Fisheries and Wildlife, and Department of Philosophy, Michigan State University, East Lansing, MI 48825, USA
- Ian Werkheiser
  - Department of Philosophy, University of Texas Rio Grande Valley, Edinburg, TX 78539, USA

12
Marshall K, Poole J, Oyieng E, Ouma E, Kugonza DR. A farmer-friendly tool for estimation of weights of pigs kept by smallholder farmers in Uganda. Trop Anim Health Prod 2023; 55:219. [PMID: 37219661 DOI: 10.1007/s11250-023-03561-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Accepted: 03/29/2023] [Indexed: 05/24/2023]
Abstract
Pig keeping is important to the livelihoods of many rural Ugandans. Pigs are typically sold based on live weight or a carcass weight derived from it; however, this weight is commonly estimated due to the lack of access to scales. Here, we explore the development of a weigh band for more accurate weight determination and potentially increased farmer bargaining power on sale price. Pig weights and body measurements (heart girth, height, and length) were collected on 764 pigs of different ages, sexes, and breed types, from 157 smallholder pig-keeping households in Central and Western Uganda. Mixed-effects linear regression analyses, with household as a random effect and each body measurement as a fixed effect, were performed to determine the best single predictor of the cube root of weight (a transformation of weight for normality), for 749 pigs ranging between 0 and 125 kg. The most predictive single body measurement was heart girth, where weight in kg = (0.4011 + heart girth in cm × 0.0381)³. This model was found to be most suitable for pigs between 5 and 110 kg, notably more accurate than farmers' estimates, but still with somewhat broad confidence intervals (for example, ±11.5 kg for pigs with a predicted weight of 51.3 kg). We intend to pilot test a weigh band based on this model before deciding on whether it is suitable for wider scaling.
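The published girth-to-weight model can be applied directly. A minimal sketch, with the coefficients taken from the abstract (the function name is mine):

```python
def predicted_weight_kg(heart_girth_cm: float) -> float:
    """Pig live weight from heart girth, per the fitted model
    weight = (0.4011 + 0.0381 * girth)^3 (cube-root-of-weight regression).
    Stated as most suitable for pigs between 5 and 110 kg."""
    return (0.4011 + 0.0381 * heart_girth_cm) ** 3

# A pig with an 87 cm heart girth is predicted at roughly 51.3 kg,
# the same predicted weight used in the abstract's confidence-interval example.
print(round(predicted_weight_kg(87.0), 1))  # → 51.3
```

A weigh band implements exactly this: the girth scale on the band is printed directly in predicted kilograms.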
Affiliation(s)
- Karen Marshall
  - International Livestock Research Institute, P.O. Box 30709-00100, Nairobi, Kenya
- Jane Poole
  - International Livestock Research Institute, P.O. Box 30709-00100, Nairobi, Kenya
- Edwin Oyieng
  - International Livestock Research Institute, P.O. Box 30709-00100, Nairobi, Kenya
- Emily Ouma
  - International Livestock Research Institute, c/o Bioversity International, P.O. Box 24384, Kampala, Uganda
- Donald R Kugonza
  - School of Agricultural Sciences, College of Agricultural and Environmental Sciences, Makerere University, P.O. Box 7062, Kampala, Uganda

13
Varona L, González-Recio O. Invited review: Recursive models in animal breeding: Interpretation, limitations, and extensions. J Dairy Sci 2023; 106:2198-2212. [PMID: 36870846 DOI: 10.3168/jds.2022-22578] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 10/30/2022] [Indexed: 03/05/2023]
Abstract
Structural equation models allow causal effects between two or more variables to be considered and can postulate unidirectional (recursive models; RM) or bidirectional (simultaneous models) causality between variables. This review evaluated the properties of RM in animal breeding and how to interpret the genetic parameters and the corresponding estimated breeding values. In many cases, RM and mixed multitrait models (MTM) are statistically equivalent, although this equivalence depends on the assumptions about the variance-covariance matrices and on the restrictions imposed to achieve model identification. Inference under RM requires imposing some restrictions on the (co)variance matrix or on the location parameters. The estimates of the variance components and the breeding values can be transformed from RM to MTM, although the biological interpretation differs. In the MTM, the breeding values predict the full influence of the additive genetic effects on the traits and should be used for breeding purposes. In contrast, the RM breeding values express the additive genetic effect while holding the causal traits constant. The differences between the additive genetic effects in RM and MTM can be used to identify genomic regions that affect the additive genetic variation of traits directly or through mediation by another trait or traits. Furthermore, we presented some extensions of the RM that are useful for modeling quantitative traits with alternative assumptions. The equivalence of RM and MTM can be used to infer causal effects on sequentially expressed traits by manipulating the residual (co)variance matrix under the MTM. Further, RM can be implemented to analyze causality between traits that might differ among subgroups or within the parametric space of the independent traits. In addition, RM can be expanded to create models that introduce some degree of regularization in the recursive structure, which helps when estimating a large number of recursive parameters. Finally, RM can be used in some cases for operational reasons, even when there is no causality between traits.
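As a concrete illustration of the RM/MTM equivalence discussed above, consider the standard two-trait recursive formulation in which trait 1 causally affects trait 2 (the notation here is generic, not taken from the review):

```latex
\begin{aligned}
y_{1} &= \mathbf{x}_{1}'\mathbf{b}_{1} + u_{1} + e_{1},\\
y_{2} &= \lambda\, y_{1} + \mathbf{x}_{2}'\mathbf{b}_{2} + u_{2} + e_{2},
\end{aligned}
\qquad\text{or in matrix form}\qquad
\boldsymbol{\Lambda}\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{u} + \mathbf{e},
\quad
\boldsymbol{\Lambda} =
\begin{pmatrix} 1 & 0\\ -\lambda & 1 \end{pmatrix}.
```

Premultiplying by $\boldsymbol{\Lambda}^{-1}$ recovers an equivalent MTM with transformed parameters, e.g. transformed genetic (co)variances $\mathbf{G}^{*} = \boldsymbol{\Lambda}^{-1}\mathbf{G}\,\boldsymbol{\Lambda}^{-\top}$; this is the transformation of variance components and breeding values between RM and MTM that the abstract refers to.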
Affiliation(s)
- L Varona
  - Instituto Agroalimentario de Aragón (IA2), Facultad de Veterinaria, Universidad de Zaragoza, C/ Miguel Servet 177, 50013 Zaragoza, Spain
- O González-Recio
  - Departamento de mejora genética animal, INIA-CSIC, Ctra. de la Coruña km 7.5, 28040 Madrid, Spain

14
Huang L, Zhang W, Zhou W, Chen L, Liu G, Shi W. Behaviour, a potential bioindicator for toxicity analysis of waterborne microplastics: A review. Trends Analyt Chem 2023. [DOI: 10.1016/j.trac.2023.117044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/03/2023]
15
Ariede RB, Lemos CG, Batista FM, Oliveira RR, Agudelo JFG, Borges CHS, Iope RL, Almeida FLO, Brega JRF, Hashimoto DT. Computer vision system using deep learning to predict rib and loin yield in the fish Colossoma macropomum. Anim Genet 2023; 54:375-388. [PMID: 36756733 DOI: 10.1111/age.13302] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 01/04/2023] [Accepted: 01/26/2023] [Indexed: 02/10/2023]
Abstract
Computer vision systems (CVSs) are effective tools that enable large-scale phenotyping with a low-cost, non-invasive method that avoids animal stress. Economically important traits, such as rib and loin yield, are difficult to measure; the use of a CVS is therefore crucial to accurately predict several measures and allow their inclusion in breeding goals through indirect predictors. This study aimed (1) to validate a CVS based on a deep learning approach to automatically predict morphometric measurements in tambaqui and (2) to estimate genetic parameters for growth traits and body yield. Data from 365 individuals belonging to 11 full-sib families were evaluated. Seven growth traits were measured. After biometrics, each fish was processed into the following body regions: head, rib, loin, and R + L (rib + loin). For deep learning image segmentation, we adopted a method based on the instance segmentation of the Mask R-CNN (Region-based Convolutional Neural Network) model. Pearson's correlations between measurements obtained manually and those predicted automatically by the CVS were high and positive. Regarding classification performance, visible differences were detected in only about 3% of the images. Heritability estimates for growth and body yield traits ranged from low to high. The genetic correlations between the percentages of body parts and morphometric characteristics were favorable and high, except for head percentage, whose correlations were unfavorable. In conclusion, the CVS validated on this image dataset proved to be robust and can be used for large-scale phenotyping in tambaqui. The weights of the rib and loin are traits under moderate genetic control and should respond to selection. In addition, standard length and pelvis length can be used as efficient indirect selection criteria for body yield in this tambaqui population.
Affiliation(s)
- Raquel B Ariede
  - Aquaculture Center of Unesp, São Paulo State University, Jaboticabal, SP, Brazil
- Celma G Lemos
  - Aquaculture Center of Unesp, São Paulo State University, Jaboticabal, SP, Brazil
- Rubens R Oliveira
  - Aquaculture Center of Unesp, São Paulo State University, Jaboticabal, SP, Brazil
- John F G Agudelo
  - Aquaculture Center of Unesp, São Paulo State University, Jaboticabal, SP, Brazil
- Carolina H S Borges
  - Aquaculture Center of Unesp, São Paulo State University, Jaboticabal, SP, Brazil
- Rogério L Iope
  - Center for Scientific Computing, São Paulo State University, São Paulo, SP, Brazil
- José R F Brega
  - School of Sciences, São Paulo State University, Bauru, SP, Brazil
- Diogo T Hashimoto
  - Aquaculture Center of Unesp, São Paulo State University, Jaboticabal, SP, Brazil

16
Image Classification and Automated Machine Learning to Classify Lung Pathologies in Deceased Feedlot Cattle. Vet Sci 2023; 10:vetsci10020113. [PMID: 36851417 PMCID: PMC9960640 DOI: 10.3390/vetsci10020113] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 01/27/2023] [Accepted: 01/31/2023] [Indexed: 02/05/2023] Open
Abstract
Bovine respiratory disease (BRD) and acute interstitial pneumonia (AIP) are the main reported respiratory syndromes (RSs) causing significant morbidity and mortality in feedlot cattle. Recently, bronchopneumonia with an interstitial pattern (BIP) was described as a concerning emerging feedlot lung disease. Necropsies are imperative for diagnosing lung disease and pinpointing feedlot management sectors that require improvement. However, necropsies can be logistically challenging due to location and veterinarians' time constraints. Advances in technology allow images to be collected for veterinarians' asynchronous evaluation, thereby reducing these challenges. This study's goal was to develop image classification models using machine learning to determine RS diagnostic accuracy in right-lateral images of necropsied feedlot cattle lungs. Unaltered and cropped lung images were labeled using gross and histopathology diagnoses, generating four datasets: unaltered lung images labeled with gross diagnoses, unaltered lung images labeled with histopathological diagnoses, cropped images labeled with gross diagnoses, and cropped images labeled with histopathological diagnoses. Datasets were exported to create image classification models, and a best trial was selected for each model based on accuracy. Gross diagnosis accuracies ranged from 39 to 41% for unaltered and cropped images. Labeling images with histopathology diagnoses did not improve average accuracies (34-38% for unaltered and cropped images). Moderately high sensitivities were attained for BIP (60-100%) and BRD (20-69%) compared to AIP (0-23%). The models developed still require fine-tuning; however, they are a first step towards assisting veterinarians with lung disease diagnosis in field necropsies.
17
Wang Y, Chen YL, Huang CM, Chen LT, Liao LD. Visible CCD Camera-Guided Photoacoustic Imaging System for Precise Navigation during Functional Rat Brain Imaging. BIOSENSORS 2023; 13:107. [PMID: 36671941 PMCID: PMC9856069 DOI: 10.3390/bios13010107] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/20/2022] [Accepted: 12/29/2022] [Indexed: 06/17/2023]
Abstract
In photoacoustic (PA) imaging, tissue absorbs specific wavelengths of light. The absorbed energy results in thermal expansion that generates ultrasound waves that are reconstructed into images. Existing commercial PA imaging systems for preclinical brain imaging are limited by imprecise positioning capabilities and inflexible user interfaces. We introduce a new visible charge-coupled device (CCD) camera-guided photoacoustic imaging (ViCPAI) system that integrates an ultrasound (US) transducer and a data acquisition platform with a CCD camera for positioning. The CCD camera accurately positions the US probe at the measurement location. The programmable MATLAB-based platform has an intuitive user interface. In vitro carbon fiber and in vivo animal experiments were performed to investigate the precise positioning and imaging capabilities of the ViCPAI system. We demonstrated real-time capturing of bilateral cerebral hemodynamic changes during (1) forelimb electrical stimulation under normal conditions, (2) forelimb stimulation after right brain focal photothrombotic ischemia (PTI) stroke, and (3) progression of KCl-induced cortical spreading depression (CSD). The ViCPAI system accurately located target areas and achieved reproducible positioning, which is crucial in animal and clinical experiments. In animal experiments, the ViCPAI system was used to investigate bilateral cerebral cortex responses to left forelimb electrical stimulation before and after stroke, showing that the CBV and SO2 in the right primary somatosensory cortex of the forelimb (S1FL) region were significantly changed by left forelimb electrical stimulation before stroke. No CBV or SO2 changes were observed in the bilateral cortex in the S1FL area in response to left forelimb electrical stimulation after stroke. While monitoring CSD progression, the ViCPAI system accurately located the S1FL area and returned to the same position after the probe moved, demonstrating reproducible positioning and reduced positioning errors. The ViCPAI system utilizes the real-time precise positioning capability of CCD cameras to overcome various challenges in preclinical and clinical studies.
Affiliation(s)
- Yuhling Wang
  - Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, No.35, Keyan Road, Zhunan Town, Miaoli County 350, Taiwan
- Yu-Lin Chen
  - Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, No.35, Keyan Road, Zhunan Town, Miaoli County 350, Taiwan
- Chih-Mao Huang
  - Department of Biological Science and Technology, National Yang Ming Chiao Tung University, No.75 Po-Ai St., Hsinchu 300, Taiwan
- Li-Tzong Chen
  - Department of Internal Medicine, Kaohsiung Medical University Hospital and Center for Cancer Research, Kaohsiung Medical University, No.100, Tzyou 1st Road, Sanmin Dist., Kaohsiung City 80756, Taiwan
  - National Institute of Cancer Research, National Health Research Institutes, No.35, Keyan Road, Zhunan Town, Miaoli County 350, Taiwan
- Lun-De Liao
  - Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, No.35, Keyan Road, Zhunan Town, Miaoli County 350, Taiwan

18
Jones HE, Wilson PB. Progress and opportunities through use of genomics in animal production. Trends Genet 2022; 38:1228-1252. [PMID: 35945076 DOI: 10.1016/j.tig.2022.06.014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/08/2022] [Accepted: 06/17/2022] [Indexed: 01/24/2023]
Abstract
The rearing of farmed animals is a vital component of global food production systems, but its impact on the environment, human health, animal welfare, and biodiversity is being increasingly challenged. Developments in genetic and genomic technologies have had a key role in improving the productivity of farmed animals for decades. Advances in genome sequencing, annotation, and editing offer a means not only to continue that trend, but also, when combined with advanced data collection, analytics, cloud computing, appropriate infrastructure, and regulation, to take precision livestock farming (PLF) and conservation to an advanced level. Such an approach could generate substantial additional benefits in terms of reducing use of resources, health treatments, and environmental impact, while also improving animal health and welfare.
Affiliation(s)
- Huw E Jones
  - UK Genetics for Livestock and Equines (UKGLE) Committee, Department for Environment, Food and Rural Affairs, Nobel House, 17 Smith Square, London, SW1P 3JR, UK
  - Nottingham Trent University, Brackenhurst Campus, Brackenhurst Lane, Southwell, NG25 0QF, UK
- Philippe B Wilson
  - UK Genetics for Livestock and Equines (UKGLE) Committee, Department for Environment, Food and Rural Affairs, Nobel House, 17 Smith Square, London, SW1P 3JR, UK
  - Nottingham Trent University, Brackenhurst Campus, Brackenhurst Lane, Southwell, NG25 0QF, UK

19
Olaniyi EO, Lu Y, Cai J, Sukumaran AT, Jarvis T, Rowe C. Feasibility of imaging under structured illumination for evaluation of white striping in broiler breast fillets. J FOOD ENG 2022. [DOI: 10.1016/j.jfoodeng.2022.111359] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
20
McVey C, Egger D, Pinedo P. Improving the Reliability of Scale-Free Image Morphometrics in Applications with Minimally Restrained Livestock Using Projective Geometry and Unsupervised Machine Learning. SENSORS (BASEL, SWITZERLAND) 2022; 22:8347. [PMID: 36366045 PMCID: PMC9653925 DOI: 10.3390/s22218347] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/26/2022] [Accepted: 10/28/2022] [Indexed: 06/16/2023]
Abstract
Advances in neural networks have garnered growing interest in applications of machine vision in livestock management, but simpler landmark-based approaches suitable for small, early stage exploratory studies still represent a critical stepping stone towards these more sophisticated analyses. While such approaches are well-validated for calibrated images, the practical limitations of such imaging systems restrict their applicability in working farm environments. The aim of this study was to validate novel algorithmic approaches to improving the reliability of scale-free image biometrics acquired from uncalibrated images of minimally restrained livestock. Using a database of 551 facial images acquired from 108 dairy cows, we demonstrate that, using a simple geometric projection-based approach to metric extraction, a priori knowledge may be leveraged to produce more intuitive and reliable morphometric measurements than conventional informationally complete Euclidean distance matrix analysis. Where uncontrolled variations in image annotation, camera position, and animal pose could not be fully controlled through the design of morphometrics, we further demonstrate how modern unsupervised machine learning tools may be used to leverage the systematic error structures created by such lurking variables in order to generate bias correction terms that may subsequently be used to improve the reliability of downstream statistical analyses and dimension reduction.
Affiliation(s)
- Catherine McVey
  - Department of Animal Sciences, Colorado State University, Fort Collins, CO 80523, USA
- Daniel Egger
  - Pratt School of Engineering, Duke University, Durham, NC 27708, USA
- Pablo Pinedo
  - Department of Animal Sciences, Colorado State University, Fort Collins, CO 80523, USA

21
Woodward-Greene MJ, Kinser JM, Sonstegard TS, Sölkner J, Vaisman II, Van Tassell CP. PreciseEdge raster RGB image segmentation algorithm reduces user input for livestock digital body measurements highly correlated to real-world measurements. PLoS One 2022; 17:e0275821. [PMID: 36227957 PMCID: PMC9560539 DOI: 10.1371/journal.pone.0275821] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Accepted: 09/24/2022] [Indexed: 11/25/2022] Open
Abstract
Computer vision is a tool that could provide livestock producers with digital body measures and records that are important for animal health and production, namely body height, body length, and chest girth. However, to build these tools, the scarcity of labeled training datasets with uniform images (pose, lighting) that also represent real-world livestock can be a challenge. Collecting images in a standard way with manual image labeling is the gold standard for creating such training data, but the time and cost can be prohibitive. We introduce the PreciseEdge image segmentation algorithm to address these issues by employing a standard image collection protocol with a semi-automated image labeling method and a highly precise image segmentation for automated body measurement extraction directly from each image. These elements, from image collection to extraction, are designed to work together to yield values highly correlated to real-world body measurements. PreciseEdge adds a brief preprocessing step inspired by chromakey to a modified GrabCut procedure to generate image masks for data extraction (body measurements) directly from the images. Three hundred RGB (red, green, blue) image samples were collected uniformly per the African Goat Improvement Network Image Collection Protocol (AGIN-ICP), which prescribes camera distance, poses, a blue backdrop, and a custom AGIN-ICP calibration sign. Images were taken in natural settings outdoors and in barns under high and low light, using a Ricoh digital camera producing JPG images (converted to PNG prior to processing). The rear and side AGIN-ICP poses were used for this study. PreciseEdge and GrabCut image segmentation methods were compared for differences in the user input required to segment the images. The initial bounding-box image output was captured for visual comparison. Automated digital body measurements extracted were compared to manual measures for each method. Both methods allow additional optional refinement (mouse strokes) to aid the segmentation algorithm. These optional mouse strokes were captured automatically and compared. Stroke count distributions for both methods were not normally distributed per Kolmogorov-Smirnov tests. Non-parametric Wilcoxon tests showed the distributions were different (p < 0.001), and the GrabCut stroke count was significantly higher (p = 5.115e-49), with a mean of 577.08 (SD 248.45) versus 221.57 (SD 149.45) for PreciseEdge. Digital body measures were highly correlated to manual height, length, and girth measures: Pearson correlation coefficients of 0.931, 0.943, and 0.893 for PreciseEdge, and 0.936, 0.944, and 0.869 for GrabCut. PreciseEdge image segmentation produced masks yielding accurate digital body measurements highly correlated to manual, real-world measurements with over 38% less user input, offering an efficient, reliable, non-invasive alternative to hand-held direct measuring tools for livestock.
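The chromakey-inspired preprocessing against a blue backdrop can be illustrated with a simple blue-dominance rule. This is a loose sketch of the general idea, not the published PreciseEdge step; the threshold and the dominance rule are assumptions.

```python
import numpy as np

def backdrop_mask(rgb: np.ndarray, margin: int = 30) -> np.ndarray:
    """Return a boolean mask that is True where a pixel looks like the
    blue backdrop (blue channel dominates red and green by `margin`)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (b > r + margin) & (b > g + margin)

# Tiny synthetic image: one backdrop-like pixel, one animal-like pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (20, 30, 200)   # strongly blue: backdrop
img[1, 1] = (120, 90, 80)   # brownish: animal
mask = backdrop_mask(img)
print(mask)
```

In a full pipeline, the inverse of such a mask would seed the probable-foreground region handed to a GrabCut-style refinement, which is roughly what lets the segmentation succeed with fewer manual strokes.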
Affiliation(s)
- M. Jennifer Woodward-Greene
  - USDA-ARS-NEA Animal Genomics and Improvement Laboratory, Beltsville, MD, United States of America
  - Bioinformatics and Computational Biology Department, George Mason University, Manassas, VA, United States of America
- Jason M. Kinser
  - Department of Computational and Data Sciences, George Mason University, Fairfax, VA, United States of America
- Tad S. Sonstegard
  - Recombinetics at Acceligen, St. Paul, Minnesota, United States of America
- Johann Sölkner
  - BOKU University of Natural Resources and Life Sciences, Vienna, Austria
- Iosif I. Vaisman
  - Bioinformatics and Computational Biology Department, George Mason University, Manassas, VA, United States of America
- Curtis P. Van Tassell
  - USDA-ARS-NEA Animal Genomics and Improvement Laboratory, Beltsville, MD, United States of America

22
Oppong SO, Twum F, Hayfron-Acquah JB, Missah YM. A Novel Computer Vision Model for Medicinal Plant Identification Using Log-Gabor Filters and Deep Learning Algorithms. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1189509. [PMID: 36203732 PMCID: PMC9532088 DOI: 10.1155/2022/1189509] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 08/16/2022] [Accepted: 09/05/2022] [Indexed: 11/27/2022]
Abstract
Computer vision is the science that enables computers and machines to see and perceive image content on a semantic level. It combines concepts, techniques, and ideas from various fields such as digital image processing, pattern matching, artificial intelligence, and computer graphics. A computer vision system is designed to model the human visual system on a functional basis as closely as possible. Deep learning, and biologically inspired Convolutional Neural Networks (CNNs) in particular, has significantly contributed to computer vision studies. This research develops a computer vision system that uses CNNs and handcrafted Log-Gabor filters in an ensemble to identify medicinal plants based on their leaf textural features. The system was tested on a dataset developed from the Centre of Plant Medicine Research, Ghana (MyDataset), consisting of forty-nine (49) plant species. Using transfer learning, ten pretrained networks, including AlexNet, GoogLeNet, DenseNet201, Inceptionv3, MobileNetv2, ResNet18, ResNet50, ResNet101, VGG16, and VGG19, were used as feature extractors. Averaged across six supervised learning algorithms, the DenseNet201 architecture produced the best outcome with 87% accuracy, and GoogLeNet the worst with 79%. The proposed model (OTAMNet), created by fusing a Log-Gabor layer into the transition layers of the DenseNet201 architecture, achieved 98% accuracy when tested on MyDataset. OTAMNet was also tested on other benchmark datasets: it achieved 99% on Flavia, 100% on Swedish Leaf, 99% on MD2020, and 97% on the Folio dataset. A false-positive rate of less than 0.1% was achieved in all cases.
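The handcrafted component here is the Log-Gabor filter, whose standard radial transfer function can be sketched in a few lines. This is the textbook formulation, not the authors' code; the center frequency `f0` and the `sigma_ratio` of 0.55 are commonly used values, assumed for illustration.

```python
import numpy as np

def log_gabor(freq: np.ndarray, f0: float = 0.25, sigma_ratio: float = 0.55) -> np.ndarray:
    """Radial Log-Gabor transfer function:
    G(f) = exp(-(ln(f/f0))^2 / (2 * ln(sigma_ratio)^2)), with G(0) = 0
    (the filter has no DC component by construction)."""
    out = np.zeros_like(freq, dtype=float)
    nz = freq > 0
    out[nz] = np.exp(-np.log(freq[nz] / f0) ** 2
                     / (2.0 * np.log(sigma_ratio) ** 2))
    return out

freqs = np.array([0.0, 0.125, 0.25, 0.5])
print(log_gabor(freqs))  # peaks (value 1.0) at f0 = 0.25
```

Because the response is Gaussian on a log-frequency axis, frequencies one octave below and above `f0` (0.125 and 0.5 here) receive identical weights, which suits the multi-scale texture analysis the paper relies on.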
Affiliation(s)
- Frimpong Twum
  - Department of Computer Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- James Ben Hayfron-Acquah
  - Department of Computer Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Yaw Marfo Missah
  - Department of Computer Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana

23
Gorssen W, Winters C, Meyermans R, D’Hooge R, Janssens S, Buys N. Estimating genetics of body dimensions and activity levels in pigs using automated pose estimation. Sci Rep 2022; 12:15384. [PMID: 36100692 PMCID: PMC9470733 DOI: 10.1038/s41598-022-19721-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 09/02/2022] [Indexed: 11/09/2022] Open
Abstract
Pig breeding is changing rapidly due to technological progress and socio-ecological factors. New precision livestock farming technologies such as computer vision systems are crucial for automated phenotyping on a large scale for novel traits, as pigs’ robustness and behavior are gaining importance in breeding goals. However, individual identification, data processing and the availability of adequate (open source) software currently pose the main hurdles. The overall goal of this study was to expand pig weighing with automated measurements of body dimensions and activity levels using an automated video-analytic system: DeepLabCut. Furthermore, these data were coupled with pedigree information to estimate genetic parameters for breeding programs. We analyzed 7428 recordings over the fattening period of 1556 finishing pigs (Piétrain sire x crossbred dam) with two-week intervals between recordings on the same pig. We were able to accurately estimate relevant body parts with an average tracking error of 3.3 cm. Body metrics extracted from video images were highly heritable (61–74%) and significantly genetically correlated with average daily gain (rg = 0.81–0.92). Activity traits were low to moderately heritable (22–35%) and showed low genetic correlations with production traits and physical abnormalities. We demonstrated a simple and cost-efficient method to extract body dimension parameters and activity traits. These traits were estimated to be heritable, and hence, can be selected on. These findings are valuable for (pig) breeding organizations, as they offer a method to automatically phenotype new production and behavioral traits on an individual level.
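Deriving a body-dimension trait from pose-estimated keypoints (as produced by tools like DeepLabCut) reduces to distances between landmarks, converted to physical units. A minimal sketch; the keypoint names, coordinates, and pixel-to-centimeter scale are made up for illustration.

```python
import numpy as np

def body_length_cm(keypoints: dict, px_per_cm: float) -> float:
    """Euclidean distance between two tracked landmarks, in centimeters.
    `keypoints` maps landmark names to (x, y) pixel coordinates."""
    shoulder = np.asarray(keypoints["shoulder"], dtype=float)
    tail_base = np.asarray(keypoints["tail_base"], dtype=float)
    return float(np.linalg.norm(tail_base - shoulder) / px_per_cm)

# Hypothetical pose output for one frame: 300 px between landmarks,
# at an assumed scale of 3 px per cm.
kp = {"shoulder": (100.0, 200.0), "tail_base": (400.0, 200.0)}
print(body_length_cm(kp, px_per_cm=3.0))  # → 100.0
```

Repeating this per recording and pairing the resulting traits with pedigree information is what allows the heritabilities and genetic correlations reported above to be estimated.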
24
Caffarini JG, Bresolin T, Dorea JRR. Predicting ribeye area and circularity in live calves through 3D image analyses of body surface. J Anim Sci 2022; 100:skac242. [PMID: 35852484 PMCID: PMC9495505 DOI: 10.1093/jas/skac242] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 07/19/2022] [Indexed: 07/21/2023] Open
Abstract
The use of sexed semen at dairy farms has improved heifer replacement over the last decade by allowing greater control over the number of retained females and enabling the selection of dams with superior genetics. Alternatively, beef semen can be used in genetically inferior dairy cows to produce crossbred (beef x dairy) animals that can be sold at a higher price. Although crossbreeding became profitable for dairy farmers, meat cuts from beef x dairy crosses often lack quality and shape uniformity. Technologies for quickly predicting carcass traits for animal grouping before harvest may improve meat cut uniformity in crossbred cattle. Our objective was to develop a deep learning approach for predicting ribeye area and circularity of live animals through 3D body surface images using two neural networks: 1) a nested Pyramid Scene Parsing Network (nPSPNet) for extracting features and 2) a Convolutional Neural Network (CNN) for estimating ribeye area and circularity from these features. A group of 56 calves were imaged using an Intel RealSense D435 camera. A total of 327 depth images were captured from 30 calves and labeled with masks outlining the calf body to train the nPSPNet for feature extraction. An additional 42,536 depth images were taken from the remaining 26 calves, along with three ultrasound images collected for each calf from the 12/13th ribs. The ultrasound images (three per calf) were manually segmented to calculate the average ribeye area and circularity and then paired with the depth images for CNN training. We implemented a nested cross-validation approach, in which all images for one calf were removed (leave-one-out, LOO), and the remaining calves were further divided into training (70%) and validation (30%) sets within each LOO iteration. The proposed model predicted ribeye area with an average coefficient of determination (R2) of 0.74 and a mean absolute error of prediction (MAEP) of 7.3%, and ribeye circularity with an average R2 of 0.87 and a MAEP of 2.4%.
Our results indicate that computer vision systems could be used to predict ribeye area and circularity in live animals, allowing optimal management decisions toward smart animal grouping in beef x dairy crosses and purebreds.
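The nested leave-one-out split described above keeps every image of the held-out calf out of both training and validation, which is what prevents animal identity from leaking into the evaluation. A sketch of such a splitter, with invented calf IDs, image names, and fold sizes:

```python
import random

def leave_one_animal_out(images_by_calf, val_fraction=0.3, seed=0):
    """Yield (train, val, test) image lists; no calf appears in two sets."""
    rng = random.Random(seed)
    calves = sorted(images_by_calf)
    for held_out in calves:
        rest = [c for c in calves if c != held_out]
        rng.shuffle(rest)
        n_val = max(1, int(len(rest) * val_fraction))
        val_calves, train_calves = rest[:n_val], rest[n_val:]
        train = [img for c in train_calves for img in images_by_calf[c]]
        val = [img for c in val_calves for img in images_by_calf[c]]
        yield train, val, images_by_calf[held_out]

# Invented toy data: 10 calves with 5 images each.
images_by_calf = {f"calf_{i}": [f"calf_{i}_img_{j}" for j in range(5)]
                  for i in range(10)}
folds = list(leave_one_animal_out(images_by_calf))
```

Grouping by animal rather than by image is the design choice that matters here: a random image-level split would place near-duplicate frames of the same calf on both sides of the split and inflate accuracy.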
Affiliation(s)
- Joseph G Caffarini
- Department of Neurology, University of Wisconsin-Madison, Madison, WI 53703, USA
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI 53703, USA
- Tiago Bresolin
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI 53703, USA
25
Automatic Weight Prediction System for Korean Cattle Using Bayesian Ridge Algorithm on RGB-D Image. ELECTRONICS 2022. [DOI: 10.3390/electronics11101663] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Weighing Hanwoo (Korean cattle) is very important for Korean beef producers, who must sell animals at the right time. Building on advances in deep learning and image recognition, recent research has turned to predicting Hanwoo weight automatically from images alone. In this paper, we propose a method for the automatic weight prediction of Hanwoo using the Bayesian ridge algorithm on RGB-D images. The proposed system consists of three parts: segmentation, feature extraction, and estimation of the weight of Korean cattle from a given RGB-D image. The first step is to segment the Hanwoo area from a given RGB-D image using depth information and color information, respectively, and then combine them to perform optimal segmentation. Additionally, we correct the posture using ellipse fitting on the segmented body image. The second step is to extract features for weight prediction from the segmented Hanwoo image; we extracted three kinds of features: size, shape, and gradients. The third step is to find the optimal machine learning model by comparing eight well-known machine learning models, with the aim of finding an efficient, lightweight model that can be used in an embedded system in the field. To evaluate the performance of the proposed weight prediction system, we collected 353 RGB-D images from livestock farms in Wonju, Gangwon-do, Korea. In the experiments, random forest showed the best performance, with the Bayesian ridge model second best in MSE and coefficient of determination. However, we suggest that the Bayesian ridge model is optimal with respect to time and space complexity. Finally, the proposed system is expected to be used as a portable commercial device for determining the shipping time of Hanwoo on working farms.
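The final regression step maps image-derived features to weight. As a hedged stand-in for the paper's Bayesian ridge model (which tunes its penalty from the data), the sketch below fits an ordinary ridge regression with a fixed penalty by solving the regularised normal equations directly; the feature values and weights are invented:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ridge_fit(X, y, lam=1e-3):
    """w = (X'X + lam*I)^-1 X'y (no intercept, for brevity)."""
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# Invented toy data: weight is exactly 2*size + 0.5*shape.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]]
y = [2.0 * a + 0.5 * b for a, b in X]
w = ridge_fit(X, y)
```

The Bayesian variant adds a prior over the coefficients and estimates the penalty jointly, which is what makes it attractive for the small, noisy feature sets typical of on-farm imaging.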
26
Sustainable Intensification of Beef Production in the Tropics: The Role of Genetically Improving Sexual Precocity of Heifers. Animals (Basel) 2022; 12:ani12020174. [PMID: 35049797 PMCID: PMC8772995 DOI: 10.3390/ani12020174] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 01/07/2022] [Accepted: 01/08/2022] [Indexed: 12/16/2022] Open
Abstract
Simple Summary Tropical pasture-based beef production systems play a vital role in global food security. The importance of promoting sustainable intensification of such systems has been debated worldwide. Demand for beef is growing together with concerns over the impact of its production on the environment. Implementing sustainable livestock intensification programs relies on animal genetic improvement. In tropical areas, the lack of sexual precocity is a bottleneck for cattle efficiency, directly impacting the sustainability of production systems. In the present review we present and discuss the state of the art of genetic evaluation for sexual precocity in Bos indicus beef cattle, covering the definition of measurable traits, genetic parameter estimates, genomic analyses, and a case study of selection for sexual precocity in Nellore breeding programs. Abstract Increasing productivity through continued animal genetic improvement is a crucial part of implementing sustainable livestock intensification programs. In Zebu cattle, the lack of sexual precocity is one of the main obstacles to improving beef production efficiency. Puberty-related traits are complex, but large-scale data sets from different “omics” have provided information on specific genes and biological processes with major effects on the expression of such traits, which can greatly increase animal genetic evaluation. In addition, genetic parameter estimates and genomic predictions involving sexual precocity indicator traits and productive, reproductive, and feed-efficiency related traits highlighted the feasibility and importance of direct selection for anticipating heifer reproductive life. 
Indeed, the case study of selection for sexual precocity in Nellore breeding programs presented here shows that, in 12 years of selection for female early precocity and improved management practices, the phenotypic mean of age at first calving showed a strong decreasing trend, changing from nearly 34 to less than 28 months, with a genetic trend of almost −2 days/year. In this period, the percentage of early pregnancy in the herds changed from around 10% to more than 60%, showing that genetic improvement of heifers' sexual precocity allows the productive cycle to be optimized by reducing the number of unproductive animals in the herd. This has a direct impact on sustainability through better use of resources. Genomic selection breeding programs accounting for genotype-by-environment interaction represent promising tools for accelerating genetic progress for sexual precocity in tropical beef cattle.
27
Chen D, Wu P, Wang K, Wang S, Ji X, Shen Q, Yu Y, Qiu X, Xu X, Liu Y, Tang G. Combining computer vision score and conventional meat quality traits to estimate the intramuscular fat content using machine learning in pigs. Meat Sci 2021; 185:108727. [PMID: 34971942 DOI: 10.1016/j.meatsci.2021.108727] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 12/16/2021] [Accepted: 12/19/2021] [Indexed: 11/16/2022]
Abstract
Intramuscular fat content (IMF%) is an important factor that affects the quality of pork. The traditional testing method (Soxhlet extraction) is accurate; however, it has a long preprocessing time. In this study, a total of 1481 photographs of 200 pigs' loin muscles were used to obtain a computer vision score (IIMF%). Then, the actual IMF%, meat color, marbling score, pH value, and drip loss of the 200 pigs were measured. Stepwise regression (SR) and a gradient boosting machine (GBM) were used to construct estimation models for IMF%. The results showed that the correlation coefficients between IMF% and IIMF%, marbling score, backfat thickness, percentage of moisture (POM), and pH value were 0.68, 0.64, 0.48, 0.45, and 0.25, respectively. The model accuracies of SR and GBM based on the residual distribution were 0.875 and 0.89, respectively. This study presents a method for estimating IMF% using computer vision technology and meat quality traits.
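The correlation coefficients quoted above are plain Pearson correlations between the computer-vision score and the measured traits. For reference, a self-contained sketch on invented data (the real analysis pairs IIMF% with chemically measured IMF% across 200 pigs):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented toy values standing in for vision scores and measured traits.
xs = [1.2, 2.3, 3.1, 4.8, 5.0, 6.7]
r_pos = pearson(xs, [2.0 * v + 1.0 for v in xs])    # perfect positive relation
r_neg = pearson(xs, [-2.0 * v + 1.0 for v in xs])   # perfect negative relation
```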
Affiliation(s)
- Dong Chen
- Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, China
- Pingxian Wu
- Chongqing Academy of Animal Sciences, Chongqing, China
- Kai Wang
- Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, China
- Shujie Wang
- Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, China
- Xiang Ji
- Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, China
- Qi Shen
- Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, China
- Yang Yu
- Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, China
- Xiaotian Qiu
- National Animal Husbandry Service, Beijing, China
- Xu Xu
- National Animal Husbandry Service, Beijing, China
- Yihui Liu
- National Animal Husbandry Service, Beijing, China
- Guoqing Tang
- Farm Animal Genetic Resources Exploration and Innovation Key Laboratory of Sichuan Province, Sichuan Agricultural University, Chengdu, China
28
Siberski-Cooper CJ, Koltes JE. Opportunities to Harness High-Throughput and Novel Sensing Phenotypes to Improve Feed Efficiency in Dairy Cattle. Animals (Basel) 2021; 12:ani12010015. [PMID: 35011121 PMCID: PMC8749788 DOI: 10.3390/ani12010015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 12/13/2021] [Accepted: 12/15/2021] [Indexed: 11/16/2022] Open
Abstract
Simple Summary Sensors, routinely collected on-farm tests, and other repeatable, high-throughput measurements can provide novel phenotype information on a frequent basis. Information from these sensors and high-throughput measurements could be harnessed to monitor or predict individual dairy cow feed intake. Predictive algorithms would allow for genetic selection of animals that consume less feed while producing the same amount of milk. Improved monitoring of feed intake could reduce the cost of milk production, improve animal health, and reduce the environmental impact of the dairy industry. Moreover, data from these information sources could aid in animal management (e.g., precision feeding and health detection). In order to implement tools, the relationship of measurements with feed intake needs to be established and prediction equations developed. Lastly, consideration should be given to the frequency of data collection, the need for standardization of data and other potential limitations of tools in the prediction of feed intake. This review summarizes measurements of feed efficiency, factors that may impact the efficiency and feed consumption of an animal, tools that have been researched and new traits that could be utilized for the prediction of feed intake and efficiency, and prediction equations for feed intake and efficiency presented in the literature to date. Abstract Feed for dairy cattle has a major impact on profitability and the environmental impact of farms. Sustainable dairy production relies on continued improvement in feed efficiency as a way to reduce costs and nutrient loss from feed. Advances in breeding, feeding and management have led to the dilution of maintenance energy and thus more efficient dairy cattle. Still, many additional opportunities are available to improve individual animal feed efficiency. 
Sensing technologies such as wearable sensors, image-based systems and high-throughput phenotyping technologies (e.g., milk testing) are becoming more available on commercial farms. The application of these technologies as indicator traits for feed intake and efficiency-related traits would be advantageous, providing additional information to predict and manage feed efficiency. This review focuses on precision livestock technologies and high-throughput phenotyping in use today, as well as those that could be developed in the future, as possible indicators of feed intake. Several technologies such as milk spectral data, activity, rumen measures, and image-based phenotypes have been associated with feed intake. Future applications will depend on the ability to repeatably measure and calibrate these data across locations, so that they can be integrated for use in predicting and managing feed intake and efficiency on farm.
29
Kaewtapee C, Thepparak S, Rakangtong C, Bunchasak C, Supratak A. Objective scoring of footpad dermatitis in broilers using video image segmentation and a deep learning approach: camera-based scoring system. Br Poult Sci 2021; 63:427-433. [PMID: 34870524 DOI: 10.1080/00071668.2021.2013439] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
1. Footpad dermatitis (FPD) can be used as an important indicator of animal welfare and for economic evaluation; however, human scoring is subjective, biased and labour intensive. This paper proposed a novel deep learning approach that can automatically determine the severity of FPD based on images of chickens' feet. 2. This approach first determined the areas of the FPD lesion, the normal parts of each foot and the background using a deep segmentation model. The proportion of FPD for the chicken's two feet was calculated by dividing the number of FPD pixels by the number of feet pixels. The proportion was then categorised using a five-point score for FPD. The approach was evaluated on 244 images of the left and right footpads using five-fold cross-validation. These images were collected at a commercial slaughter plant and scored by trained observers. 3. The results showed that this approach achieved an overall accuracy and a macro F1-score of 0.82. The per-class F1-scores for all FPD scores (scores 0 to 4) were similar (0.85, 0.80, 0.80, 0.80, and 0.87, respectively), which demonstrated that this approach performed equally well for all classes of scores. 4. The results suggested that image segmentation and a deep learning approach can be used to automate the process of scoring FPD based on chicken foot images, which can help to minimise the subjective bias inherent in manual scoring.
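The scoring rule above (lesion pixels divided by foot pixels, then bucketed onto a five-point scale) can be sketched directly from a labelled segmentation mask. The class codes and bucket thresholds below are invented for illustration, not the paper's calibrated ones:

```python
# Invented class codes for a per-pixel segmentation mask.
FPD, FOOT, BACKGROUND = 2, 1, 0

def fpd_score(mask, thresholds=(0.02, 0.10, 0.25, 0.50)):
    """Return (lesion proportion, five-point score 0..4) for a labelled mask."""
    flat = [px for row in mask for px in row]
    foot_px = sum(px != BACKGROUND for px in flat)   # lesion + normal foot
    lesion_px = sum(px == FPD for px in flat)
    proportion = lesion_px / foot_px if foot_px else 0.0
    score = sum(proportion >= t for t in thresholds)  # count thresholds passed
    return proportion, score

# Tiny invented mask: 4 foot pixels, 1 of them lesioned.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 2, 1, 0],
    [0, 0, 0, 0],
]
proportion, score = fpd_score(mask)
```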
Affiliation(s)
- C Kaewtapee
- Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- S Thepparak
- Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- C Rakangtong
- Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- C Bunchasak
- Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- A Supratak
- Computer Science Academic Group, Faculty of Information and Communication Technology, Mahidol University, 999 Phuttamonthon 4 Road, Salaya, Nakhon Pathom 73170, Thailand
30
Kara I, Tahillioglu E. Digital image analysis of gunshot residue dimensional dispersion by computer vision method. Microsc Res Tech 2021; 85:971-979. [PMID: 34655131 DOI: 10.1002/jemt.23966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 08/23/2021] [Accepted: 10/04/2021] [Indexed: 11/06/2022]
Abstract
Detection and identification of gunshot residues (GSR) have been used as base evidence in elucidating forensic cases. GSR particles consist of burnt and partially unburned material and contaminate the hands, face, hair, and clothes of the shooter when coming out of the gun. Nowadays, GSR samples are collected from the hands of the suspect and are analyzed routinely in forensic laboratories by the scanning electron microscope/energy dispersive spectroscopy (SEM/EDS) method. GSR particles have a specific morphology (generally spherical, with a diameter between 0 and 100 μm, occasionally even larger). In addition, previous studies in the field have claimed that GSR particles form under an equilibrium surface distribution and are unrelated to morphological dimensional classification. Our contribution in this study is twofold. First, the study offers a new approach to identify images of GSR particles, gathered by the SEM/EDS method from the hand of the shooter, using computer vision. Second, it presents open access to the SEM/EDS image data set of the analyzed GSR. A new data set consisting of 22,408 samples from three different types of MKEK (Mechanical and Chemical Industries Corporation) brand ammunition was used. The results show that the computer vision method was successful in the dimensional classification of GSR.
Affiliation(s)
- Ilker Kara
- Department of Medical Services and Techniques, Eldivan Medical Services Vocational School, Çankırı Karatekin University, Çankırı, Turkey
31
The Application of Cameras in Precision Pig Farming: An Overview for Swine-Keeping Professionals. Animals (Basel) 2021; 11:ani11082343. [PMID: 34438800 PMCID: PMC8388688 DOI: 10.3390/ani11082343] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/19/2021] [Accepted: 08/06/2021] [Indexed: 01/06/2023] Open
Abstract
Simple Summary The preeminent purpose of precision livestock farming (PLF) is to provide affordable and straightforward solutions to severe problems with certainty. Some data collection techniques in PLF such as RFID are accurate but not affordable for small- and medium-sized farms. On the other hand, camera sensors are cheap, commonly available, and easily used to collect information compared to other sensor systems in precision pig farming. Cameras have ample chance to monitor pigs with high precision at an affordable cost. However, the lack of targeted information about the application of cameras in the pig industry is a shortcoming for swine farmers and researchers. This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors, and presents automated approaches for monitoring and investigating pigs’ feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors. In addition, the review summarizes the related literature and points out limitations to open up new dimensions for future researchers to explore. Abstract Pork is the meat with the second-largest overall consumption, and chicken, pork, and beef together account for 92% of global meat production. Therefore, it is necessary to adopt more progressive methodologies such as precision livestock farming (PLF) rather than conventional methods to improve production. In recent years, image-based studies have become an efficient solution in various fields such as navigation for unmanned vehicles, human–machine-based systems, agricultural surveying, livestock, etc. So far, several studies have been conducted to identify, track, and classify the behaviors of pigs and achieve early detection of disease, using 2D/3D cameras. 
This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors and presents automated approaches for the monitoring and investigation of pigs’ feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors.
32
Pérez-Enciso M, Steibel JP. Phenomes: the current frontier in animal breeding. Genet Sel Evol 2021; 53:22. [PMID: 33673800 PMCID: PMC7934239 DOI: 10.1186/s12711-021-00618-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 02/22/2021] [Indexed: 12/13/2022] Open
Abstract
Improvements in genomic technologies have outpaced the most optimistic predictions, allowing industry-scale application of genomic selection. However, only marginal gains in genetic prediction accuracy can now be expected by increasing marker density up to sequence, unless causative mutations are identified. We argue that some of the most scientifically disrupting and industry-relevant challenges relate to ‘phenomics’ instead of ‘genomics’. Thanks to developments in sensor technology and artificial intelligence, there is a wide range of analytical tools that are already available and many more will be developed. We can now address some of the pressing societal demands on the industry, such as animal welfare concerns or efficiency in the use of resources. From the statistical and computational point of view, phenomics raises two important issues that require further work: penalization and dimension reduction. This will be complicated by the inherent heterogeneity and ‘missingness’ of the data. Overall, we can expect that precision livestock technologies will make it possible to collect hundreds of traits on a continuous basis from large numbers of animals. Perhaps the main revolution will come from redesigning animal breeding schemes to explicitly allow for high-dimensional phenomics. In the meantime, phenomics data will definitely enlighten our knowledge on the biological basis of phenotypes.
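One of the dimension-reduction tools implied above is principal component analysis over many correlated sensor traits. A dependency-free sketch that extracts the leading principal direction by power iteration, on invented trait records (real phenomics data would of course be far larger and messier):

```python
import math

def leading_pc(records, iters=100):
    """First principal direction of a list of trait records (power iteration)."""
    n, p = len(records), len(records[0])
    means = [sum(r[j] for r in records) / n for j in range(p)]
    X = [[r[j] - means[j] for j in range(p)] for r in records]
    # Sample covariance matrix of the centred traits.
    C = [[sum(row[a] * row[b] for row in X) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [float(j + 1) for j in range(p)]  # deterministic start vector
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Invented records: trait 1 is exactly twice trait 0; trait 2 is constant.
records = [[float(i), 2.0 * float(i), 0.0] for i in range(10)]
pc1 = leading_pc(records)
```

In this toy case the leading direction recovers the (1, 2, 0) relationship between the traits, illustrating how hundreds of redundant sensor streams can be compressed before entering a genetic evaluation.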
Affiliation(s)
- Miguel Pérez-Enciso
- ICREA, Passeig de Lluís Companys 23, 08010 Barcelona, Spain
- Centre for Research in Agricultural Genomics (CRAG), CSIC-IRTA-UAB-UB, Bellaterra, 08193 Barcelona, Spain
- Juan P Steibel
- Department of Animal Science, Michigan State University, East Lansing, MI 48824, USA
- Department of Fisheries and Wildlife, Michigan State University, East Lansing, MI 48824, USA
33
Blay C, Haffray P, Bugeon J, D’Ambrosio J, Dechamp N, Collewet G, Enez F, Petit V, Cousin X, Corraze G, Phocas F, Dupont-Nivet M. Genetic Parameters and Genome-Wide Association Studies of Quality Traits Characterised Using Imaging Technologies in Rainbow Trout, Oncorhynchus mykiss. Front Genet 2021; 12:639223. [PMID: 33692832 PMCID: PMC7937956 DOI: 10.3389/fgene.2021.639223] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 02/03/2021] [Indexed: 12/18/2022] Open
Abstract
One of the top priorities of the aquaculture industry is the genetic improvement of economically important traits in fish, such as those related to processing and quality. However, the accuracy of genetic evaluations has been hindered by a lack of data on such traits from a sufficiently large population of animals. The objectives of this study were thus threefold: (i) to estimate genetic parameters of growth-, yield-, and quality-related traits in rainbow trout (Oncorhynchus mykiss) using three different phenotyping technologies [invasive and non-invasive: microwave-based, digital image analysis, and magnetic resonance imaging (MRI)], (ii) to detect quantitative trait loci (QTLs) associated with these traits, and (iii) to identify candidate genes present within these QTL regions. Our study collected data from 1,379 fish on growth, yield-related traits (body weight, condition coefficient, head yield, carcass yield, headless gutted carcass yield), and quality-related traits (total fat, percentage of fat in subcutaneous adipose tissue, percentage of fat in flesh, flesh colour); genotypic data were then obtained for all fish using the 57K SNP Axiom® Trout Genotyping array. Heritability estimates for most of the 14 traits examined were moderate to strong, varying from 0.12 to 0.67. Most traits were clearly polygenic, but our genome-wide association studies (GWASs) identified two genomic regions on chromosome 8 that explained up to 10% of the genetic variance (cumulative effects of two QTLs) for several traits (weight, condition coefficient, subcutaneous and total fat content, carcass and headless gutted carcass yields). For flesh colour traits, six QTLs explained 1-4% of the genetic variance. Within these regions, we identified several genes (htr1, gnpat, ephx1, bcmo1, and cyp2x) that have been implicated in adipogenesis or carotenoid metabolism, and thus represent good candidates for further functional validation. 
Finally, of the three techniques used for phenotyping, MRI demonstrated particular promise for measurements of fat content and distribution, while the digital image analysis-based approach was very useful in quantifying colour-related traits. This work provides new insights that may aid the development of commercial breeding programmes in rainbow trout, specifically with regard to the genetic improvement of yield and flesh-quality traits as well as the use of invasive and/or non-invasive technologies to predict such traits.
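The GWAS step above amounts to testing each marker for association with the trait. A toy single-marker scan (simple regression of the phenotype on allele dosage, reporting the fraction of phenotypic variance each marker explains) on invented genotypes and phenotypes; real analyses add covariates, relationship matrices, and multiple-testing control:

```python
def snp_scan(genotypes, phenotype):
    """Per-SNP regression slope and variance explained, sorted by the latter."""
    results = []
    n = len(phenotype)
    my = sum(phenotype) / n
    ssy = sum((y - my) ** 2 for y in phenotype)
    for snp_idx, g in enumerate(genotypes):
        mg = sum(g) / n
        ssg = sum((x - mg) ** 2 for x in g)
        if ssg == 0:
            continue  # skip monomorphic markers
        beta = sum((x - mg) * (y - my) for x, y in zip(g, phenotype)) / ssg
        r2 = beta * beta * ssg / ssy  # phenotypic variance explained
        results.append((snp_idx, beta, r2))
    return sorted(results, key=lambda t: -t[2])

# Invented data: phenotype ~ 10 + 2 * dosage at marker 0, plus small noise.
genotypes = [
    [0, 1, 2, 0, 1, 2, 0, 1],   # causal marker
    [1, 1, 0, 0, 1, 1, 0, 0],   # unlinked marker
]
phenotype = [10.0, 12.1, 14.2, 9.9, 12.0, 13.8, 10.1, 12.2]
hits = snp_scan(genotypes, phenotype)
```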
Affiliation(s)
- Carole Blay
- Université Paris-Saclay, INRAE, AgroParisTech, GABI, Jouy-en-Josas, France
- Jonathan D’Ambrosio
- Université Paris-Saclay, INRAE, AgroParisTech, GABI, Jouy-en-Josas, France
- SYSAAF, Station LPGP-INRAE, Rennes, France
- Nicolas Dechamp
- Université Paris-Saclay, INRAE, AgroParisTech, GABI, Jouy-en-Josas, France
- Xavier Cousin
- Université Paris-Saclay, INRAE, AgroParisTech, GABI, Jouy-en-Josas, France
- MARBEC, University of Montpellier, CNRS, Ifremer, IRD, Palavas-les-Flots, France
- Geneviève Corraze
- INRAE, University of Pau & Pays Adour, E2S UPPA, UMR 1419 NuMéA, Saint-Pée-sur-Nivelle, France
- Florence Phocas
- Université Paris-Saclay, INRAE, AgroParisTech, GABI, Jouy-en-Josas, France
35
Morota G, Cheng H, Cook D, Tanaka E. ASAS-NANP SYMPOSIUM: prospects for interactive and dynamic graphics in the era of data-rich animal science1. J Anim Sci 2021; 99:skaa402. [PMID: 33626150 PMCID: PMC7904041 DOI: 10.1093/jas/skaa402] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 12/15/2020] [Indexed: 12/19/2022] Open
Abstract
Statistical graphics and data visualization play an essential but under-utilized role in animal science, both for data analysis and for visually illustrating the concepts, ideas, and outputs of research and curricula. The recent rise in web technologies and the ubiquitous availability of web browsers enable easier sharing of interactive and dynamic graphics. Interactivity and dynamic feedback enhance human-computer interaction and data exploration. Web applications such as decision support systems, coupled with multimedia tools, synergize with interactive and dynamic graphics. However, the importance of graphics for effectively communicating data, understanding data uncertainty, and the state of the field of interactive and dynamic graphics are underappreciated in animal science. To address this gap, we describe the current state of graphical methodology and technology that might be more broadly adopted. This includes an explanation of a conceptual framework for effective graphics construction. The ideas and technology are illustrated using publicly available animal datasets. We foresee that the many new types of big and complex data being generated in precision livestock farming create exciting opportunities for applying interactive and dynamic graphics to improve data analysis and make data-supported decisions.
Affiliation(s)
- Gota Morota
- Department of Animal and Poultry Sciences, Virginia Polytechnic Institute and State University, Blacksburg, VA
- Center for Advanced Innovation in Agriculture, Virginia Polytechnic Institute and State University, Blacksburg, VA
- Hao Cheng
- Department of Animal Science, University of California, Davis, CA
- Dianne Cook
- Department of Econometrics and Business Statistics, Monash University, Clayton, VIC, Australia
- Emi Tanaka
- Department of Econometrics and Business Statistics, Monash University, Clayton, VIC, Australia
36
Automatic Assessment of Keel Bone Damage in Laying Hens at the Slaughter Line. Animals (Basel) 2021; 11:ani11010163. [PMID: 33445636 PMCID: PMC7827378 DOI: 10.3390/ani11010163] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 12/18/2020] [Accepted: 12/21/2020] [Indexed: 11/17/2022] Open
Abstract
Simple Summary Keel bone damage (KBD) is very prevalent in commercial laying hen flocks with a wide range of affected hens/flock. It can cause pain, and affected hens have been found to be less mobile. The assessment of this animal welfare indicator provides important feedback for the farmer about flock health and consequently on the need for interventions. However, the assessment of keel bone damage is time-consuming, and prior training is needed in order to gain reliable results. Optical detection methods can be a means to automatedly score hens at the slaughter line with high sample sizes and in a standardized way. We developed and validated an automatic 3D camera-based detection system. While it generally underestimates the presence of KBD due to the purely visual assessment and technical constraints, it nevertheless shows good accuracy and high correlation of prevalences with those visually determined by a trained human assessor. Therefore, this system opens up opportunities to better monitor and combat a severe animal welfare problem in the long-term. Abstract Keel bone damage (KBD) can be found in all commercial laying hen flocks with a wide range of 23% to 69% of hens/flock found to be affected in this study. As KBD may be linked with chronic pain and a decrease in mobility, it is a serious welfare problem. An automatic assessment system at the slaughter line could support the detection of KBD and would have the advantage of being standardized and fast scoring including high sample sizes. A 2MP stereo camera combined with an IDS imaging color camera was used for the automatic assessment. A trained human assessor visually scored KBD in defeathered hens during the slaughter process and compared results with further human assessors and automatic recording. In a first step, an algorithm was developed on the basis of assessments of keel status of 2287 hens of different genetics with varying degrees of KBD. 
In two optimization steps, performance data were calculated, and flock prevalences were determined and compared between the assessor and the automatic system. The proposed technique finally reached a sensitivity of 0.95, specificity of 0.77, accuracy of 0.86 and precision of 0.81. In the last optimization step, the automatic system scored on average about 10.5 percentage points lower KBD prevalences than the human assessor. However, a proposed change of scoring system (setting the limit for KBD at 0.5 cm deviation from the straight line) would lower this deviation. We conclude that the developed automatic scoring technique is a reliable and potentially valuable tool for the assessment of KBD.
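The performance figures above follow from a binary confusion matrix. The sketch below computes them from invented counts, chosen only so that the resulting rates roughly match those reported:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": tp / (tp + fp),            # positive predictive value
    }

# Invented counts (not the study's data) yielding rates close to the reported
# sensitivity 0.95, specificity 0.77, accuracy 0.86 and precision 0.81.
m = binary_metrics(tp=95, fp=23, fn=5, tn=77)
```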