1. Research Progress on Quality Detection of Livestock and Poultry Meat Based on Machine Vision, Hyperspectral and Multi-Source Information Fusion Technologies. Foods 2024; 13:469. PMID: 38338604; PMCID: PMC10855881; DOI: 10.3390/foods13030469.
Abstract
Presently, the traditional methods for detecting livestock and poultry meat quality predominantly involve human sensory evaluation, chemical index detection, and microbial detection. While these methods achieve commendable accuracy, they become difficult to apply in large-scale enterprise production. Compared with traditional detection methods, machine vision and hyperspectral technology can realize real-time, high-throughput online detection because of their efficiency, accuracy, and non-contact measurement, and they have therefore attracted wide attention from researchers. On this basis, in order to further enhance the accuracy of online quality detection for livestock and poultry meat, this article presents a comprehensive overview of methods based on machine vision, hyperspectral, and multi-sensor information fusion technologies. This review examines the current research status and the latest advancements in these methodologies and discusses potential future development trends. The ultimate objective is to provide pertinent information and serve as a valuable research resource for the non-destructive online quality detection of livestock and poultry meat.
2. Opportunities for Regulatory Authorities to Assess Animal-Based Measures at the Slaughterhouse Using Sensor Technology and Artificial Intelligence: A Review. Animals (Basel) 2023; 13:3028. PMID: 37835634; PMCID: PMC10571985; DOI: 10.3390/ani13193028.
Abstract
Animal-based measures (ABMs) are the preferred way to assess animal welfare. However, manual scoring of ABMs is very time-consuming during meat inspection. Automatic scoring using sensor technology and artificial intelligence (AI) may offer a solution. Based on review papers, an overview was compiled of ABMs recorded at the slaughterhouse for poultry, pigs and cattle, and of applications of sensor technology to measure the identified ABMs. Relevant legislation and work instructions of the Dutch Regulatory Authority (RA) were also scanned for applied ABMs. Applications of sensor technology in a research setting, on farm or at the slaughterhouse were reported for 10 of the 37 ABMs identified for poultry, 4 of 32 for cattle and 13 of 41 for pigs. Several applications relate to aspects of meat inspection. However, under European law meat inspection must be performed by an official veterinarian, with exceptions only for the post mortem inspection of poultry. The examples in this study show that there are opportunities for the RA to use sensor technology to support inspection and to gain more insight into animal welfare risks. The lack of external validation for multiple commercially available systems remains a point of attention.
3. CNN-Bi-LSTM: A Complex Environment-Oriented Cattle Behavior Classification Network Based on the Fusion of CNN and Bi-LSTM. Sensors (Basel) 2023; 23:7714. PMID: 37765771; PMCID: PMC10536225; DOI: 10.3390/s23187714.
Abstract
Cattle behavior classification technology holds a crucial position within the realm of smart cattle farming. Addressing the requirements of cattle behavior classification in the agricultural sector, this paper presents a novel cattle behavior classification network tailored for intricate environments, fusing the capabilities of a CNN and a Bi-LSTM. Initially, a data collection method is devised within an authentic farm setting, followed by the delineation of eight fundamental cattle behaviors. VGG16 serves as the backbone of the CNN, extracting spatial feature vectors from each video sequence. These features are then channeled into a Bi-LSTM classification model, adept at extracting semantic insights from temporal data in both directions, ensuring precise recognition and categorization of cattle behaviors. To validate the model's efficacy, ablation experiments, generalization assessments, and comparative analyses under consistent experimental conditions were performed, involving module replacements within the classification model. The self-constructed cattle dataset is evaluated using cross-entropy loss, assessing the model's generalization across diverse subjects and viewing perspectives; classification accuracy is quantified with a confusion matrix. Furthermore, comparison experiments against three pertinent deep learning models (Mask R-CNN, CNN-LSTM, and EfficientNet-LSTM) substantiate the superiority of the proposed model.
Empirical results underscore the CNN-Bi-LSTM model's performance: 94.3% accuracy, 94.2% precision, and 93.4% recall while navigating challenges such as varying light conditions, occlusions, and environmental influences. The objective of this study is to employ a fusion of CNN and Bi-LSTM to autonomously extract features from multimodal data, thereby addressing the challenge of classifying cattle behaviors within intricate scenes. By surpassing the constraints of conventional methodologies and single-sensor data analysis, this approach seeks to enhance the precision and generalizability of cattle behavior classification, with considerable practical, economic, and societal implications for the agricultural sector.
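Editor's note: the bidirectional reading this architecture relies on can be sketched in a few lines. The toy below runs an untrained tanh RNN (a stand-in for the paper's LSTM cells) forward and backward over per-frame feature vectors and concatenates the states, then scores eight behaviour classes with a softmax head. All dimensions, weights, and the simple-RNN substitution are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(xs, Wx, Wh, b):
    """Run a simple tanh RNN over a sequence of feature vectors,
    returning the hidden state at every step."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return np.stack(states)

def bidirectional_features(xs, fwd_params, bwd_params):
    """Concatenate forward and backward states per frame, mirroring how a
    Bi-LSTM reads temporal context in both directions."""
    fwd = rnn_pass(xs, *fwd_params)
    bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]
    return np.concatenate([fwd, bwd], axis=1)

T, feat_dim, hidden = 16, 512, 64           # 16 frames of CNN features (toy sizes)
xs = rng.normal(size=(T, feat_dim))         # stand-in for VGG16 frame features

def init_params():
    return (rng.normal(scale=0.01, size=(hidden, feat_dim)),
            rng.normal(scale=0.01, size=(hidden, hidden)),
            np.zeros(hidden))

H = bidirectional_features(xs, init_params(), init_params())

# A linear softmax head over the mean-pooled states scores the 8 behaviours.
W_out = rng.normal(scale=0.01, size=(8, 2 * hidden))
logits = W_out @ H.mean(axis=0)
probs = np.exp(logits) / np.exp(logits).sum()
print(H.shape, probs.shape)  # (16, 128) (8,)
```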
4. Spatio-Temporal-Based Identification of Aggressive Behavior in Group Sheep. Animals (Basel) 2023; 13:2636. PMID: 37627427; PMCID: PMC10451720; DOI: 10.3390/ani13162636.
Abstract
To address the low efficiency and subjectivity of manual observation in detecting aggression among group-housed sheep, we propose a video-stream-based model for detecting aggressive behavior in group sheep. In the experiment, we collected videos of the sheep's daily routine and of aggressive behavior in the sheep pen, and labeled the data with bounding boxes using the open-source software LabelImg. Firstly, YOLOv5 detects all sheep in each video frame and outputs their coordinates. Secondly, we order the sheep's coordinates using a sheep-tracking heuristic proposed in this paper. Finally, the ordered data are fed into an LSTM framework to predict the occurrence of aggression. To optimize the model's parameters, we analyze the confidence threshold, batch size and frame-skipping interval. The best-performing model from our experiments achieves 93.38% precision and 91.86% recall. Additionally, we compare our video-stream-based model with image-based models for detecting aggression in group sheep: the video-stream model avoids the false detections that arise in image-based models when the head-impact features of aggressive sheep are occluded.
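Editor's note: the abstract does not spell out the tracking heuristic, but one plausible minimal version is greedy nearest-neighbour ordering, so that detection index i keeps referring to the same animal across frames before the sequence is handed to the LSTM. The sketch below is an assumption in that spirit, not the authors' code.

```python
import numpy as np

def sort_by_previous(prev, curr):
    """Greedy nearest-neighbour ordering: reorder current-frame box centres so
    that index i keeps referring to the animal it denoted in the previous frame."""
    remaining = [np.asarray(c, dtype=float) for c in curr]
    ordered = []
    for p in prev:
        dists = [np.hypot(c[0] - p[0], c[1] - p[1]) for c in remaining]
        ordered.append(remaining.pop(int(np.argmin(dists))))
    return np.array(ordered)

frame1 = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 20.0]])  # sheep A, B, C
frame2 = np.array([[52.0, 48.0], [88.0, 22.0], [12.0, 11.0]])  # same sheep, shuffled
print(sort_by_previous(frame1, frame2))  # rows come back in A, B, C order
```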
5. Scientific Productions on Precision Livestock Farming: An Overview of the Evolution and Current State of Research Based on a Bibliometric Analysis. Animals (Basel) 2023; 13:2280. PMID: 37508057; PMCID: PMC10376211; DOI: 10.3390/ani13142280.
Abstract
Interest in precision livestock farming (PLF), a concept first discussed in the early 2000s, has advanced considerably in recent years due to its important role in the development of sustainable livestock production systems. However, a comprehensive bibliometric analysis of the PLF literature is lacking. To address this gap, this study analyzed documents published from 2005 to 2021, aiming to understand the historical influences on technology adoption in livestock farming, identify future global trends, and examine shifts in scientific research on this topic. Using specific search terms in the Web of Science Core Collection, 886 publications were identified and analyzed with the bibliometrix R-package. The analysis revealed that the collection consisted mostly of research articles (74.6%) and reviews (10.4%). The top three core journals were the Journal of Dairy Science, Computers and Electronics in Agriculture, and Animals. Over time, the number of publications has steadily increased, with a higher growth rate in the last five years (29.0%) compared to the initial period (13.7%). Authors and institutions from multiple countries have contributed to the literature, with the USA, the Netherlands, and Italy leading in publication numbers. The analysis also highlighted the growing interest in bovine production systems, emphasizing the importance of behavioral studies in PLF tool development. Automated milking systems were identified as central drivers of innovation in the PLF sector. Emerging themes for the future included "emissions" and "mitigation", indicating a focus on environmental concerns.
6. Automatic Detection of Group Recumbency in Pigs via AI-Supported Camera Systems. Animals (Basel) 2023; 13:2205. PMID: 37444003; DOI: 10.3390/ani13132205.
Abstract
The resting behavior of rearing pigs provides information about their perception of the current temperature: a pen that is too cold or too warm can impact the well-being of the animals as well as their physical development. Previous studies that automatically recorded animal behavior often relied on body posture; however, this method is error-prone because hidden animals (so-called false positives) strongly influence the results. In the present study, a method was developed for the automated identification of time periods in which all pigs are lying down, using video recordings from an AI-supported camera system. We used velocity data (measured by the camera) of pigs in the pen to identify these periods. To determine the threshold value for images with the highest probability of containing only recumbent pigs, a dataset with 9634 images and velocity values was used. The resulting velocity threshold (0.0006020622 m/s) yielded an accuracy of 94.1%. Analysis of the testing dataset confirmed that recumbent pigs were correctly identified based on velocity values derived from video recordings. This represents an advance from the previous manual method toward automated detection.
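Editor's note: the thresholding step described above is simple to sketch. The velocity threshold below is the value reported in the abstract; the run-length grouping and the `min_frames` parameter are our assumptions about how per-frame decisions might be aggregated into periods.

```python
def all_lying(frame_velocities, threshold=0.0006020622):
    """True when every measured velocity (m/s) in a frame falls below the
    threshold reported in the study."""
    return all(v < threshold for v in frame_velocities)

def lying_periods(per_frame_velocities, min_frames=3):
    """Return (start, end) frame-index runs in which all pigs are recumbent."""
    runs, start = [], None
    for i, frame in enumerate(per_frame_velocities):
        if all_lying(frame):
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_frames:
                runs.append((start, i))
            start = None
    if start is not None and len(per_frame_velocities) - start >= min_frames:
        runs.append((start, len(per_frame_velocities)))
    return runs

# four quiet frames, one frame with a moving pig, three quiet frames
frames = [[0.0001, 0.0002]] * 4 + [[0.01, 0.0001]] + [[0.0003, 0.0004]] * 3
print(lying_periods(frames))  # → [(0, 4), (5, 8)]
```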
7. Grazing Sheep Behaviour Recognition Based on Improved YOLOV5. Sensors (Basel) 2023; 23:4752. PMID: 37430666; DOI: 10.3390/s23104752.
Abstract
Fundamental sheep behaviours, for instance walking, standing, and lying, can be closely associated with their physiological health. However, monitoring sheep on grazing land is difficult: limited range, variable weather, and diverse outdoor lighting, together with the need to accurately recognise sheep behaviour in free-range situations, are critical problems that must be addressed. This study proposes an enhanced sheep behaviour recognition algorithm based on the You Only Look Once Version 5 (YOLOV5) model. The study investigates the effect of different shooting methodologies on sheep behaviour recognition and the model's generalisation ability under different environmental conditions, and also outlines the design of a real-time recognition system. The initial stage involved constructing sheep behaviour datasets using two shooting methods. The YOLOV5 model was then trained, performing well on the corresponding datasets with an average accuracy of over 90% for the three classes. Next, cross-validation was employed to verify the model's generalisation ability; the results indicated that the handheld-camera-trained model generalised better. Furthermore, the enhanced YOLOV5 model, with an attention mechanism module added before feature extraction, achieved a mAP@0.5 of 91.8%, an increase of 1.7%. Lastly, a cloud-based structure using the Real-Time Messaging Protocol (RTMP) was proposed to push the video stream for real-time behaviour recognition, applying the model in a practical situation. In conclusion, this study proposes an improved YOLOV5 algorithm for sheep behaviour recognition in pasture scenarios. The model can effectively detect sheep's daily behaviour for precision livestock management, promoting the development of modern husbandry.
8
|
Mapping Welfare: Location Determining Techniques and Their Potential for Managing Cattle Welfare—A Review. DAIRY 2022. [DOI: 10.3390/dairy3040053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/12/2023] Open
Abstract
Several studies have suggested that precision livestock farming (PLF) is a useful tool for animal welfare management and assessment. Location, posture and movement of an individual are key elements in identifying the animal and recording its behaviour. Currently, multiple technologies are available for automated monitoring of the location of individual animals, ranging from Global Navigation Satellite Systems (GNSS) to ultra-wideband (UWB), RFID, wireless sensor networks (WSN) and even computer vision. These techniques and developments all have potential for managing and assessing animal welfare, but also have constraints, such as range and accuracy. Combining sensors such as accelerometers with any location-determining technique in a sensor fusion system can give more detailed information on the individual cow, achieving an even more reliable and accurate indication of animal welfare. We conclude that location systems are a promising approach to determining animal welfare, especially when applied in conjunction with additional sensors, but additional research focused on the use of technology in animal welfare monitoring is needed.
9. A Review of Monitoring Techniques for Livestock Respiration and Sounds. Frontiers in Animal Science 2022. DOI: 10.3389/fanim.2022.904834.
Abstract
This article reviews the different techniques used to monitor the respiration and sounds of livestock. Livestock respiration is commonly assessed visually by observing abdomen fluctuation; however, these traditional methods are time-consuming and subjective, making them impractical for large-scale operations, which must therefore rely on automation. Contact and non-contact technologies are used to automatically monitor respiration rate: contact technologies (e.g., accelerometers, pressure sensors, and thermistors) use sensors physically mounted on the livestock, while non-contact technologies (e.g., computer vision, thermography, and sound analysis) enable non-invasive monitoring of respiration. This work summarizes the advantages and disadvantages of contact and non-contact technologies and discusses the emerging role of non-contact sensors in automating monitoring for large-scale farming operations. It is the first in-depth examination of automated monitoring technologies for livestock respiratory diseases; the findings and recommendations are important for livestock researchers and practitioners, who can gain a better understanding of these different technologies, especially emerging non-contact sensing.
10. Detection of Small-Sized Insects in Sticky Trapping Images Using Spectral Residual Model and Machine Learning. Frontiers in Plant Science 2022; 13:915543. PMID: 35837447; PMCID: PMC9274131; DOI: 10.3389/fpls.2022.915543.
Abstract
One fundamental component of integrated pest management (IPM) is field monitoring: growers use information gathered from scouting to choose appropriate control tactics. Whitefly (Bemisia tabaci) and thrips (Frankliniella occidentalis) are the two most prominent pests in greenhouses of northern China. Traditionally, growers estimate the population of these pests by counting insects caught on sticky traps, which is a challenging and extremely time-consuming task. To alleviate this situation, this study proposed an automated detection approach to meet the need for continuous monitoring of pests in greenhouse conditions. Candidate targets were first located using a spectral residual model, and different color features were then extracted. Ultimately, whiteflies and thrips were identified using a support vector machine classifier with an accuracy of 93.9% and 89.9%, a true positive rate of 93.1% and 80.1%, and a false positive rate of 9.9% and 12.3%, respectively. Identification performance was further tested by comparing manual and automatic counting, yielding coefficients of determination (R²) of 0.9785 and 0.9582. The results show that the proposed method provides performance comparable to previous handcrafted-feature-based methods and, unlike deep learning-based methods, does not require high-performance hardware. This study demonstrates the potential of a vision-based identification system to facilitate rapid gathering of information on the numbers of small-sized pests in greenhouse agriculture and to make reliable estimates of overall population density.
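Editor's note: the candidate-location step names the spectral residual model, which is compact enough to sketch. Below is a simplified single-scale version (no image downscaling and no final Gaussian blur, both of which the standard formulation uses): whiten the log-amplitude spectrum by subtracting its local average, then invert with the original phase. All sizes and the synthetic test image are illustrative.

```python
import numpy as np

def spectral_residual_saliency(img, k=3):
    """Single-scale spectral residual saliency: subtract a k x k box average
    from the log-amplitude spectrum, then invert with the original phase."""
    F = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(F))          # log1p avoids log(0)
    phase = np.angle(F)
    pad = k // 2
    padded = np.pad(log_amp, pad, mode='edge')
    h, w = img.shape
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(k) for j in range(k)) / k ** 2
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    return sal / sal.max()

# A dark trap image with one small bright "insect": the spot should dominate.
img = np.full((64, 64), 0.1)
img[30:33, 40:43] = 1.0
sal = spectral_residual_saliency(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
print(peak)
```

In the full pipeline, thresholding such a saliency map yields the candidate regions from which color features are extracted for the SVM.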
11. Automatic Weight Prediction System for Korean Cattle Using Bayesian Ridge Algorithm on RGB-D Image. Electronics 2022. DOI: 10.3390/electronics11101663.
Abstract
Weighing Hanwoo (Korean cattle) is very important for Korean beef producers in order to sell the animals at the right time. Recently, building on advances in deep learning and image recognition, research has turned to predicting Hanwoo weight automatically from images alone. In this paper, we propose a method for automatic weight prediction of Hanwoo using the Bayesian ridge algorithm on RGB-D images. The proposed system consists of three parts: segmentation, feature extraction, and weight estimation from a given RGB-D image. The first step segments the Hanwoo area from a given RGB-D image using depth information and color information separately, then combines them for optimal segmentation; posture is additionally corrected by ellipse fitting on the segmented body image. The second step extracts three feature groups for weight prediction from the segmented image: size, shape, and gradients. The third step finds the optimal machine learning model by comparing eight well-known models, with the aim of finding an efficient, lightweight model that can run on an embedded system in the field. To evaluate the performance of the proposed weight prediction system, we collected 353 RGB-D images from livestock farms in Wonju, Gangwon-do, Korea. In the experimental results, random forest performed best and the Bayesian ridge model was second best in MSE and coefficient of determination; however, we argue that the Bayesian ridge model is optimal when time and space complexity are considered. The proposed system is expected to be readily usable for determining the shipping time of Hanwoo on working farms via a portable commercial device.
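Editor's note: the regression at the heart of this system has a closed form. The sketch below is a minimal Bayesian ridge with fixed hyperparameters `alpha` (prior precision) and `beta` (noise precision); the full algorithm re-estimates these from the data. The features and true weights are synthetic stand-ins for the paper's size/shape/gradient features, not its actual values.

```python
import numpy as np

def bayesian_ridge_fit(X, y, alpha=1.0, beta=25.0):
    """Posterior mean and covariance of the weights under a Gaussian prior
    N(0, alpha^-1 I) and Gaussian observation noise of precision beta."""
    S = np.linalg.inv(alpha * np.eye(X.shape[1]) + beta * X.T @ X)
    m = beta * S @ X.T @ y
    return m, S

def predict(X, m):
    return X @ m

rng = np.random.default_rng(1)
# Synthetic stand-ins for the three feature groups: size, shape, gradients.
X = rng.normal(size=(353, 3))
true_w = np.array([120.0, 35.0, -8.0])    # hypothetical per-feature weights (kg)
y = X @ true_w + rng.normal(scale=2.0, size=353)

m, S = bayesian_ridge_fit(X, y)
rmse = float(np.sqrt(np.mean((predict(X, m) - y) ** 2)))
print(np.round(m, 1), round(rmse, 2))
```

The closed form is one reason the model suits embedded deployment: fitting is a single small matrix inversion rather than an iterative ensemble like random forest.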
12. Research on the lying pattern of grouped pigs using unsupervised clustering and deep learning. Livest Sci 2022. DOI: 10.1016/j.livsci.2022.104946.
13. Applications of Smart Technology as a Sustainable Strategy in Modern Swine Farming. Sustainability 2022. DOI: 10.3390/su14052607.
Abstract
The global pork market is growing to meet the demand for animal protein, resulting in larger swine farms and creating great challenges for farmers and industry owners in monitoring farm activities and the health and behavior of the herd. In addition, the growth of swine production contributes to climate change and raises environmental, animal welfare, and human health issues such as antimicrobial resistance and zoonosis. The profit of swine farms depends on the optimum growth and good health of the swine, which modern farming practices can help ensure. To address these issues, a future strategy should consider information and communication technology (ICT)-based smart swine farming, covering auto-identification, remote monitoring, feeding behavior, animal rights/welfare, zoonotic diseases, nutrition and food quality, labor management, and farm operations, with a view to improving meat production from the swine industry. Presently, swine farming is focused not only on the development of infrastructure but also on applying technological knowledge to design feeding programs, monitor health and welfare, and manage herd reproduction. ICT-based smart technologies, including smart ear tags, smart sensors, the Internet of Things (IoT), deep learning, big data, and robotics systems, can take part directly in farm operations and have proven to be effective tools for collecting, processing, and analyzing farm data. In this review of the beneficial role of smart technologies in swine farming, we suggest that smart technologies should be applied in the swine industry. The future swine industry should thus be automated, with sustainability and productivity in mind.
14. Computer Vision for Detection of Body Posture and Behavior of Red Foxes. Animals (Basel) 2022; 12:233. PMID: 35158557; PMCID: PMC8833490; DOI: 10.3390/ani12030233.
Abstract
The behavior of animals is related to their health and welfare status. The latter plays a particular role in animal experiments, where continuous monitoring is essential for animal welfare. In this study, we focus on red foxes in an experimental setting and study their behavior. Although animal behavior is a complex concept, it can be described as a combination of body posture and activity. To measure body posture and activity, video monitoring can be used as a non-invasive and cost-efficient tool. While it is possible to analyze the video data resulting from the experiment manually, this method is time consuming and costly. We therefore use computer vision to detect and track the animals over several days. The detector is based on a neural network architecture. It is trained to detect red foxes and their body postures, i.e., ‘lying’, ‘sitting’, and ‘standing’. The trained algorithm has a mean average precision of 99.91%. The combination of activity and posture results in nearly continuous monitoring of animal behavior. Furthermore, the detector is suitable for real-time evaluation. In conclusion, evaluating the behavior of foxes in an experimental setting using computer vision is a powerful tool for cost-efficient real-time monitoring.
15. Automated Individual Cattle Identification Using Video Data: A Unified Deep Learning Architecture Approach. Frontiers in Animal Science 2021. DOI: 10.3389/fanim.2021.759147.
Abstract
Individual cattle identification is a prerequisite and foundation for precision livestock farming. Existing methods require radio frequency or visual ear tags, all of which are prone to loss or damage. Here, we propose and implement a new unified deep learning approach to cattle identification using video analysis. The proposed framework is composed of a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network with a self-attention mechanism. More specifically, the Inception-V3 CNN was used to extract features from a rear-view cattle video dataset taken in a feedlot. Extracted features were fed to a BiLSTM layer to capture spatio-temporal information, and self-attention then re-weighted the BiLSTM features for the final identification step. We used a total of 363 rear-view videos from 50 cattle, collected at three different times with an interval of 1 month between data collection periods. The proposed method achieved 93.3% identification accuracy using a 30-frame video length, outperforming current state-of-the-art methods (Inception-V3, MLP, SimpleRNN, LSTM, and BiLSTM). Furthermore, two attention schemes, additive and multiplicative, were compared: the additive attention mechanism achieved 93.3% accuracy and 91.0% recall, versus 90.7% accuracy and 87.0% recall for the multiplicative mechanism. Video length also affected accuracy, with sequences of up to 30 frames enhancing identification performance. Overall, our approach captures key spatio-temporal features to improve cattle identification accuracy, enabling automated cattle identification for precision livestock farming.
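Editor's note: the additive attention that won the comparison above reduces to a short computation: score each timestep with a small feed-forward network, softmax the scores, and pool the states by those weights. The toy below uses untrained random weights and illustrative dimensions; it shows the mechanism, not the paper's trained model.

```python
import numpy as np

def additive_attention_pool(H, W, v):
    """Additive (Bahdanau-style) attention: score each timestep with
    v . tanh(W h_t), softmax the scores, return the weighted sum of states."""
    scores = np.tanh(H @ W.T) @ v                  # one score per frame, (T,)
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()
    return weights @ H, weights

rng = np.random.default_rng(0)
T, d, att = 30, 128, 32               # 30 frames of BiLSTM features (toy sizes)
H = rng.normal(size=(T, d))
W = rng.normal(scale=0.1, size=(att, d))
v = rng.normal(scale=0.1, size=att)

context, weights = additive_attention_pool(H, W, v)
print(context.shape, round(float(weights.sum()), 6))  # (128,) 1.0
```

The multiplicative variant compared in the paper differs only in the scoring function (a dot product against a learned query rather than `v . tanh(W h_t)`).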
16.
17. Deep-Learning-Based Automatic Monitoring of Pigs' Physico-Temporal Activities at Different Greenhouse Gas Concentrations. Animals (Basel) 2021; 11:3089. PMID: 34827821; PMCID: PMC8614322; DOI: 10.3390/ani11113089.
Abstract
Pig behavior is an integral part of health and welfare management, as pigs usually reflect their inner state through behavioral change. The livestock environment plays a key role in pigs' health and wellbeing: a poor farm environment increases toxic GHGs, which may deteriorate pigs' health and welfare. In this study, a computer-vision-based automatic monitoring and tracking model was proposed to detect pigs' short-term physical activities in a compromised environment. The ventilators of the livestock barn were closed for an hour, three times a day (07:00-08:00, 13:00-14:00, and 20:00-21:00), to create a compromised environment that significantly increased GHG levels. The corresponding pig activities were observed before, during, and after each hour of treatment. Two widely used object detection models (YOLOv4 and Faster R-CNN) were trained, and their performances in pig localization and posture detection compared. YOLOv4, which outperformed Faster R-CNN, was coupled with a Deep-SORT tracking algorithm to detect and track pig activities. The results revealed that the pigs became more inactive as GHG concentration increased, reducing their standing and walking activities. Moreover, the pigs shortened their sternal-lying posture and increased their lateral-lying duration at higher GHG concentrations. The high detection accuracy (mAP: 98.67%) and tracking accuracy (MOTA: 93.86% and MOTP: 82.41%) signify the models' efficacy in non-invasively monitoring and tracking pigs' physical activities.
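Editor's note: the MOTA and MOTP scores quoted above follow the standard CLEAR-MOT definitions, which are worth stating explicitly. The counts in the example are hypothetical, chosen only to land near the reported figures; they are not the paper's actual tallies.

```python
def mota(misses, false_positives, id_switches, num_gt):
    """Multiple Object Tracking Accuracy: 1 minus the rate of misses,
    false positives, and identity switches over ground-truth objects."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def motp(total_overlap, num_matches):
    """Multiple Object Tracking Precision: mean localisation overlap
    (e.g. IoU) over all matched detections."""
    return total_overlap / num_matches

# Hypothetical counts over a test sequence (not the paper's actual tallies).
print(round(mota(20, 35, 6, 1000), 3))   # → 0.939
print(round(motp(780.5, 947), 3))        # → 0.824
```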
18. Changes in tail posture detected by a 3D machine vision system are associated with injury from damaging behaviours and ill health on commercial pig farms. PLoS One 2021; 16:e0258895. PMID: 34710143; PMCID: PMC8553069; DOI: 10.1371/journal.pone.0258895.
Abstract
To establish whether pig tail posture is affected by injuries and ill health, a machine vision system using 3D cameras to measure tail angle was used. Camera data from 1692 pigs in 41 production batches of 42.4 (±16.6) days in length, collected over 17 months at seven diverse grower/finisher commercial pig farms, were validated by visiting farms every 14 (±10) days to score injury and ill health. Linear modelling of tail posture found considerable farm and batch effects. The percentage of tails held low (0°) or mid (1-45°) decreased over time from 54.9% and 23.8%, respectively, by -0.16 and -0.05%/day, while high tails (45-90°) increased from 21.5% by 0.20%/day. Although 22% of scored pigs had scratched tails, severe tail biting was rare; only 6% had tail wounds and 5% partial tail loss. Adding tail injury to the models showed associations with tail posture: overall tail injury, worsening tail injury, and tail loss were associated with more pigs detected with low tails and fewer with high tails. Minor tail injuries and tail swelling were also associated with altered tail posture. Unexpectedly, other health and injury scores had a larger effect on tail posture: more low tails were observed when a greater proportion of pigs in a pen were scored with lameness or with lesions caused by social aggression, and ear injuries were linked with fewer high tails. These findings are consistent with the idea that low tail posture could be a general indicator of poor welfare, although the effects of flank biting and ocular discharge on tail posture were not consistent with this. Our results show for the first time that perturbations in the normal time trends of tail posture are associated with tail biting and other signs of adverse health and welfare on diverse commercial farms, forming the basis for a decision support system.
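Editor's note: the low/mid/high percentages tracked above come from binning each measured tail angle. A minimal sketch follows; placing 45° in "mid" is our assumption, since the ranges reported in the abstract (1-45° and 45-90°) overlap at that boundary.

```python
def tail_posture_bin(angle_deg):
    """Bin a measured tail angle into the study's three posture classes;
    the treatment of exactly 45 degrees is an assumption (see note above)."""
    if angle_deg <= 0:
        return "low"       # tail hanging down (0 degrees)
    if angle_deg <= 45:
        return "mid"       # 1-45 degrees
    return "high"          # 45-90 degrees

def posture_percentages(angles):
    """Per-pen percentages of each posture class, as modelled over time."""
    bins = [tail_posture_bin(a) for a in angles]
    return {k: 100.0 * bins.count(k) / len(bins) for k in ("low", "mid", "high")}

print(posture_percentages([0, 0, 10, 30, 60, 80, 0, 50]))
# → {'low': 37.5, 'mid': 25.0, 'high': 37.5}
```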
Collapse
|
19
|
Intelligent Perception-Based Cattle Lameness Detection and Behaviour Recognition: A Review. Animals (Basel) 2021; 11:ani11113033. [PMID: 34827766 PMCID: PMC8614286 DOI: 10.3390/ani11113033] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 10/14/2021] [Accepted: 10/20/2021] [Indexed: 01/22/2023] Open
Abstract
Simple Summary Cattle lameness detection and behaviour recognition are the two main objectives in applications of precision livestock farming (PLF). Over the last five years, the development of smart sensors, big data, and artificial intelligence has offered more automatic tools. In this review, we discuss over 100 papers that used automated techniques to detect cattle lameness and to recognise animal behaviours. To assist researchers and policy-makers in promoting various livestock technologies for monitoring cattle welfare and productivity, we conducted a comprehensive investigation of intelligent perception for cattle lameness detection and behaviour analysis in the PLF domain. Based on the literature review, we anticipate that PLF will develop in an objective, autonomous, and real-time direction. Additionally, we suggest that further research should be dedicated to improving data quality, modelling accuracy, and commercial availability. Abstract The growing world population has increased the demand for animal-sourced protein. However, animal farming productivity is faced with challenges from traditional farming practices, socioeconomic status, and climate change. In recent years, smart sensors, big data, and deep learning have been applied to animal welfare measurement and livestock farming applications, including behaviour recognition and health monitoring. In order to facilitate research in this area, this review summarises and analyses the main techniques used in smart livestock farming, focusing on those related to cattle lameness detection and behaviour recognition. In this study, more than 100 relevant papers on cattle lameness detection and behaviour recognition have been evaluated and discussed. Based on a review and comparison of recent technologies and methods, we anticipate that intelligent perception for cattle behaviour and welfare monitoring will develop towards standardisation, a larger scale, and intelligence, combined with Internet of Things (IoT) and deep learning technologies. In addition, the key challenges and opportunities of future research are also highlighted and discussed.
Collapse
|
20
|
Interactive Rooting Towers and Behavioural Observations as Strategies to Reduce Tail Biting on Conventional Pig Fattening Farms. Animals (Basel) 2021; 11:ani11113025. [PMID: 34827758 PMCID: PMC8614303 DOI: 10.3390/ani11113025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Revised: 09/20/2021] [Accepted: 10/18/2021] [Indexed: 11/16/2022] Open
Abstract
Eight pens (25 pigs/pen; n = 200) provided with an interactive straw-filled rooting tower (experimental group) and five pens (25 pigs/pen; n = 125) with a stationary (fixed) tower without straw (control group) were compared over three fattening periods on a conventional farm with fully slatted flooring. The effectiveness of the tower in triggering favourable behaviour during feeding and non-feeding periods was assessed. The incidence of deep tail injuries was lower in the experimental group (odds ratio: 0.3, p < 0.001) and was influenced by batch (odds ratio: 2.38, p < 0.001) but not by pen or sex. In spring, most pens were excluded due to severe tail biting. Tail injury scores were more severe in the control group in weeks 5, 6 and 7 compared to the experimental group (p = 0.002, p < 0.001, p < 0.001, respectively). Tower manipulation was more frequent during feeding than outside feeding time (p = 0.002). More head than tail manipulation occurred in the experimental group (p = 0.03). The interactive tower as the sole measure was not sufficient to reduce tail biting in pigs with intact tails on a conventional fattening farm; the early detection of biting pigs was of the highest priority for preventing tail-biting outbreaks.
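The odds ratios reported above come from 2x2 comparisons of injury counts between groups. A minimal sketch of the arithmetic follows; the cell counts are hypothetical, chosen only to reproduce an odds ratio near the reported 0.3, and are not the study's raw data.

```python
def odds_ratio(exposed_cases, exposed_noncases, control_cases, control_noncases):
    """Odds ratio: odds of injury in the exposed (experimental) group
    divided by the odds of injury in the control group."""
    return (exposed_cases / exposed_noncases) / (control_cases / control_noncases)

# Hypothetical 2x2 table: deep tail injuries in pens with the straw tower
# (experimental) vs the fixed, strawless control tower.
or_tower = odds_ratio(12, 188, 22, 103)  # about 0.3
```

An odds ratio below 1 means the injury is less likely in the experimental group; here the (made-up) counts give roughly the 0.3 reported for the rooting-tower pens.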
Collapse
|
21
|
The Application of Cameras in Precision Pig Farming: An Overview for Swine-Keeping Professionals. Animals (Basel) 2021; 11:ani11082343. [PMID: 34438800 PMCID: PMC8388688 DOI: 10.3390/ani11082343] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/19/2021] [Accepted: 08/06/2021] [Indexed: 01/06/2023] Open
Abstract
Simple Summary The preeminent purpose of precision livestock farming (PLF) is to provide affordable and straightforward solutions to severe problems with certainty. Some data collection techniques in PLF such as RFID are accurate but not affordable for small- and medium-sized farms. On the other hand, camera sensors are cheap, commonly available, and easily used to collect information compared to other sensor systems in precision pig farming. Cameras have ample chance to monitor pigs with high precision at an affordable cost. However, the lack of targeted information about the application of cameras in the pig industry is a shortcoming for swine farmers and researchers. This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors, and presents automated approaches for monitoring and investigating pigs’ feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors. In addition, the review summarizes the related literature and points out limitations to open up new dimensions for future researchers to explore. Abstract Pork is the meat with the second-largest overall consumption, and chicken, pork, and beef together account for 92% of global meat production. Therefore, it is necessary to adopt more progressive methodologies such as precision livestock farming (PLF) rather than conventional methods to improve production. In recent years, image-based studies have become an efficient solution in various fields such as navigation for unmanned vehicles, human–machine-based systems, agricultural surveying, livestock, etc. So far, several studies have been conducted to identify, track, and classify the behaviors of pigs and achieve early detection of disease, using 2D/3D cameras. 
This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors and presents automated approaches for the monitoring and investigation of pigs’ feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors.
Collapse
|
22
|
Application of YOLOv4 for Detection and Motion Monitoring of Red Foxes. Animals (Basel) 2021; 11:ani11061723. [PMID: 34207726 PMCID: PMC8228056 DOI: 10.3390/ani11061723] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 05/31/2021] [Accepted: 06/04/2021] [Indexed: 11/16/2022] Open
Abstract
Simple Summary The use of surveillance videos of animals is an important method for monitoring them, as animals often behave differently in the presence of humans. Moreover, the presence of humans can be a source of stress for the animals and can lead to changes in behavior. Extensive video material of red foxes has been recorded as part of a vaccine study. Since manual analysis of videos is both time-consuming and costly, we performed an analysis using a computer vision application in the present study. This made it possible to automatically analyze the videos and monitor animal activity and residency patterns without human interference. In this study, we used the computer vision architecture ‘you only look once’ version 4 (YOLOv4) to detect foxes and monitor their movement and, thus, their activity. Computer vision thereby outperforms manual and sensor-based exhaustive monitoring of the animals. Abstract Animal activity is an indicator of welfare, and manual observation is time- and cost-intensive. To this end, automatic detection and monitoring of live captive animals is of major importance for assessing animal activity and, thereby, allowing for early recognition of changes indicative of diseases and animal welfare issues. We demonstrate that machine learning methods can provide gap-less monitoring of red foxes in an experimental lab setting, including a classification into activity patterns. Bounding boxes are used to measure fox movements and, thus, the activity level of the animals. We use computer vision, a non-invasive method, for the automatic monitoring of foxes. More specifically, we train the existing algorithm ‘you only look once’ version 4 (YOLOv4) to detect foxes, and the trained classifier is applied to video data of an experiment involving foxes. As we show, computer evaluation outperforms other evaluation methods. Automatic detection of foxes can be used to detect different movement patterns. These, in turn, can be used for animal behavioral analysis and, thus, animal welfare monitoring. Once established for a specific animal species, such systems could be used for animal monitoring in real time under experimental conditions, or in other areas of animal husbandry.
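A simple activity proxy of the kind described, derived from per-frame detector bounding boxes, is the total displacement of the box centroid along a track. The boxes below are made up for illustration; a real pipeline would take them from YOLOv4 output.

```python
import math

def centroid(box):
    # box = (x1, y1, x2, y2) in pixels
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def path_length(boxes):
    """Total centroid displacement across consecutive detections:
    a minimal per-track activity measure."""
    pts = [centroid(b) for b in boxes]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Hypothetical track of one fox across four frames: move, rest, move.
track = [(0, 0, 10, 10), (3, 4, 13, 14), (3, 4, 13, 14), (6, 8, 16, 18)]
activity = path_length(track)  # 5.0 + 0.0 + 5.0 = 10.0
```

Summing displacements per time window yields an activity time series, which can then be classified into the activity patterns the study mentions.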
Collapse
|
23
|
Welfare Health and Productivity in Commercial Pig Herds. Animals (Basel) 2021; 11:1176. [PMID: 33924224 PMCID: PMC8074599 DOI: 10.3390/ani11041176] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 04/16/2021] [Accepted: 04/17/2021] [Indexed: 12/02/2022] Open
Abstract
In recent years, there have been very dynamic changes in both pork production and pig breeding technology around the world. The general trend of increasing the efficiency of pig production, with reduced employment, requires optimisation and a comprehensive approach to herd management. One of the most important elements on the way to achieving this goal is to maintain animal welfare and health. The health of the pigs on the farm is also a key aspect in production economics. The need to maintain a high health status of pig herds, by reducing the frequency of various disease entities and reducing the need for antimicrobial substances, is part of a broadly understood strategy for managing high-potential herds. Thanks to the use of sensors (cameras, microphones, accelerometers, or radio-frequency identification transponders), the images, sounds, movements, and vital signs of animals are combined through algorithms and analysed for non-invasive monitoring of animals, which allows for early detection of diseases, improves their welfare, and increases the productivity of breeding. Automated, innovative early warning systems based on continuous monitoring of specific physiological (e.g., body temperature) and behavioural parameters can provide an alternative to direct diagnosis and visual assessment by the veterinarian or the herd keeper.
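A minimal early-warning rule of the kind described, flagging deviations of a physiological parameter from its rolling baseline, might look like the sketch below. The window, threshold, and temperature values are illustrative assumptions, not values from the study.

```python
def alerts(temps, baseline_window=5, threshold=0.5):
    """Flag indices where body temperature exceeds the mean of the previous
    `baseline_window` readings by more than `threshold` degrees."""
    flagged = []
    for i in range(baseline_window, len(temps)):
        baseline = sum(temps[i - baseline_window:i]) / baseline_window
        if temps[i] - baseline > threshold:
            flagged.append(i)
    return flagged

# Hypothetical hourly pig body temperatures (deg C): stable, then a spike.
series = [38.6, 38.7, 38.6, 38.5, 38.6, 38.6, 39.4, 39.6]
flagged = alerts(series)  # [6, 7]
```

Production systems use far richer models, but the principle — compare each new reading against the animal's own recent baseline rather than a fixed limit — is the same.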
Collapse
|
24
|
ASAS-NANP SYMPOSIUM: Applications of machine learning for livestock body weight prediction from digital images. J Anim Sci 2021; 99:6149204. [PMID: 33626149 PMCID: PMC7904040 DOI: 10.1093/jas/skab022] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 01/25/2021] [Indexed: 01/01/2023] Open
Abstract
Monitoring, recording, and predicting livestock body weight (BW) allows for timely intervention in diets and health, greater efficiency in genetic selection, and identification of optimal times to market animals, because animals that have already reached the point of slaughter represent a burden for the feedlot. There are currently two main approaches (direct and indirect) to measuring BW in livestock. Direct approaches include partial-weight or full-weight industrial scales placed in designated locations on large farms that passively or dynamically measure the weight of livestock. While these devices are very accurate, the costs of their acquisition (scaled to their intended purpose and operation size) and of the repeated calibration and maintenance required by their placement in environments with high temperature variability and corrosive conditions are significant, and beyond the affordability and sustainability limits of small and medium-size farms and even of commercial operators. As a more affordable alternative to direct weighing approaches, indirect approaches have been developed based on observed or inferred relationships between biometric and morphometric measurements of livestock and their BW. Initial indirect approaches involved manual measurements of animals using measuring tapes and tubes and the use of regression equations able to correlate such measurements with BW. While such approaches have good BW prediction accuracies, they are time-consuming, require trained and skilled farm laborers, and can be stressful for both animals and handlers, especially when repeated daily. With the concomitant advancement of contactless electro-optical sensors (e.g., 2D, 3D, infrared cameras), computer vision (CV) technologies, and artificial intelligence fields such as machine learning (ML) and deep learning (DL), 2D and 3D images have started to be used as biometric and morphometric proxies for BW estimation.
This manuscript provides a review of CV-based and ML/DL-based BW prediction methods and discusses their strengths, weaknesses, and industry applicability potential.
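At its simplest, the indirect, image-based approach amounts to regressing BW on image-derived morphometric features. The sketch below uses synthetic features and a known generating rule purely to show the fitting step; real systems extract the features from 2D/3D images and use richer ML/DL models.

```python
import numpy as np

# Synthetic morphometric proxies (hypothetical units): dorsal area (m^2)
# and body length (m), with BW generated from a known linear rule + noise.
rng = np.random.default_rng(1)
area = rng.uniform(0.5, 1.5, 200)
length = rng.uniform(1.0, 2.0, 200)
bw = 300.0 * area + 150.0 * length + rng.normal(0, 5.0, 200)

# Least-squares fit of BW on the image features, with an intercept column.
X = np.column_stack([area, length, np.ones_like(area)])
coef, *_ = np.linalg.lstsq(X, bw, rcond=None)
```

With enough animals, `coef` recovers the generating coefficients; in practice the interesting work lies in extracting stable area/length/volume proxies from depth images, which the deep-learning methods reviewed here automate.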
Collapse
|
25
|
Deep Learning-Based Cattle Vocal Classification Model and Real-Time Livestock Monitoring System with Noise Filtering. Animals (Basel) 2021; 11:ani11020357. [PMID: 33535390 PMCID: PMC7911430 DOI: 10.3390/ani11020357] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 01/24/2021] [Accepted: 01/27/2021] [Indexed: 11/16/2022] Open
Abstract
The priority placed on animal welfare in the meat industry is increasing the importance of understanding livestock behavior. In this study, we developed a web-based monitoring and recording system based on artificial intelligence analysis for the classification of cattle sounds. The deep learning classification model of the system is a convolutional neural network (CNN) model that takes voice information converted to Mel-frequency cepstral coefficients (MFCCs) as input. The CNN model first achieved an accuracy of 91.38% in recognizing cattle sounds. Further, short-time Fourier transform-based noise filtering was applied to remove background noise, improving the classification model accuracy to 94.18%. Categorized cattle voices were then classified into four classes, and a total of 897 classification records were acquired for the classification model development. A final accuracy of 81.96% was obtained for the model. Our proposed web-based platform, which integrates information from a total of 12 sound sensors, provides cattle vocalization monitoring in real time, enabling farm owners to determine the status of their cattle.
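STFT-based noise filtering of the kind mentioned is typically a form of spectral subtraction: estimate the stationary noise magnitude per frequency bin and subtract it from each frame. The sketch below is a minimal numpy version; the signal, sample rate, tone frequency, and noise level are illustrative assumptions, not the study's setup.

```python
import numpy as np

def stft(x, frame=256, hop=128):
    # Frame the signal with a Hann window and take a one-sided FFT per frame.
    n_frames = (len(x) - frame) // hop + 1
    idx = np.arange(frame)[None, :] + hop * np.arange(n_frames)[:, None]
    return np.fft.rfft(x[idx] * np.hanning(frame), axis=1)

def spectral_subtract(x, noise_clip, frame=256, hop=128):
    """Subtract the mean noise magnitude per bin (estimated from a
    noise-only clip), keeping the noisy signal's phase."""
    noise_mag = np.abs(stft(noise_clip, frame, hop)).mean(axis=0)
    spec = stft(x, frame, hop)
    return np.maximum(np.abs(spec) - noise_mag, 0.0) * np.exp(1j * np.angle(spec))

# Hypothetical example: a 440 Hz "vocalisation" tone buried in white noise,
# with a separate noise-only recording for the noise estimate.
rng = np.random.default_rng(0)
sr = 8000
t = np.arange(2 * sr) / sr
noise_only = rng.normal(0, 0.3, t.size)
noisy = np.sin(2 * np.pi * 440 * t) + rng.normal(0, 0.3, t.size)
cleaned = spectral_subtract(noisy, noise_only)
```

The cleaned spectrogram (or MFCCs computed from it) then feeds the CNN classifier, which is the step the reported accuracy gain from 91.38% to 94.18% attributes to this filtering.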
Collapse
|
26
|
Automatic Assessment of Keel Bone Damage in Laying Hens at the Slaughter Line. Animals (Basel) 2021; 11:ani11010163. [PMID: 33445636 PMCID: PMC7827378 DOI: 10.3390/ani11010163] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 12/18/2020] [Accepted: 12/21/2020] [Indexed: 11/17/2022] Open
Abstract
Simple Summary Keel bone damage (KBD) is very prevalent in commercial laying hen flocks, with a wide range in the proportion of affected hens per flock. It can cause pain, and affected hens have been found to be less mobile. The assessment of this animal welfare indicator provides important feedback for the farmer about flock health and consequently on the need for interventions. However, the assessment of keel bone damage is time-consuming, and prior training is needed in order to gain reliable results. Optical detection methods can be a means to automatically score hens at the slaughter line with high sample sizes and in a standardized way. We developed and validated an automatic 3D camera-based detection system. While it generally underestimates the presence of KBD due to the purely visual assessment and technical constraints, it nevertheless shows good accuracy and high correlation of prevalences with those visually determined by a trained human assessor. Therefore, this system opens up opportunities to better monitor and combat a severe animal welfare problem in the long term. Abstract Keel bone damage (KBD) can be found in all commercial laying hen flocks, with 23% to 69% of hens per flock found to be affected in this study. As KBD may be linked with chronic pain and a decrease in mobility, it is a serious welfare problem. An automatic assessment system at the slaughter line could support the detection of KBD and would have the advantage of standardized and fast scoring with high sample sizes. A 2MP stereo camera combined with an IDS imaging color camera was used for the automatic assessment. A trained human assessor visually scored KBD in defeathered hens during the slaughter process and compared results with further human assessors and automatic recording. In a first step, an algorithm was developed on the basis of assessments of keel status of 2287 hens of different genetics with varying degrees of KBD.
In two optimization steps, performance data were calculated, and flock prevalences were determined and compared between the assessor and the automatic system. The proposed technique finally reached a sensitivity of 0.95, specificity of 0.77, accuracy of 0.86 and precision of 0.81. In the last optimization step, the automatic system scored on average about 10.5 percentage points lower KBD prevalences than the human assessor. However, a proposed change to the scoring system (setting the limit for KBD at 0.5 cm deviation from the straight line) would lower this deviation. We conclude that the developed automatic scoring technique is a reliable and potentially valuable tool for the assessment of KBD.
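The four reported performance figures all follow from a 2x2 confusion matrix. The cell counts below are one hypothetical matrix (per 200 keels) chosen to reproduce the reported values, since the abstract does not give the raw counts.

```python
def performance(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy and precision from a 2x2
    confusion matrix (KBD present = positive class)."""
    return {
        "sensitivity": tp / (tp + fn),          # detected damaged keels
        "specificity": tn / (tn + fp),          # correctly passed intact keels
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "precision": tp / (tp + fp),            # flagged keels truly damaged
    }

# Hypothetical counts consistent with the reported 0.95 / 0.77 / 0.86 / 0.81.
m = performance(tp=95, fn=5, tn=77, fp=23)
```

Note the asymmetry: with high sensitivity but lower specificity, the system over-flags intact keels, which is visible in the precision of about 0.81.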
Collapse
|
27
|
Description of Behavioral Patterns Displayed by a Recently Weaned Cohort of Healthy Dairy Calves. Animals (Basel) 2020; 10:ani10122452. [PMID: 33371394 PMCID: PMC7767454 DOI: 10.3390/ani10122452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Revised: 12/12/2020] [Accepted: 12/17/2020] [Indexed: 11/18/2022] Open
Abstract
Simple Summary Modern technology has allowed researchers to track the movement patterns of cattle with increasing accuracy in order to gain a greater understanding of both overt and subtle activity trends. The aim of this study was to describe and analyze movement patterns displayed by recently weaned and healthy dairy calves. Three movement pattern clusters were identified, and calves in this study were more active in the afternoon and at night. There was a correlation between the rate of movement, linearity ratio, and the distance traveled. However, turning angles did not have any influence on the distance traveled or the rate of movement across the three cluster-type movements. The findings reported in this study could be used to further develop the interpretation of movement and behavior patterns of calves in order to establish an early detection system for poor health and welfare on dairy farms. Abstract Animals display movement patterns that can be used as health indicators. The movement of dairy cattle can be characterized into three distinct cluster types: cluster type 1 (resting), cluster type 2 (traveling), and cluster type 3 (searching). This study aimed to analyze the movement patterns of healthy calves and assess the relationship between the variables that constitute the three cluster types. Eleven Holstein calves were fitted with GPS data loggers, which recorded their movement over a two-week period during spring. The GPS data loggers captured longitude and latitude coordinates, distance, time and speed. It was found that the calves were most active during the afternoon and at night. Slight inconsistencies with previous studies were found in the cluster movements. Cluster type 2 (traveling) reported the fastest rate of movement, whereas cluster type 1 (resting) reported the slowest. These diverse movement patterns could be used to enhance the assessment of dairy animal health and welfare on farms.
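The cluster variables mentioned (distance traveled, rate of movement, linearity ratio) can be computed directly from logged positions. The sketch below uses made-up planar tracks; real GPS data would first be projected from longitude/latitude to metric coordinates.

```python
import math

def movement_features(points):
    """Path length, straight-line displacement, and linearity ratio
    (displacement / path length) for a sequence of (x, y) positions."""
    path = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    displacement = math.dist(points[0], points[-1])
    return path, displacement, displacement / path if path else 0.0

# Hypothetical tracks: a "traveling"-like straight run and a
# "searching"-like loop that returns to its start.
straight = [(0, 0), (3, 4), (6, 8)]
loop = [(0, 0), (3, 4), (0, 8), (0, 0)]
straight_feats = movement_features(straight)  # (10.0, 10.0, 1.0)
loop_feats = movement_features(loop)          # (18.0, 0.0, 0.0)
```

A linearity ratio near 1 marks directed travel, while a ratio near 0 with substantial path length marks searching or circling, which is the kind of distinction behind the three cluster types.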
Collapse
|
28
|
Development and Validation of an Automated Video Tracking Model for Stabled Horses. Animals (Basel) 2020; 10:ani10122258. [PMID: 33266297 PMCID: PMC7760072 DOI: 10.3390/ani10122258] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 11/25/2020] [Accepted: 11/27/2020] [Indexed: 12/25/2022] Open
Abstract
Simple Summary Although there are methods to detect pain in horses, they are prone to bias and time-consuming, which makes them challenging to apply in practice. In recent years, however, rapidly developing automated tracking methods have shown that computer-based behaviour monitoring is more reliable in many animal species. In this study, we therefore aimed to investigate an automated video tracking model for horses in a clinical context. The findings will help to develop the automated detection of daily activity, to meet the ultimate objective of objectively assessing the pain and wellbeing of horses. An initial analysis of the obtained data offers the opportunity to construct an algorithm to automatically track behaviour patterns of horses. Abstract Changes in behaviour are often caused by painful conditions. Therefore, the assessment of behaviour is important for the recognition of pain, but also for the assessment of quality of life. Automated detection of the movement and behaviour of a horse in the box stall would represent a significant advancement. In this study, videos of horses in an animal hospital were recorded using an action camera in time-lapse mode. These videos were processed using the convolutional neural network Loopy for automated prediction of body parts. Development of the model was carried out in several steps, including annotation of the key points, training of the network to generate the model, and checking the model for its accuracy. The key points nose, withers and tail are detected with a sensitivity of more than 80% and an error rate between 2 and 7%, depending on the key point. By means of a case study, the possibility of further analysis with the acquired data was investigated. The results will significantly improve the pain recognition of horses and will help to develop algorithms for the automated recognition of behaviour using machine learning.
Collapse
|
30
|
Image Analysis and Computer Vision Applications in Animal Sciences: An Overview. Front Vet Sci 2020; 7:551269. [PMID: 33195522 PMCID: PMC7609414 DOI: 10.3389/fvets.2020.551269] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 09/15/2020] [Indexed: 11/13/2022] Open
Abstract
Computer Vision, Digital Image Processing, and Digital Image Analysis can be viewed as an amalgam of terms that very often are used to describe similar processes. Most of this confusion arises because these are interconnected fields that emerged with the development of digital image acquisition. Thus, there is a need to understand the connection between these fields, how a digital image is formed, and the differences regarding the many sensors available, each best suited for different applications. From the advent of the charge-coupled devices demarking the birth of digital imaging, the field has advanced quite fast. Sensors have evolved from grayscale to color with increasingly higher resolution and better performance. Also, many other sensors have appeared, such as infrared cameras, stereo imaging, time-of-flight sensors, satellite, and hyperspectral imaging. There are also images generated by other signals, such as sound (ultrasound scanners and sonars) and radiation (standard X-ray and computed tomography), which are widely used to produce medical images. In animal and veterinary sciences, these sensors have been used in many applications, mostly under experimental conditions and with only a few applications developed so far on commercial farms. Such applications can range from the assessment of beef cut composition to live animal identification, tracking, behavior monitoring, and measurement of phenotypes of interest, such as body weight, condition score, and gait. Computer vision systems (CVS) have the potential to be used in precision livestock farming and high-throughput phenotyping applications. We believe that the constant measurement of traits through CVS can reduce management costs and optimize decision-making in livestock operations, in addition to opening new possibilities in selective breeding. Applications of CVS are currently a growing research area and there are already commercial products available.
However, there are still challenges that demand research for the successful development of autonomous solutions capable of delivering critical information. This review intends to present significant developments that have been made in CVS applications in animal and veterinary sciences and to highlight areas in which further research is still needed before full deployment of CVS in breeding programs and commercial farms.
Collapse
|
31
|
Large-Scale Phenotyping of Livestock Welfare in Commercial Production Systems: A New Frontier in Animal Breeding. Front Genet 2020; 11:793. [PMID: 32849798 PMCID: PMC7411239 DOI: 10.3389/fgene.2020.00793] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 07/03/2020] [Indexed: 12/13/2022] Open
Abstract
Genomic breeding programs have been paramount in improving the rates of genetic progress of productive efficiency traits in livestock. Such improvement has been accompanied by the intensification of production systems, use of a wider range of precision technologies in routine management practices, and high-throughput phenotyping. Simultaneously, a greater public awareness of animal welfare has influenced livestock producers to place more emphasis on welfare relative to production traits. Therefore, management practices and breeding technologies in livestock have been developed in recent years to enhance animal welfare. In particular, genomic selection can be used to improve livestock social behavior, resilience to disease and other stress factors, and ease habituation to production system changes. The main requirements for including novel behavioral and welfare traits in genomic breeding schemes are: (1) to identify traits that represent the biological mechanisms of the industry breeding goals; (2) the availability of individual phenotypic records measured on a large number of animals (ideally with genomic information); (3) the derived traits are heritable, biologically meaningful, repeatable, and (ideally) not highly correlated with other traits already included in the selection indexes; and (4) genomic information is available for a large number of individuals (or genetically close individuals) with phenotypic records. 
In this review, we (1) describe a potential route for development of novel welfare indicator traits (using ideal phenotypes) for both genetic and genomic selection schemes; (2) summarize key indicator variables of livestock behavior and welfare, including a detailed assessment of thermal stress in livestock; (3) describe the primary statistical and bioinformatic methods available for large-scale data analyses of animal welfare; and (4) identify major advancements, challenges, and opportunities to generate high-throughput and large-scale datasets to enable genetic and genomic selection for improved welfare in livestock. A wide variety of novel welfare indicator traits can be derived from information captured by modern technology such as sensors, automatic feeding systems, milking robots, activity monitors, video cameras, and indirect biomarkers at the cellular and physiological levels. The development of novel traits coupled with genomic selection schemes for improved welfare in livestock can be feasible and optimized based on recently developed (or developing) technologies. Efficient implementation of genetic and genomic selection for improved animal welfare also requires the integration of a multitude of scientific fields such as cell and molecular biology, neuroscience, immunology, stress physiology, computer science, engineering, quantitative genomics, and bioinformatics.
Collapse
|
32
|
Panoptic Segmentation of Individual Pigs for Posture Recognition. Sensors (Basel, Switzerland) 2020; 20:E3710. [PMID: 32630794 PMCID: PMC7374502 DOI: 10.3390/s20133710] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 06/23/2020] [Accepted: 06/29/2020] [Indexed: 11/30/2022]
Abstract
Behavioural research of pigs can be greatly simplified if automatic recognition systems are used. Systems based on computer vision in particular have the advantage that they allow an evaluation without affecting the normal behaviour of the animals. In recent years, methods based on deep learning have been introduced and have shown excellent results. Object and keypoint detectors have frequently been used to detect individual animals. Despite promising results, bounding boxes and sparse keypoints do not trace the contours of the animals, resulting in a lot of information being lost. Therefore, this paper follows the relatively new approach of panoptic segmentation and aims at the pixel-accurate segmentation of individual pigs. A framework consisting of a neural network for semantic segmentation as well as different network heads and postprocessing methods is discussed. The method was tested on a data set of 1000 hand-labeled images created specifically for this experiment and achieves detection rates of around 95% (F1 score) despite disturbances such as occlusions and dirty lenses.
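After semantic segmentation, individual instances still have to be separated. As a greatly simplified stand-in for the panoptic postprocessing discussed above, connected-component labelling of a binary mask can be sketched as follows; real pipelines must also split touching animals, which this deliberately does not attempt.

```python
from collections import deque

def label_instances(mask):
    """4-connected component labelling of a binary mask (list of lists).
    Returns the instance count and a per-pixel label map."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                current += 1                      # start a new instance
                queue = deque([(sy, sx)])
                labels[sy][sx] = current
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

# Toy "pig" mask with two separate blobs.
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
]
n_pigs, label_map = label_instances(mask)  # n_pigs == 2
```

The gap this leaves — two pigs lying against each other form one component — is exactly what the paper's learned network heads (e.g. separating instance boundaries) are there to solve.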
Collapse
|
33
|
Automated Video Behavior Recognition of Pigs Using Two-Stream Convolutional Networks. Sensors 2020; 20:s20041085. [PMID: 32079299 PMCID: PMC7070994 DOI: 10.3390/s20041085] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 02/12/2020] [Accepted: 02/13/2020] [Indexed: 02/06/2023]
Abstract
The detection of pig behavior helps identify abnormal conditions, such as disease and dangerous movements, in a timely and effective manner, which plays an important role in ensuring the health and well-being of pigs. Monitoring pig behavior manually is time consuming, subjective, and impractical, so there is an urgent need for methods that identify pig behavior automatically. In recent years, deep learning has gradually been applied to pig behavior recognition. Existing studies judge pig behavior based only on the animal's posture in a still image frame, without considering the motion information of the behavior, even though optical flow reflects that motion information well. This study therefore took image frames and optical flow from videos as two-stream inputs to fully extract the temporal and spatial characteristics of behavior. Two-stream convolutional network models based on deep learning were proposed, including inflated 3D convnet (I3D) and temporal segment networks (TSN) whose feature extraction network is Residual Network (ResNet) or an Inception architecture (e.g., Inception with Batch Normalization (BN-Inception), InceptionV3, InceptionV4, or InceptionResNetV2), to achieve pig behavior recognition. A standard pig video behavior dataset was created comprising 1000 videos of feeding, lying, walking, scratching, and mounting, five different behavioral actions of pigs under natural conditions. The dataset was used to train and test the proposed models, and a series of comparative experiments was conducted. The experimental results showed that the TSN model with ResNet101 as its feature extraction network recognized pig feeding, lying, walking, scratching, and mounting behaviors with the highest average accuracy of 98.99%, and an average recognition time of 0.3163 s per video. The TSN model (ResNet101) is therefore superior to the other models for the task of pig behavior recognition.
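The two-stream idea above combines a spatial stream (RGB frames) with a temporal stream (optical flow). A minimal sketch of the late-fusion step, assuming each trained stream already yields per-class scores for a clip; the behaviour labels come from the abstract, but the fusion weight and function names are illustrative, not from the paper:

```python
BEHAVIOURS = ["feeding", "lying", "walking", "scratching", "mounting"]

def fuse_two_stream(rgb_scores, flow_scores, flow_weight=0.5):
    """Late fusion: weighted average of per-class scores from the spatial
    (RGB frame) stream and the temporal (optical-flow) stream, returning
    the winning behaviour label."""
    assert len(rgb_scores) == len(flow_scores) == len(BEHAVIOURS)
    fused = [(1 - flow_weight) * r + flow_weight * f
             for r, f in zip(rgb_scores, flow_scores)]
    return BEHAVIOURS[max(range(len(fused)), key=fused.__getitem__)]
```

Raising `flow_weight` shifts the decision toward motion evidence, so a clip the RGB stream scores as "feeding" but the flow stream scores as "walking" can flip label.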
|
34
|
|
35
|
Continuous Monitoring of Pigs in Fattening Using a Multi-Sensor System: Behavior Patterns. Animals (Basel) 2019; 10:ani10010052. [PMID: 31888006 PMCID: PMC7022589 DOI: 10.3390/ani10010052] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2019] [Revised: 12/20/2019] [Accepted: 12/24/2019] [Indexed: 12/02/2022] Open
Abstract
Simple Summary The livestock sector seeks technologies and procedures to collect and manage data and information about its facilities and animals, the basis of so-called precision livestock farming. The installation of novel devices in commercial facilities, together with the use of electronic feeding stations, allows observers to characterize the behavior pattern of each individual in order to improve farm management and, therefore, productivity. In this study, 30 Landrace pigs were monitored during the whole fattening period. Results show that the ear skin temperatures of the animals can be used to distinguish animals with different thermal patterns. The parameters extracted from the feeding stations show consistent relationships among the frequency, size, and duration of feeder visits, highlighting differences in feeding strategies. Abstract In this work, a complete fattening period (81 days) of a total of 30 Landrace pigs housed in two pens of a nucleus in Villatobas (Castilla-La Mancha, Spain) was monitored. The ear skin temperature of each animal was recorded every three minutes. The body weight, date, duration, and amount of feed consumed per animal were monitored via an electronic feeding station. The objective was to identify animals with different behaviors based on the integration of their thermal and intake patterns. The ear skin temperatures of the animals showed a negative relationship between the mean and the standard deviation (r = −0.83), distinguishing animals with different thermal patterns: individuals with high temperature values show less thermal variability, and vice versa. Feeding parameters showed differences in the feeding strategies of the animals, identifying fast eaters with a high rate of feed intake (60 g/min) and slow eaters (30 g/min). The correlation between the change in the rate of feed intake as the animal grows and feed efficiency reached a significant negative value (−0.57), indicating that animals that do not alter their rate of feed intake during rearing showed higher efficiencies. The difference between an animal's temperature and the group average allowed us to identify animals with differentiated feeding patterns.
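The intake rates and correlations reported above can be derived from feeding-station records with simple arithmetic. A minimal sketch, assuming per-visit records of feed consumed and visit duration; the Pearson implementation is the standard formula, not the authors' code:

```python
from statistics import mean

def intake_rate(grams, minutes):
    """Feed intake rate (g/min) of one feeder visit."""
    return grams / minutes

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A pig eating 300 g over a 10-minute visit has a rate of 30 g/min (a "slow eater" by the study's figures); correlating rate change against feed efficiency across animals yields the kind of negative coefficient the abstract reports.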
|
36
|
A Vision for Development and Utilization of High-Throughput Phenotyping and Big Data Analytics in Livestock. Front Genet 2019; 10:1197. [PMID: 31921279 PMCID: PMC6934059 DOI: 10.3389/fgene.2019.01197] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2019] [Accepted: 10/29/2019] [Indexed: 01/28/2023] Open
Abstract
Automated high-throughput phenotyping with sensors, imaging, and other on-farm technologies has resulted in a flood of data that are largely under-utilized. Drastic cost reductions in sequencing and other omics technologies have also made deep phenotyping of livestock at the molecular level feasible. These advances have brought the animal sciences to a crossroads in data science, where increased training is needed to manage, record, and analyze data to generate knowledge and advances in Agriscience-related disciplines. This paper describes the opportunities and challenges in using high-throughput phenotyping, “big data,” analytics, and related technologies in the livestock industry, based on discussions at the Livestock High-Throughput Phenotyping and Big Data Analytics meeting held in November 2017 (see: https://www.animalgenome.org/bioinfo/community/workshops/2017/). Critical needs for investment in infrastructure for people (e.g., “big data” training), data (e.g., data transfer, management, and analytics), and technology (e.g., development of low-cost sensors) were defined by this group. Though some subgroups of animal science have extensive experience in predictive modeling, cross-training in computer science, statistics, and related disciplines is needed to use big data for diverse applications in the field. Extensive opportunities exist for public and private entities to harness big data to develop valuable research knowledge and products for the benefit of society, given the increasing demand for food from a rapidly growing population.
|
37
|
Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs. Sensors (Basel) 2019; 19:E3738. [PMID: 31470571 PMCID: PMC6749226 DOI: 10.3390/s19173738] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Revised: 08/15/2019] [Accepted: 08/28/2019] [Indexed: 02/08/2023]
Abstract
Posture detection aimed at assessing the health and welfare of pigs has been of great interest to researchers from different disciplines. Existing studies applying machine vision techniques are mostly based on three-dimensional imaging systems, or on two-dimensional systems limited to monitoring under controlled conditions. The main goal of this study was therefore to determine whether a two-dimensional imaging system, combined with deep learning approaches, could detect the standing and lying (belly and side) postures of pigs under commercial farm conditions. Three deep learning-based detector methods, namely faster regions with convolutional neural network features (Faster R-CNN), single shot multibox detector (SSD), and region-based fully convolutional network (R-FCN), combined with Inception V2, Residual Network (ResNet), and Inception ResNet V2 feature extractors applied to RGB images, were proposed. Data from different commercial farms were used for training and validation of the proposed models. The experimental results demonstrated that the R-FCN ResNet101 method detected lying and standing postures with average precision (AP) of 0.93, 0.95, and 0.92 for standing, lying on side, and lying on belly, respectively, and a mean average precision (mAP) of more than 0.93.
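The per-class AP values reported above are summaries of precision-recall behaviour over ranked detections, and mAP is their mean. A minimal sketch of one common AP formulation (mean of precision at each true-positive rank); this is illustrative and not necessarily the exact interpolation scheme the authors used:

```python
def average_precision(scored_detections):
    """AP as the mean of precision values at each true-positive rank.
    `scored_detections` is a list of (confidence, is_correct) pairs."""
    ranked = sorted(scored_detections, key=lambda d: -d[0])
    tp = 0
    precisions = []
    for rank, (_, correct) in enumerate(ranked, start=1):
        if correct:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(ap_by_class):
    """mAP: unweighted mean of the per-class AP values."""
    return sum(ap_by_class.values()) / len(ap_by_class)
```

Averaging the abstract's per-posture APs (0.93, 0.95, 0.92) this way gives mAP ≈ 0.933, consistent with the reported "more than 0.93".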
|
38
|
Genomic Selection in Aquaculture: Application, Limitations and Opportunities With Special Reference to Marine Shrimp and Pearl Oysters. Front Genet 2019; 9:693. [PMID: 30728827 PMCID: PMC6351666 DOI: 10.3389/fgene.2018.00693] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 12/11/2018] [Indexed: 11/20/2022] Open
Abstract
Within aquaculture industries, selection based on genomic information (genomic selection) has profound potential to change genetic improvement programs and production systems. Genomic selection exploits realized genomic relationships among individuals and information from genome-wide markers in close linkage disequilibrium with genes of biological and economic importance. We discuss the technical advances, practical requirements, and commercial applications that have made genomic selection feasible in a range of aquaculture industries, with a particular focus on molluscs (pearl oysters, Pinctada maxima) and marine shrimp (Litopenaeus vannamei and Penaeus monodon). The use of low-cost genome sequencing has enabled cost-effective genotyping on a large scale and is of particular value for species without a reference genome or access to commercial genotyping arrays. Drawing on first-hand experience, we highlight pitfalls of, and offer solutions for, the genotyping-by-sequencing approach and the building of appropriate genetic resources for genomic selection. We describe the potential to capture large-scale commercial phenotypes through image analysis and machine learning, as inputs for the calculation of genomic breeding values. Compared with traditional aquatic breeding programs, genomic selection offers significant advantages: it can accurately predict complex polygenic traits, including disease resistance; increase rates of genetic gain; minimize inbreeding; and negate potential limiting effects of genotype-by-environment interactions. Further practical advantages of genomic selection through the use of large-scale communal mating and rearing systems are highlighted, as well as rate-limiting steps that affect attaining its maximum benefits. Genomic selection is now at the tipping point where commercial applications can be readily adopted and offer significant short- and long-term solutions for sustainable and profitable aquaculture industries.
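The genomic breeding values mentioned above are, in their simplest additive form, sums of marker-effect contributions. A minimal sketch, assuming genotypes are coded as allele counts (0/1/2) and marker effects have already been estimated; real GEBV pipelines (e.g., GBLUP) are far more involved, and the function names here are illustrative:

```python
def genomic_breeding_value(genotypes, marker_effects):
    """GEBV of one candidate: sum over markers of allele count (0/1/2)
    times the estimated additive marker effect."""
    return sum(g * e for g, e in zip(genotypes, marker_effects))

def select_top(candidates, marker_effects, n):
    """Rank candidates (name -> genotype vector) by GEBV; keep the best n."""
    ranked = sorted(
        candidates,
        key=lambda name: genomic_breeding_value(candidates[name], marker_effects),
        reverse=True)
    return ranked[:n]
```

Selecting the top-ranked candidates each generation is what drives the increased rates of genetic gain the abstract describes, while inbreeding constraints (not sketched here) limit how aggressively the ranking can be exploited.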
|
39
|
Changes in activity and object manipulation before tail damage in finisher pigs as an early detector of tail biting. Animal 2019; 13:1037-1044. [DOI: 10.1017/s1751731118002689] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
|
40
|
Insights to optimise marketing decisions on pig-grower farms. ANIMAL PRODUCTION SCIENCE 2019. [DOI: 10.1071/an17360] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Modern pig production in a vertically integrated company is a highly specialised and industrialised activity, requiring increasingly complex and critical decision-making. The present paper focuses on decisions made on pig-grower farms operating an all-in–all-out management policy at the last stage of pig production. Based on a mixed-integer linear-programming model, an assessment of specific parameters to support marketing decisions on farms without individual weight control is made. The analysis of several key factors affecting the optimal marketing policy, such as transportation cost, when and how many pigs to deliver to the abattoir, and the weight homogeneity of the batch, provided insight into marketing decisions. The results confirmed that not just the feeding program but also the grading price system, transportation, and batch homogeneity have an enormous impact on the optimal marketing policy of fattening farms in a vertically integrated company. In addition, within the range of conditions considered, a time window of 4 weeks was deemed optimal for delivering animals to the abattoir, and the resulting revenue was 15% higher than with traditional marketing rules.
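The marketing decision described above trades off revenue under a grading price system against transport cost as pigs grow. A toy enumeration over candidate delivery weeks, not the paper's mixed-integer linear program; all parameter names, the penalty scheme, and the values are illustrative assumptions:

```python
def best_delivery_week(weights_by_week, price_per_kg, transport_cost,
                       penalty_outside, target_range):
    """Pick the delivery week maximising revenue net of transport cost.
    Pigs outside the abattoir's target weight band (kg) earn a price
    discounted by `penalty_outside` (a crude grading price system)."""
    lo, hi = target_range
    best_week, best_margin = None, float("-inf")
    for week, weights in weights_by_week.items():
        revenue = sum(
            w * price_per_kg * (1.0 if lo <= w <= hi else 1.0 - penalty_outside)
            for w in weights)
        margin = revenue - transport_cost
        if margin > best_margin:
            best_week, best_margin = week, margin
    return best_week, best_margin
```

Even this toy version shows the abstract's point: the grading band and transport cost, not feed alone, determine when delivering the batch is most profitable.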
|
41
|
Agreement between passive infrared detector measurements and human observations of animal activity. Livest Sci 2018. [DOI: 10.1016/j.livsci.2018.06.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
42
|
Fractal measures in activity patterns: Do gastrointestinal parasites affect the complexity of sheep behaviour? Appl Anim Behav Sci 2018. [DOI: 10.1016/j.applanim.2018.05.014] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
|
43
|
Automatic early warning of tail biting in pigs: 3D cameras can detect lowered tail posture before an outbreak. PLoS One 2018; 13:e0194524. [PMID: 29617403 PMCID: PMC5884497 DOI: 10.1371/journal.pone.0194524] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Accepted: 03/05/2018] [Indexed: 12/11/2022] Open
Abstract
Tail biting is a major welfare and economic problem for indoor pig producers worldwide. Low tail posture is an early warning sign that could reduce the unpredictability of tail biting. Taking a precision livestock farming approach, we used time-of-flight 3D cameras, processing the data with machine vision algorithms, to automate the measurement of pig tail posture. Validation of the 3D algorithm found an accuracy of 73.9% at detecting low vs. not-low tails (sensitivity 88.4%, specificity 66.8%). Twenty-three groups of 29 pigs were reared with intact (not docked) tails under typical commercial conditions over 8 batches. Fifteen groups had tail biting outbreaks, after which enrichment was added to pens and biters and/or victims were removed and treated. 3D data from outbreak groups showed that the proportion of low tail detections increased pre-outbreak and declined post-outbreak. Pre-outbreak, the increase in low tails occurred at an increasing rate over time, and the proportion of low tails was higher one week pre-outbreak (-1) than two weeks pre-outbreak (-2). Within each batch, an outbreak group and a non-outbreak control group were identified. Outbreak groups had more 3D low tail detections in weeks -1, +1, and +2 than their matched controls. Comparing the 3D tail posture data with tail injury scores, a greater proportion of low tails was associated with more injured pigs. Low tails might indicate more than just tail biting, as tail posture varied between groups and over time, and the proportion of low tails increased when pigs were moved to a new pen. Our findings demonstrate the potential for a 3D machine vision system to automate tail posture detection and provide early warning of tail biting on farm.
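The early-warning signal above is a rising proportion of low-tail detections in the weeks before an outbreak. A minimal sketch of such a trigger, assuming detections are aggregated per week; the rise factor is an illustrative threshold, not the paper's decision rule:

```python
def low_tail_proportion(detections):
    """Fraction of detections classified as 'low' in one observation window."""
    return sum(1 for d in detections if d == "low") / len(detections)

def warn(weekly_proportions, rise_factor=1.5):
    """Flag a possible impending outbreak when the latest weekly proportion
    of low tails has risen by at least `rise_factor` over the previous week."""
    if len(weekly_proportions) < 2:
        return False
    prev, last = weekly_proportions[-2], weekly_proportions[-1]
    return prev > 0 and last / prev >= rise_factor
```

A ratio-based rule like this captures the abstract's observation that low tails increased at an increasing rate pre-outbreak, though a deployed system would also need to handle the confounds noted (pen moves, group differences).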
|
44
|
Automated tracking to measure behavioural changes in pigs for health and welfare monitoring. Sci Rep 2017; 7:17582. [PMID: 29242594 PMCID: PMC5730557 DOI: 10.1038/s41598-017-17451-6] [Citation(s) in RCA: 63] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2017] [Accepted: 11/27/2017] [Indexed: 11/13/2022] Open
Abstract
Since animals express their internal state through behaviour, changes in behaviour may be used to detect early signs of problems, such as in animal health. Continuous observation of livestock by farm staff, to the degree required to detect behavioural changes relevant for early intervention, is impractical in a commercial setting. An automated monitoring system is developed: it tracks pig movement with depth video cameras and measures standing, feeding, drinking, and locomotor activity from 3D trajectories. Predictions of standing, feeding, and drinking were validated, but not of locomotor activity. An artificial, disruptive challenge, i.e., the introduction of a novel object, is used to cause reproducible behavioural changes so that a system to detect such changes automatically can be developed. Validation of the automated monitoring system with the controlled challenge study provides a reproducible framework for further development of robust early warning systems for pigs. The automated system is practical in commercial settings because it provides continuous monitoring of multiple behaviours, with behavioural metrics that are more intuitive and have diagnostic validity. The method has the potential to transform how livestock are monitored, directly improve their health and welfare, and address issues in livestock farming such as antimicrobial use.
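The activity measures above are derived from tracked trajectories relative to known pen fixtures. A minimal rule-based sketch, assuming 2D floor coordinates and a per-sample speed are available from tracking; the zone positions and thresholds are hypothetical illustrations, not the system's actual parameters:

```python
import math

FEEDER = (0.0, 0.0)   # hypothetical pen coordinates (metres)
DRINKER = (4.0, 0.0)

def classify(pos, speed, zone_radius=0.5, walk_speed=0.15):
    """Rule-of-thumb labelling of one trajectory sample: near the feeder ->
    feeding, near the drinker -> drinking, moving faster than walk_speed
    (m/s) -> walking, otherwise standing."""
    if math.dist(pos, FEEDER) <= zone_radius:
        return "feeding"
    if math.dist(pos, DRINKER) <= zone_radius:
        return "drinking"
    return "walking" if speed > walk_speed else "standing"
```

Aggregating such per-sample labels over time yields the continuous behavioural metrics (time spent feeding, drinking, etc.) whose changes an early-warning system monitors.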
|
45
|
Using automated image analysis in pig behavioural research: Assessment of the influence of enrichment substrate provision on lying behaviour. Appl Anim Behav Sci 2017. [DOI: 10.1016/j.applanim.2017.06.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|