1. Wang J, Chen H, Wang J, Zhao K, Li X, Liu B, Zhou Y. Identification of oestrus cows based on vocalisation characteristics and machine learning technique using a dual-channel-equipped acoustic tag. Animal 2023; 17:100811. PMID: 37150135. DOI: 10.1016/j.animal.2023.100811.
Abstract
Timely and accurate detection of oestrus in cows is an essential element of good dairy-farm management. At present, acoustic detection of cows in oestrus is impeded by problems of filtering, incomplete feature selection, and poor recognition accuracy. To overcome these difficulties, this study proposes a sound-based detection method for cows in oestrus that uses machine learning with an optimal feature combination and an optimal time window. First, a dual-channel sound detection tag consisting of a unidirectional microphone and an omnidirectional microphone (OM) was developed. A least-mean-squares adaptive algorithm based on wavelet thresholds was used to filter the signals from the OM, and a dual-channel endpoint detection algorithm was used to identify the lowing of individual cows. The Friedman test was then used to select the sound features, in the time, frequency, and cepstral domains, that differed significantly before and after oestrus, and these were used to determine the best feature combination. We then analysed the effects of a Back-Propagation Neural Network (BPNN), Classification and Regression Tree, Support Vector Machine, and Random Forest on the accuracy, precision, sensitivity, specificity, and F1 score of oestrus discrimination. Different time windows were tested, and the discrimination performance of these algorithms was evaluated using the area under the receiver operating characteristic curve to find the best match between time window and recognition algorithm. The dual-channel acoustic tag's accuracy, precision, sensitivity, and specificity were 91.25, 98.83, 91.75, and 83.68%, respectively. BPNN with a 70 ms time window and the feature combination of spectral roll-off, spectral flatness, and Mel-frequency cepstral coefficients was confirmed as the most suitable oestrus recognition method. The average accuracy, precision, sensitivity, specificity, and F1 score of this method were 97.62, 98.07, 97.17, 97.19, and 97.63%, respectively. These results show the approach to be a feasible means of oestrus detection in dairy cows. Given its ability to differentiate cows and its consistency, sound has the potential to replace accelerometers as an early indicator of oestrus in dairy cows.
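The best-performing feature combination above pairs spectral roll-off and spectral flatness with MFCCs. The first two are simple to compute from a magnitude spectrum; below is a minimal, dependency-free sketch, where the toy spectrum, the 85% roll-off threshold, and the bin-to-frequency mapping are illustrative assumptions rather than the authors' implementation.

```python
import math

def spectral_rolloff(mag, sr, pct=0.85):
    """Frequency below which `pct` of total spectral energy lies."""
    energy = [m * m for m in mag]
    total = sum(energy)
    cum = 0.0
    for i, e in enumerate(energy):
        cum += e
        if cum >= pct * total:
            # map bin index to frequency, bins spanning 0..sr/2
            return i * (sr / 2) / (len(mag) - 1)
    return sr / 2

def spectral_flatness(mag, eps=1e-12):
    """Geometric mean / arithmetic mean of the power spectrum (0..1)."""
    power = [m * m + eps for m in mag]
    log_mean = sum(math.log(p) for p in power) / len(power)
    return math.exp(log_mean) / (sum(power) / len(power))

# Toy spectrum: a dominant low-frequency peak -> low roll-off, low flatness
mag = [10.0, 8.0, 1.0, 0.5, 0.2, 0.1, 0.05, 0.01]
print(spectral_rolloff(mag, sr=8000))
print(spectral_flatness(mag))
```

A flat (noise-like) spectrum gives a flatness near 1, while tonal lowing gives a value near 0, which is why the two features complement MFCCs.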
Affiliation(s)
- Jun Wang
- School of Information Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Haoran Chen
- School of Information Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Jianping Wang
- School of Animal Science and Technology, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Kaixuan Zhao
- School of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Xiaoxia Li
- School of Animal Science and Technology, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Bo Liu
- School of Information Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Yu Zhou
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
2. Volkmann N, Kulig B, Hoppe S, Stracke J, Hensel O, Kemper N. On-farm detection of claw lesions in dairy cows based on acoustic analyses and machine learning. J Dairy Sci 2021; 104:5921-5931. PMID: 33663849. DOI: 10.3168/jds.2020-19206.
Abstract
Claw lesions are a serious problem on dairy farms, affecting both the health and the welfare of the cow. Automated lameness detection with a practical, on-farm application would support the early detection and treatment of lame cows, potentially reducing the number and severity of claw lesions. This study therefore proposes a method for detecting claw lesions based on acoustic analysis of a cow's gait. A panel was constructed to measure the impact sound of animals walking over it. The recorded impact sound was edited, and 640 sound files from 64 cows were analyzed. A prediction model for lameness status was built with a random forest algorithm, using a 2-point scale of hoof-trimming results (healthy vs. affected) as the gold standard and 38 properties of the recorded sound files as predictors. The model was validated by comparing the hoof-trimming reference with the model output for the impact sound. Lowering the cutoff value for predicting lame animals to 0.4 reduced the false-negative rate while only slightly increasing the false-positive rate. This model achieved a sensitivity of 0.81 and a specificity of 0.97, and a Cohen's kappa of 0.80 showed good agreement between the model classification and the hoof-trimming diagnoses. In summary, the prediction model enabled the detection of cows with claw lesions, showing that lameness in dairy cows can be detected by machine learning from the impact sound of hoofs.
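The cutoff adjustment described above is simply a threshold on the model's predicted probability of lameness. A small sketch with hypothetical model outputs (not the study's data) shows how lowering the cutoff from 0.5 to 0.4 trades false negatives for false positives:

```python
def confusion_at_cutoff(probs, labels, cutoff):
    """Sensitivity/specificity when predicting 'lame' for prob >= cutoff.
    labels: 1 = affected (lame), 0 = healthy."""
    tp = sum(1 for p, y in zip(probs, labels) if p >= cutoff and y == 1)
    fn = sum(1 for p, y in zip(probs, labels) if p < cutoff and y == 1)
    tn = sum(1 for p, y in zip(probs, labels) if p < cutoff and y == 0)
    fp = sum(1 for p, y in zip(probs, labels) if p >= cutoff and y == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical forest outputs: lowering the cutoff catches the borderline
# lame cows (fewer false negatives) at a small specificity cost.
probs  = [0.95, 0.80, 0.45, 0.42, 0.30, 0.20, 0.44, 0.10]
labels = [1,    1,    1,    1,    0,    0,    0,    0]
print(confusion_at_cutoff(probs, labels, 0.5))  # misses the 0.45/0.42 cows
print(confusion_at_cutoff(probs, labels, 0.4))
```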
Affiliation(s)
- N Volkmann
- Institute for Animal Hygiene, Animal Welfare and Animal Behavior, University of Veterinary Medicine Hannover, Foundation, Bischofsholer Damm 15, D-30173 Hannover, Germany
- B Kulig
- Section of Agricultural and Biosystems Engineering, University of Kassel, Nordbahnhofstraße 1a, D-37213 Witzenhausen, Germany
- S Hoppe
- Agricultural Research and Training Center Haus Riswick, Agricultural Chamber of North Rhine-Westphalia, Elsenpaß 5, D-47533 Kleve, Germany
- J Stracke
- Institute for Animal Hygiene, Animal Welfare and Animal Behavior, University of Veterinary Medicine Hannover, Foundation, Bischofsholer Damm 15, D-30173 Hannover, Germany
- O Hensel
- Section of Agricultural and Biosystems Engineering, University of Kassel, Nordbahnhofstraße 1a, D-37213 Witzenhausen, Germany
- N Kemper
- Institute for Animal Hygiene, Animal Welfare and Animal Behavior, University of Veterinary Medicine Hannover, Foundation, Bischofsholer Damm 15, D-30173 Hannover, Germany
3. IoT Technologies for Livestock Management: A Review of Present Status, Opportunities, and Future Trends. Big Data and Cognitive Computing 2021. DOI: 10.3390/bdcc5010010.
Abstract
The world population currently stands at about 7 billion and is expected to grow to 9.4 billion by 2030 and around 10 billion by 2050. This burgeoning population continues to drive demand for animal-derived food upward. Moreover, managing finite resources such as land, reducing livestock contributions to greenhouse gases, and handling the inherently complex, highly contextual, and repetitive day-to-day livestock management (LsM) routines are some of the challenges to overcome in livestock production. The Internet of Things (IoT)'s usefulness in other vertical industries (OVI) suggests that its role in LsM will be significant. This work uses the systematic review methodology of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) to guide a review of the existing literature on IoT in OVI. The goal is to identify the IoT ecosystem, architecture, and technicalities, including present status, opportunities, and expected future trends, regarding its role in LsM. Among the IoT roles identified in LsM, the authors found that data will be its main contributor. The traditional approach of reactive data processing will give way to the proactive approach of augmented analytics, providing insights about animal processes. This will free LsM from the drudgery of repetitive tasks, with opportunities for improved productivity.
4. Jung DH, Kim NY, Moon SH, Jhin C, Kim HJ, Yang JS, Kim HS, Lee TS, Lee JY, Park SH. Deep Learning-Based Cattle Vocal Classification Model and Real-Time Livestock Monitoring System with Noise Filtering. Animals (Basel) 2021; 11:357. PMID: 33535390. PMCID: PMC7911430. DOI: 10.3390/ani11020357.
Abstract
The priority placed on animal welfare in the meat industry is increasing the importance of understanding livestock behavior. In this study, we developed a web-based monitoring and recording system, based on artificial intelligence analysis, for classifying cattle sounds. The system's deep learning classifier is a convolutional neural network (CNN) that takes vocalizations converted to Mel-frequency cepstral coefficients (MFCCs) as input. The CNN model initially achieved an accuracy of 91.38% in recognizing cattle sounds; applying short-time Fourier transform-based noise filtering to remove background noise improved this to 94.18%. The categorized cattle vocalizations were then classified into four classes, with a total of 897 classification records acquired for model development, and a final accuracy of 81.96% was obtained. The proposed web-based platform, which aggregates information from 12 sound sensors, monitors cattle vocalization in real time, enabling farm owners to determine the status of their cattle.
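The STFT-based noise filtering step can be approximated by spectral subtraction: estimate an average noise magnitude spectrum from noise-only frames and subtract it from every frame. This is a toy sketch, not the authors' code; the three-bin "spectrogram" and the assumption that the leading frames are noise-only are purely illustrative.

```python
def spectral_subtract(frames, n_noise_frames=2, floor=0.0):
    """Subtract an average noise magnitude spectrum (estimated from the
    first `n_noise_frames` frames, assumed noise-only) from every frame."""
    n_bins = len(frames[0])
    noise = [sum(f[b] for f in frames[:n_noise_frames]) / n_noise_frames
             for b in range(n_bins)]
    return [[max(f[b] - noise[b], floor) for b in range(n_bins)]
            for f in frames]

# Toy magnitude spectrogram: constant background noise + a call in frame 2
frames = [
    [1.0, 1.0, 1.0],  # noise only
    [1.0, 1.0, 1.0],  # noise only
    [1.0, 5.0, 1.0],  # call energy in bin 1
]
clean = spectral_subtract(frames)
print(clean[2])  # background removed, call energy kept: [0.0, 4.0, 0.0]
```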
Affiliation(s)
- Dae-Hyun Jung
- Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea
- Na Yeon Kim
- Department of Bio-Convergence Science, College of Biomedical and Health Science, Konkuk University, Chungju 27478, Korea
- Asia Pacific Ruminant Institute, Icheon 17385, Korea
- Sang Ho Moon
- Department of Bio-Convergence Science, College of Biomedical and Health Science, Konkuk University, Chungju 27478, Korea
- Changho Jhin
- Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea
- Department of Smartfarm Research, 1778 Living Tech, Sejong 30033, Korea
- Hak-Jin Kim
- Department of Biosystems and Biomaterial Engineering, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea
- Jung-Seok Yang
- Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea
- Hyoung Seok Kim
- Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea
- Taek Sung Lee
- Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea
- Ju Young Lee
- Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea
- Soo Hyun Park
- Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea
- Correspondence: Tel.: +82-33-650-3661
5. Field-Applicable Pig Anomaly Detection System Using Vocalization for Embedded Board Implementations. Applied Sciences (Basel) 2020. DOI: 10.3390/app10196991.
Abstract
Failure to quickly and accurately detect abnormal situations in pig farms, such as outbreaks of infectious disease, can cause significant damage to the farms and to a country's pig farming industry. In this study, we propose an economical, lightweight, sound-based pig anomaly detection system that is applicable even on small-scale farms. The system is a pipeline, from sound acquisition through to anomaly detection, that can be installed and operated on a working pig farm. Its structure makes it executable on the TX-2 embedded board: (1) a module that collects sound signals; (2) a noise-robust preprocessing module that detects sound regions in the signals and converts them into spectrograms; and (3) a pig anomaly detection module based on MnasNet, a lightweight deep learning method, to which the 8-bit filter clustering method proposed in this study is applied, reducing the model's size by 76.3% while maintaining its identification performance. The proposed system achieved a stable F1-score of 0.947 for identifying pig abnormalities, even in a variety of noisy pigpen environments, and its execution time allows it to run in real time.
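The filter clustering idea, replacing many distinct filter weights with a small shared codebook so only cluster indices need storing, can be illustrated with 1-D k-means over a weight list. This is a generic sketch under stated assumptions (invented weight values and cluster count), not the paper's 8-bit MnasNet pipeline.

```python
def cluster_weights(weights, n_clusters=4, iters=20):
    """1-D k-means: map each weight to one of n_clusters shared centroids,
    so only small indices plus a codebook need to be stored."""
    lo, hi = min(weights), max(weights)
    # spread initial centroids evenly over the weight range
    centroids = [lo + (hi - lo) * i / (n_clusters - 1) for i in range(n_clusters)]
    for _ in range(iters):
        assign = [min(range(n_clusters), key=lambda c: abs(w - centroids[c]))
                  for w in weights]
        for c in range(n_clusters):
            members = [w for w, a in zip(weights, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    assign = [min(range(n_clusters), key=lambda c: abs(w - centroids[c]))
              for w in weights]
    return [centroids[a] for a in assign], centroids

weights = [0.11, 0.09, 0.52, 0.48, -0.31, -0.29, 0.10, 0.50]
quantized, codebook = cluster_weights(weights, n_clusters=3)
print(sorted(round(q, 2) for q in set(quantized)))
```

With an 8-bit codebook (256 centroids) each weight shrinks from 32 bits to 8, which is the kind of size reduction the paper reports.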
6. Fogarty ES, Swain DL, Cronin GM, Moraes LE, Bailey DW, Trotter MG. Potential for autonomous detection of lambing using global navigation satellite system technology. Animal Production Science 2020. DOI: 10.1071/an18654.
Abstract
Context
On-animal sensing systems are being promoted as a solution to the increased demand for monitoring livestock health and welfare. One key sensor platform, global navigation satellite system (GNSS) positioning, provides information on the location and movement of sheep. This information could be used to detect parturition, a key period when both ewes and lambs are at risk. Algorithms based on key behavioural features could alert sheep managers so that they can intervene when problems arise.
Aims
To investigate the use of GNSS monitoring as a method for detecting behavioural changes in sheep in the period around parturition.
Methods
GNSS collars were fitted to 40 late-gestation ewes grazing a 3.09 ha paddock in New Zealand. Several metrics were derived: (i) mean daily speed, (ii) maximum daily speed, (iii) minimum daily speed, (iv) mean daily distance to peers, and (v) spatial paddock utilisation (95% minimum convex polygon). The speed and distance-to-peers metrics were also evaluated at an hourly scale for the 12 h before and 12 h after lambing.
Key results
Minimum daily speed peaked on the day of parturition (P < 0.001), suggesting the animals were more agitated and did not settle. Isolation was also evident at this time, with post-partum ewes located further from their peers than pre-partum ewes (P < 0.001). The day of lambing was also marked by reduced spatial paddock utilisation (P < 0.001).
Conclusions
This study demonstrates that GNSS technology can be used to detect parturition-related behaviours in sheep at a day scale; however, detection at the hour scale using GNSS is not possible.
Implications
This research highlights the opportunity to develop predictive models that autonomously detect behavioural change in ewes at parturition using GNSS. Such models could be extended to identify ewes experiencing prolonged parturition (for example, dystocic births), enabling intervention that would improve both production and welfare outcomes for the sheep industry.
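Of the metrics above, mean distance to peers is straightforward to compute from projected GNSS fixes. A sketch with hypothetical positions (not the study's data) shows how an isolated ewe stands out:

```python
import math

def mean_distance_to_peers(positions):
    """Mean pairwise distance from each animal to every other animal.
    positions: list of (x, y) in metres (projected GNSS fixes)."""
    out = []
    for i, (xi, yi) in enumerate(positions):
        dists = [math.hypot(xi - xj, yi - yj)
                 for j, (xj, yj) in enumerate(positions) if j != i]
        out.append(sum(dists) / len(dists))
    return out

# Hypothetical fixes: three ewes grazing together, one isolated (lambing?)
flock = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (120.0, 120.0)]
d = mean_distance_to_peers(flock)
print(d[3] > max(d[:3]))  # the isolated ewe is farthest from her peers: True
```

Averaged per day, a sudden rise in this metric for one ewe is the isolation signal the study detected.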
7. Devi I, Singh P, Lathwal SS, Dudi K, Singh Y, Ruhil AP, Kumar A, Dash S, Malhotra R. Threshold values of acoustic features to assess estrous cycle phases in water buffaloes (Bubalus bubalis). Appl Anim Behav Sci 2019. DOI: 10.1016/j.applanim.2019.104838.
8. Kim Y, Sa J, Chung Y, Park D, Lee S. Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data. Sensors 2018; 18:4019. PMID: 30453674. PMCID: PMC6263678. DOI: 10.3390/s18114019.
Abstract
The use of IoT (Internet of Things) technology to manage pet dogs left alone at home is increasing, covering tasks such as automatic feeding, operation of play equipment, and location detection. Classifying the vocalizations of pet dogs using information from a sound sensor is an important way to analyze the behavior or emotions of dogs left alone. These sounds can be acquired by attaching an IoT sound sensor to the dog and then classifying the sound events (e.g., barking, growling, howling, and whining). However, sound sensors tend to transmit large amounts of data and consume considerable power, which is problematic for resource-constrained IoT sensor devices. In this paper, we propose a way to classify pet dog sound events while improving resource efficiency without significant degradation of accuracy. To achieve this, we acquire only the intensity of sounds, using a relatively resource-efficient noise sensor. This raises its own issue: the information lost relative to the full sound events makes it difficult to reach sufficient classification accuracy from intensity data alone. To address this problem, we apply a long short-term memory fully convolutional network (LSTM-FCN), a deep learning method for analyzing time-series data, and exploit bicubic interpolation. Experimental results show that the proposed noise-sensor-based method (Shapelet and LSTM-FCN on time series) improved energy efficiency tenfold without significant loss of accuracy compared with typical sound-sensor-based methods (Mel-frequency cepstral coefficients (MFCCs), spectrograms, and Mel-spectra for feature extraction, with support vector machine (SVM) and k-nearest neighbor (k-NN) classifiers).
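The intensity-only pipeline can be sketched as framewise RMS energy followed by resampling to a fixed length for the classifier. The paper uses bicubic interpolation; linear interpolation is substituted here to keep the sketch dependency-free, and the sample values are invented.

```python
import math

def frame_rms(samples, frame_len):
    """Framewise RMS intensity: the kind of scalar stream a cheap noise
    sensor reports, instead of full audio."""
    return [math.sqrt(sum(s * s for s in samples[i:i + frame_len]) /
                      len(samples[i:i + frame_len]))
            for i in range(0, len(samples), frame_len)]

def resample(series, n):
    """Linear interpolation to a fixed length (stand-in for the paper's
    bicubic interpolation)."""
    if len(series) == 1:
        return series * n
    out = []
    for k in range(n):
        pos = k * (len(series) - 1) / (n - 1)
        i = min(int(pos), len(series) - 2)
        frac = pos - i
        out.append(series[i] * (1 - frac) + series[i + 1] * frac)
    return out

loud_bark = [0.0, 0.9, -0.9, 0.8, -0.8, 0.1, -0.1, 0.05]
intensity = frame_rms(loud_bark, frame_len=4)
fixed = resample(intensity, 5)
print(len(fixed))  # fixed-length input for a time-series classifier
```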
Affiliation(s)
- Yunbin Kim
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Jaewon Sa
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Yongwha Chung
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Daihee Park
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Sungju Lee
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
9. Invited review: The evolution of cattle bioacoustics and application for advanced dairy systems. Animal 2018; 12:1250-1259. DOI: 10.1017/s1751731117002646.
11. Lee J, Jin L, Park D, Chung Y. Automatic Recognition of Aggressive Behavior in Pigs Using a Kinect Depth Sensor. Sensors 2016; 16:631. PMID: 27144572. PMCID: PMC4883322. DOI: 10.3390/s16050631.
Abstract
Aggression among pigs adversely affects economic returns and animal welfare in intensive pigsties. In this study, we developed a non-invasive, inexpensive, automatic monitoring prototype that uses a Kinect depth sensor to recognize aggressive behavior in a commercial pigpen. The method begins by extracting activity features from the Kinect depth information obtained in the pigsty. The detection and classification module, which employs two binary-classifier support vector machines in a hierarchical manner, detects aggressive activity and classifies it into sub-types such as head-to-head (or body) knocking and chasing. Our experimental results showed that this method detects aggressive pig behaviors effectively in terms of both cost (a low-cost Kinect depth sensor) and accuracy (detection and classification accuracies over 95.7% and 90.2%, respectively), either as a standalone solution or as a complement to existing methods.
Affiliation(s)
- Jonguk Lee
- Department of Computer and Information Science, Korea University, Sejong Campus, Sejong City 30019, Korea
- Long Jin
- Ctrip Co., 99 Fu Quan Road, IT Security Center, Shanghai 200335, China
- Daihee Park
- Department of Computer and Information Science, Korea University, Sejong Campus, Sejong City 30019, Korea
- Yongwha Chung
- Department of Computer and Information Science, Korea University, Sejong Campus, Sejong City 30019, Korea
12. Fault Detection and Diagnosis of Railway Point Machines by Sound Analysis. Sensors 2016; 16:549. PMID: 27092509. PMCID: PMC4851063. DOI: 10.3390/s16040549.
Abstract
Railway point devices act as actuators that provide different routes to trains by driving switchblades from the current position to the opposite one. Point failure can significantly affect railway operations, with potentially disastrous consequences, so early detection of anomalies is critical for monitoring and managing the condition of rail infrastructure. We present a data mining solution that uses audio data to efficiently detect and diagnose faults in railway condition monitoring systems. The system extracts mel-frequency cepstral coefficients (MFCCs) from audio data, reduces the feature dimensions using attribute subset selection, and employs support vector machines (SVMs) for early detection and classification of anomalies. Experimental results show that the system enables cost-effective fault detection and diagnosis using an inexpensive microphone, with accuracy exceeding 94.1% whether used alone or in combination with other known methods.
13. Lee J, Noh B, Jang S, Park D, Chung Y, Chang HH. Stress detection and classification of laying hens by sound analysis. Asian-Australasian Journal of Animal Sciences 2015; 28:592-598. PMID: 25656176. PMCID: PMC4341110. DOI: 10.5713/ajas.14.0654.
Abstract
Stress adversely affects the wellbeing of commercial chickens and carries an economic cost to the industry that cannot be ignored. In this paper, we develop an inexpensive, non-invasive, automatic online-monitoring prototype that uses sound data to notify producers of stressful situations in a commercial poultry facility. The proposed system is structured hierarchically around three binary-classifier support vector machines. It first selects an optimal acoustic feature subset from the sound emitted by the laying hens. The detection and classification module then detects stress from changes in the sound and classifies it into subsidiary types, such as physical stress from temperature change and mental stress from fear. Finally, an experimental evaluation was performed using real sound data from an audio-surveillance system. The accuracy in detecting stress approached 96.2%, and the classification model was validated with an average classification accuracy of 96.7% and satisfactory recall and precision.
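The hierarchical arrangement, a detector feeding a sub-type classifier, can be shown as a two-stage cascade. The threshold rules and feature names below are invented stand-ins for the paper's trained binary SVMs; only the cascade structure is the point.

```python
def hierarchical_classify(features, detect, classify_type):
    """Two-stage cascade: a detector first separates stress from normal;
    only detected-stress samples reach the second binary classifier,
    which labels the stress sub-type."""
    if not detect(features):
        return "normal"
    return "physical stress" if classify_type(features) else "mental stress"

# Hypothetical stand-in rules on (peak_freq_hz, call_rate_per_min):
detect = lambda f: f[1] > 30            # agitated hens call more often
classify_type = lambda f: f[0] < 2000   # low-pitched calls ~ thermal stress

print(hierarchical_classify((1500, 45), detect, classify_type))  # physical stress
print(hierarchical_classify((3500, 45), detect, classify_type))  # mental stress
print(hierarchical_classify((1500, 10), detect, classify_type))  # normal
```

The benefit of the cascade is that each stage solves an easier binary problem, which is why the paper trains separate SVMs rather than one multi-class model.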
Affiliation(s)
- Daihee Park
- Department of Animal Science, Institute of Agriculture and Life Sciences, College of Agriculture and Life Sciences, Gyeongsang National University, Jinju 660-701, Korea
- Hong-Hee Chang
- Department of Animal Science, Institute of Agriculture and Life Sciences, College of Agriculture and Life Sciences, Gyeongsang National University, Jinju 660-701, Korea