1. Yan H, Wu Y, Bo Y, Han Y, Ren G. Study on the Impact of LDA Preprocessing on Pig Face Identification with SVM. Animals (Basel) 2025;15:231. PMID: 39858231; PMCID: PMC11759145; DOI: 10.3390/ani15020231.
Abstract
In this study, the implementation of traditional machine learning models in the intelligent management of swine is explored, focusing on the impact of LDA preprocessing on pig facial recognition using an SVM. Through experimental analysis, the kernel functions for two testing protocols, one utilizing an SVM exclusively and the other employing a combination of LDA and an SVM, were identified as polynomial and RBF, both with coefficients of 0.03. Individual identification tests conducted on 10 pigs demonstrated that the enhanced protocol improved identification accuracy from 83.66% to 86.30%. Additionally, the training and testing durations were reduced to 0.7% and 0.3% of the original times, respectively. These findings suggest that LDA preprocessing significantly enhances the efficiency of individual pig identification using an SVM, providing empirical evidence for the deployment of SVM classifiers in mobile and embedded systems.
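The LDA-then-SVM protocol described above maps naturally onto a scikit-learn pipeline. The sketch below is a minimal illustration, not the authors' code: the face features are synthetic placeholders, and the abstract's "coefficient of 0.03" is read here as the RBF kernel's gamma, which is an assumption since the exact hyperparameter is not named.

```python
# Minimal sketch of the LDA -> SVM identification pipeline (not the authors' code).
# Assumptions: features are flattened face images; gamma=0.03 stands in for the
# "coefficient of 0.03" mentioned in the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64 * 64))   # placeholder for 500 flattened face images
y = rng.integers(0, 10, size=500)     # 10 pig identities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# LDA projects to at most n_classes - 1 dimensions, which is what makes the
# downstream SVM so much faster to train and test.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=9),
                      SVC(kernel="rbf", gamma=0.03))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
```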
Affiliation(s)
- Hongwen Yan
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China; (Y.W.); (Y.B.); (Y.H.); (G.R.)
2. Rossi FB, Rossi N, Orso G, Barberis L, Marin RH, Kembro JM. Monitoring poultry social dynamics using colored tags: Avian visual perception, behavioral effects, and artificial intelligence precision. Poult Sci 2025;104:104464. PMID: 39577175; PMCID: PMC11617678; DOI: 10.1016/j.psj.2024.104464.
Abstract
Artificial intelligence (AI) in animal behavior and welfare research is on the rise. AI can detect behaviors and localize animals in video recordings, making it a valuable tool for studying social dynamics. However, maintaining the identity of individuals over time, especially in homogeneous poultry flocks, remains challenging for algorithms. We propose using differentially colored "backpack" tags (black, gray, white, orange, red, purple, and green) detectable with computer vision (e.g., YOLO) from top-view video recordings of pens. These tags can also accommodate sensors, such as accelerometers. In separate experiments, we aimed to: (i) evaluate avian visual perception of the different colored tags; (ii) assess the potential impact of tag colors on social behavior; and (iii) test the ability of the YOLO model to accurately distinguish between different colored tags on Japanese quail in social group settings. First, the reflectance spectra of tags and feathers were measured, and an avian visual model was applied to calculate the quantum catches for each spectrum. Green and purple tags showed significant chromatic contrast to the feathers, and most tags stimulated the luminance receptor more strongly than feathers did. Birds wearing white, gray, purple, and green tags pecked significantly more at their own tags than those with black (control) tags. Additionally, fewer aggressive interactions were observed in groups with orange tags compared with groups with other colors, except for red. Next, heterogeneous groups of 5 birds with different color tags were video-recorded for 1 h. The precision and accuracy of YOLO in detecting each color tag were assessed, yielding values of 95.9% and 97.3%, respectively, with most errors stemming from misclassifications between black and gray tags. Lastly, using the YOLO output, we estimated each bird's average social distance, locomotion speed, and percentage of time spent moving. No behavioral differences associated with tag color were detected. In conclusion, carefully selected colored backpack tags can be identified using AI models and can also hold other sensors, making them powerful tools for behavioral and welfare studies.
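Deriving the per-bird social metrics mentioned above from detector output reduces to geometry on per-frame tag centroids. The sketch below assumes one (x, y) centroid per tagged bird per frame (e.g., parsed from YOLO detections); the frame rate and movement threshold are illustrative assumptions.

```python
# Sketch: social distance, speed, and % time moving from per-frame tag centroids.
# Assumes each frame yields one (x, y) centroid per color tag, e.g. from YOLO.
import numpy as np

FPS = 25            # assumed camera frame rate
MOVE_THRESH = 2.0   # assumed speed threshold (px/frame) for "moving"

rng = np.random.default_rng(1)
# positions[t, i] = (x, y) of bird i at frame t; placeholder random walk
positions = np.cumsum(rng.normal(scale=2.0, size=(500, 5, 2)), axis=0)

# Average social distance: mean pairwise distance per frame, averaged over time.
diffs = positions[:, :, None, :] - positions[:, None, :, :]
pair_dist = np.linalg.norm(diffs, axis=-1)              # shape (t, i, j)
iu = np.triu_indices(positions.shape[1], k=1)
social_distance = pair_dist[:, iu[0], iu[1]].mean()

# Locomotion speed and percentage of time moving, per bird.
step = np.linalg.norm(np.diff(positions, axis=0), axis=-1)   # (t-1, i)
speed_px_s = step.mean(axis=0) * FPS
pct_moving = 100.0 * (step > MOVE_THRESH).mean(axis=0)

print(f"mean social distance: {social_distance:.1f} px")
print("speed (px/s):", np.round(speed_px_s, 1))
print("% time moving:", np.round(pct_moving, 1))
```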
Affiliation(s)
- Florencia B Rossi
- Instituto de Investigaciones Biológicas y Tecnológicas (IIByT, CONICET-UNC), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Avenida Vélez Sarsfield 1611, Córdoba, Córdoba, Argentina; Facultad de Ciencias Exactas, Físicas y Naturales, Universidad Nacional de Córdoba (UNC), Instituto de Ciencia y Tecnología de los Alimentos (ICTA), Córdoba, Córdoba, Argentina
- Nicola Rossi
- Universidad Nacional de Córdoba, Facultad de Ciencias Exactas Físicas y Naturales, Laboratorio de Biología del Comportamiento, Córdoba, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Instituto de Diversidad y Ecología Animal (IDEA), Córdoba, Argentina
- Gabriel Orso
- Instituto de Investigaciones Biológicas y Tecnológicas (IIByT, CONICET-UNC), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Avenida Vélez Sarsfield 1611, Córdoba, Córdoba, Argentina; Facultad de Ciencias Exactas, Físicas y Naturales, Universidad Nacional de Córdoba (UNC), Instituto de Ciencia y Tecnología de los Alimentos (ICTA), Córdoba, Córdoba, Argentina
- Lucas Barberis
- Facultad de Matemática, Astronomía Física y Computación, Universidad Nacional de Córdoba, Córdoba, Argentina; Instituto de Física Enrique Gaviola (IFEG, CONICET-UNC), Córdoba, Córdoba, Argentina
- Raul H Marin
- Instituto de Investigaciones Biológicas y Tecnológicas (IIByT, CONICET-UNC), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Avenida Vélez Sarsfield 1611, Córdoba, Córdoba, Argentina; Facultad de Ciencias Exactas, Físicas y Naturales, Universidad Nacional de Córdoba (UNC), Instituto de Ciencia y Tecnología de los Alimentos (ICTA), Córdoba, Córdoba, Argentina
- Jackelyn M Kembro
- Instituto de Investigaciones Biológicas y Tecnológicas (IIByT, CONICET-UNC), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Avenida Vélez Sarsfield 1611, Córdoba, Córdoba, Argentina; Facultad de Ciencias Exactas, Físicas y Naturales, Universidad Nacional de Córdoba (UNC), Instituto de Ciencia y Tecnología de los Alimentos (ICTA), Córdoba, Córdoba, Argentina.
3. Cao KX, Deng ZC, Li SJ, Yi D, He X, Yang XJ, Guo YM, Sun LH. Poultry Nutrition: Achievement, Challenge, and Strategy. J Nutr 2024;154:3554-3565. PMID: 39424066; DOI: 10.1016/j.tjnut.2024.10.030.
Abstract
Poultry are vital economic animals and provide a high-quality protein source for human nutrition. Over the past decade, the poultry industry has witnessed substantial achievements in breeding, precision feeding, and welfare farming. However, many challenges still restrict the sustainable development of the poultry industry. First, breeding strategies overly focused on production performance have been shown to induce metabolic diseases in poultry. Second, a lack of robust methods for assessing nutritional requirements poses a challenge to the practical implementation of precision feeding. Third, antibiotic alternatives and feed safety management remain pressing concerns within the poultry industry. Lastly, environmental pollution and inadequate welfare management in farming have a negative effect on poultry health. Numerous strategies and innovative approaches have been proposed, but each has its own strengths and limitations. In this review, we aim to provide a comprehensive understanding of the poultry industry over the past decade by examining its achievements, challenges, and strategies, to guide its future direction.
Affiliation(s)
- Ke-Xin Cao
- State Key Laboratory of Agricultural Microbiology, Hubei Hongshan Laboratory, Frontiers Science Center for Animal Breeding and Sustainable Production, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan, Hubei, China
- Zhang-Chao Deng
- State Key Laboratory of Agricultural Microbiology, Hubei Hongshan Laboratory, Frontiers Science Center for Animal Breeding and Sustainable Production, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan, Hubei, China
- Shi-Jun Li
- State Key Laboratory of Agricultural Microbiology, Hubei Hongshan Laboratory, Frontiers Science Center for Animal Breeding and Sustainable Production, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan, Hubei, China
- Dan Yi
- Hubei Key Laboratory of Animal Nutrition and Feed Science, Wuhan Polytechnic University, Wuhan, Hubei, China
- Xi He
- College of Animal Science and Technology, Hunan Agricultural University, Changsha, China
- Xiao-Jun Yang
- College of Animal Science and Technology, Northwest A&F University, Yangling, Shaanxi, China
- Yu-Ming Guo
- State Key Laboratory of Animal Nutrition, Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, China Agricultural University, Beijing, China.
- Lv-Hui Sun
- State Key Laboratory of Agricultural Microbiology, Hubei Hongshan Laboratory, Frontiers Science Center for Animal Breeding and Sustainable Production, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan, Hubei, China.
4. Bist RB, Bist K, Poudel S, Subedi D, Yang X, Paneru B, Mani S, Wang D, Chai L. Sustainable poultry farming practices: a critical review of current strategies and future prospects. Poult Sci 2024;103:104295. PMID: 39312848; PMCID: PMC11447413; DOI: 10.1016/j.psj.2024.104295.
Abstract
As global demand for poultry products, environmental sustainability, and health consciousness rises, the poultry industry faces both substantial challenges and new opportunities. This review paper therefore provides a comprehensive overview of sustainable poultry farming, focusing on integrating genetic improvements, alternative feeds, precision technologies, waste management, and biotechnological innovations. Together, these strategies aim to minimize ecological footprints, uphold ethical standards, improve economic feasibility, and enhance industry resilience. The review explores various sustainable strategies, including eco-conscious organic farming practices and innovative feed sources like insect-based proteins, single-cell proteins, algal supplements, and food waste utilization. It also addresses barriers to adoption, such as technical challenges, financial constraints, knowledge gaps, and policy frameworks, which are crucial for advancing the poultry industry. Organic poultry farming is examined in detail, noting benefits such as reduced pesticide use and improved animal welfare. Additionally, the review discusses optimizing feed efficiency, alternative energy sources (solar photovoltaic/thermal), effective waste management, and the importance of poultry welfare. Transformative strategies, such as holistic farming systems and integrated approaches, are proposed to improve resource use and nutrient cycling and to promote climate-smart agricultural practices. The review underscores the need for a structured roadmap, education, and extension services through digital platforms and participatory learning to promote sustainable poultry farming for future generations. It emphasizes the need for collaboration and knowledge exchange among stakeholders and the crucial role of researchers, policymakers, and industry professionals in shaping a future where sustainable poultry practices lead the industry, committed to ethical and resilient poultry production.
Affiliation(s)
- Ramesh Bahadur Bist
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA; Biological and Agricultural Engineering, College of Engineering, University of Arkansas, Fayetteville, AR 72701, USA
- Keshav Bist
- Department of Electronics and Computer Engineering, Institute of Engineering, Tribhuvan University, Pokhara 33700, Nepal
- Sandesh Poudel
- College of Engineering, University of Georgia, Athens, GA 30602, USA
- Deepak Subedi
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Xiao Yang
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Bidur Paneru
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Sudhagar Mani
- College of Engineering, University of Georgia, Athens, GA 30602, USA
- Dongyi Wang
- Biological and Agricultural Engineering, College of Engineering, University of Arkansas, Fayetteville, AR 72701, USA
- Lilong Chai
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA.
5. Yang X, Bist R, Paneru B, Chai L. Monitoring activity index and behaviors of cage-free hens with advanced deep learning technologies. Poult Sci 2024;103:104193. PMID: 39191000; PMCID: PMC11396067; DOI: 10.1016/j.psj.2024.104193.
Abstract
Chickens' behaviors and activities are important information for managing animal health and welfare in commercial poultry houses. In this study, convolutional neural network (CNN) models were developed to monitor the chicken activity index. A dataset of 1,500 top-view images was used to construct tracking models, with 900 images allocated for training, 300 for validation, and 300 for testing. Six different models were developed based on YOLOv5, YOLOv8, ByteTrack, DeepSORT, and StrongSORT. The final results demonstrated that the combination of YOLOv8 and DeepSORT exhibited the highest performance, achieving a multi-object tracking accuracy (MOTA) of 94%. Further application of the optimal model could facilitate the detection of abnormal behaviors such as smothering and piling, and enable the quantification of flock activity into 3 levels (low, medium, and high) to evaluate footpad health states in the flock. This research underscores the application of deep learning in monitoring the poultry activity index for assessing animal health and welfare.
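One plausible way to turn tracker output (e.g., from YOLOv8 + DeepSORT) into the three-level activity index described above is to average per-frame centroid displacements; the sketch below does this on synthetic tracks, and the level cut-offs are illustrative assumptions, not values from the paper.

```python
# Sketch: flock activity index from multi-object tracking output. Each track is
# a sequence of per-frame centroids; the level thresholds are assumed.
import numpy as np

def activity_index(tracks: np.ndarray) -> float:
    """Mean per-frame centroid displacement over all tracked hens.
    tracks: array of shape (frames, hens, 2)."""
    step = np.linalg.norm(np.diff(tracks, axis=0), axis=-1)
    return float(step.mean())

def activity_level(index: float, low: float = 1.0, high: float = 4.0) -> str:
    # Illustrative cut-offs separating the 3 levels mentioned in the abstract.
    if index < low:
        return "low"
    return "medium" if index < high else "high"

rng = np.random.default_rng(2)
tracks = np.cumsum(rng.normal(scale=1.5, size=(300, 20, 2)), axis=0)
idx = activity_index(tracks)
print(f"activity index: {idx:.2f} px/frame -> {activity_level(idx)}")
```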
Affiliation(s)
- Xiao Yang
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Ramesh Bist
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Bidur Paneru
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Lilong Chai
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA.
6. Bist RB, Yang X, Subedi S, Chai L. Automatic detection of bumblefoot in cage-free hens using computer vision technologies. Poult Sci 2024;103:103780. PMID: 38688138; PMCID: PMC11067544; DOI: 10.1016/j.psj.2024.103780.
Abstract
Cage-free (CF) housing systems are expected to be the dominant egg production system in North America and European Union countries by 2030. Within these systems, bumblefoot (a common bacterial infection and chronic inflammatory reaction) is mostly observed in hens reared on litter floors. It causes pain and stress in hens and is detrimental to their welfare. For instance, hens with bumblefoot have difficulty moving freely, which hinders access to feeders and drinkers. However, it is technically challenging to detect hens with bumblefoot, and no automatic methods have been applied for hens' bumblefoot detection (BFD), especially in its early stages. This study aimed to develop and test artificial intelligence methods (i.e., deep learning models) to detect hens' bumblefoot condition in a CF environment under various settings such as epochs (number of times the entire dataset passes through the network during training), batch size (number of data samples processed per iteration during training), and camera height. The performances of 3 newly developed deep learning models (i.e., YOLOv5s-BFD, YOLOv5m-BFD, and YOLOv5x-BFD) were compared in detecting hens with bumblefoot in CF environments. The results show that the YOLOv5m-BFD model had the highest precision (93.7%), recall (84.6%), mAP@0.50 (90.9%), mAP@0.50:0.95 (51.8%), and F1-score (89.0%) of the compared models. The YOLOv5m-BFD model trained for 400 epochs with a batch size of 16 is recommended for bumblefoot detection in laying hens. This study provides a basis for developing an automatic bumblefoot detection system in commercial CF houses. The model will be modified and trained to detect bumblefoot in broilers in the future.
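For a YOLOv5-family detector like the YOLOv5m-BFD model, inference with custom weights typically follows the public ultralytics/yolov5 torch.hub pattern sketched below; the weights filename, image path, and confidence threshold are hypothetical placeholders, not artifacts from the paper.

```python
# Sketch: running a custom-trained YOLOv5 detector (e.g., a bumblefoot model)
# via the public ultralytics/yolov5 torch.hub interface. "bfd_best.pt" and the
# image path are hypothetical.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="bfd_best.pt")
model.conf = 0.5                       # assumed confidence threshold

results = model("hen_pen_frame.jpg")   # hypothetical top-view image
detections = results.pandas().xyxy[0]  # one row per detected case
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```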
Affiliation(s)
- Ramesh Bahadur Bist
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Xiao Yang
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Sachin Subedi
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Lilong Chai
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA.
7. Shibanoki T, Yamazaki Y, Tonooka H. A System for Monitoring Animals Based on Behavioral Information and Internal State Information. Animals (Basel) 2024;14:281. PMID: 38254450; PMCID: PMC11154535; DOI: 10.3390/ani14020281.
Abstract
Managing the risk of injury or illness is an important consideration when keeping pets. This risk can be minimized if pets are monitored regularly, but monitoring is difficult and time-consuming. Moreover, because only the external behavior of the animal can be observed, and not its internal condition, the animal's state can easily be misjudged. Some systems use heart-rate measurement to determine a state of tension or rest, but because heart rate also rises with exercise, such measurements should be used in combination with behavioral information. In the current study, we proposed a monitoring system for animals based on video image analysis. The proposed system first extracts features related to behavioral information and the animal's internal state via Mask R-CNN using video images taken from the top of the cage. These features are used to detect typical daily activities and anomalous activities, and the method produces an alert when the hamster behaves in an unusual way. In our experiment, the daily behavior of a hamster was measured and analyzed using the proposed system. The results showed that the features of the hamster's behavior were successfully detected, and when loud sounds were presented from outside the cage, the system was able to discriminate between behavioral and internal changes in the hamster. In future research, we plan to improve the accuracy of the measurement of small movements and develop a more accurate system.
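A toy sketch of the alerting idea: combine a behavioral feature (movement) with an internal-state proxy and flag frames that deviate from the animal's baseline. The feature names, baseline window, and z-score rule below are assumptions for illustration, not the paper's method.

```python
# Sketch: flag unusual behavior by z-scoring per-frame features against a
# baseline window. "movement" and "internal" are hypothetical features
# (e.g., derived from Mask R-CNN masks); thresholds are illustrative.
import numpy as np

def alerts(features: np.ndarray, baseline_frames: int = 200, z_thresh: float = 3.0):
    base = features[:baseline_frames]
    mu, sd = base.mean(axis=0), base.std(axis=0) + 1e-9
    z = np.abs((features - mu) / sd)
    return np.where((z > z_thresh).any(axis=1))[0]   # frame indices to alert on

rng = np.random.default_rng(3)
feats = rng.normal(size=(1000, 2))   # columns: movement, internal-state proxy
feats[700:720, 0] += 8.0             # inject a burst of unusual movement
print("alert frames:", alerts(feats)[:10])
```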
Affiliation(s)
- Taro Shibanoki
- Department of Intelligent Mechanical Systems, Faculty of Environmental, Life, Natural Science and Technology, Okayama University, Okayama 700-8530, Japan
- Yuugo Yamazaki
- Major in Computer and Information Sciences, Graduate School of Science and Engineering, Ibaraki University, Hitachi 316-8511, Japan; (Y.Y.); (H.T.)
- Hideyuki Tonooka
- Major in Computer and Information Sciences, Graduate School of Science and Engineering, Ibaraki University, Hitachi 316-8511, Japan; (Y.Y.); (H.T.)
8. Guo Z, He Z, Lyu L, Mao A, Huang E, Liu K. Automatic Detection of Feral Pigeons in Urban Environments Using Deep Learning. Animals (Basel) 2024;14:159. PMID: 38200890; PMCID: PMC10777961; DOI: 10.3390/ani14010159.
Abstract
The overpopulation of feral pigeons in Hong Kong has significantly disrupted the urban ecosystem, highlighting the urgent need for effective strategies to control their population. In general, control measures should be implemented and re-evaluated periodically following accurate estimations of the feral pigeon population in the concerned regions, which, however, is very difficult in urban environments due to the concealment and mobility of pigeons within complex building structures. With the advances in deep learning, computer vision can be a promising tool for pigeon monitoring and population estimation but has not been well investigated so far. Therefore, we propose an improved deep learning model (Swin-Mask R-CNN with SAHI) for feral pigeon detection. Our model consists of three parts. Firstly, the Swin Transformer network (STN) extracts deep feature information. Secondly, the Feature Pyramid Network (FPN) fuses multi-scale features to learn at different scales. Lastly, the model's three head branches are responsible for classification, best bounding box prediction, and segmentation. During the prediction phase, we utilize a Slicing-Aided Hyper Inference (SAHI) tool to focus on the feature information of small feral pigeon targets. Experiments were conducted on a feral pigeon dataset to evaluate model performance. The results reveal that our model achieves excellent recognition performance for feral pigeons.
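The Slicing-Aided Hyper Inference step used here for small pigeon targets is provided by the open-source SAHI library, whose sliced-prediction API follows the pattern below. The model type, checkpoint and config paths, and slice sizes are placeholders and assumptions; this is a sketch of the general usage, not the authors' exact configuration.

```python
# Sketch: Slicing-Aided Hyper Inference (SAHI) over a high-resolution image.
# Paths and the mmdet config are hypothetical; slice sizes are assumed.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="mmdet",                   # a Swin-Mask R-CNN would be an mmdet model
    model_path="swin_mask_rcnn.pth",      # hypothetical checkpoint
    config_path="swin_mask_rcnn_cfg.py",  # hypothetical config
    confidence_threshold=0.4,
)

result = get_sliced_prediction(
    "street_scene.jpg",                   # hypothetical urban image
    detection_model,
    slice_height=512, slice_width=512,    # assumed slice geometry
    overlap_height_ratio=0.2, overlap_width_ratio=0.2,
)
print(f"pigeons detected: {len(result.object_prediction_list)}")
```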
Affiliation(s)
- Zhaojin Guo
- Department of Infectious Diseases and Public Health, City University of Hong Kong, Hong Kong SAR, China; (Z.G.); (A.M.)
- Zheng He
- Department of Infectious Diseases and Public Health, City University of Hong Kong, Hong Kong SAR, China; (Z.G.); (A.M.)
- Li Lyu
- Department of Infectious Diseases and Public Health, City University of Hong Kong, Hong Kong SAR, China; (Z.G.); (A.M.)
- Axiu Mao
- Department of Infectious Diseases and Public Health, City University of Hong Kong, Hong Kong SAR, China; (Z.G.); (A.M.)
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou 310018, China
- Endai Huang
- Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China;
- Kai Liu
- Department of Infectious Diseases and Public Health, City University of Hong Kong, Hong Kong SAR, China; (Z.G.); (A.M.)
9. Nakrosis A, Paulauskaite-Taraseviciene A, Raudonis V, Narusis I, Gruzauskas V, Gruzauskas R, Lagzdinyte-Budnike I. Towards Early Poultry Health Prediction through Non-Invasive and Computer Vision-Based Dropping Classification. Animals (Basel) 2023;13:3041. PMID: 37835647; PMCID: PMC10571708; DOI: 10.3390/ani13193041.
Abstract
The use of artificial intelligence with advanced computer vision techniques offers great potential for non-invasive health assessments in the poultry industry. Evaluating the condition of poultry by monitoring their droppings can be highly valuable, as significant changes in consistency and color can be indicators of serious and infectious diseases. While most studies have prioritized the classification of droppings into two categories (normal and abnormal), with some relevant studies dealing with up to five categories, this investigation goes a step further by employing image processing algorithms to categorize droppings into six classes, based on visual information indicating some level of abnormality. To ensure a diverse dataset, data were collected on three different poultry farms in Lithuania by capturing droppings on different types of litter. With the implementation of deep learning, the object detection rate reached 92.41% accuracy. A range of machine learning algorithms, including different deep learning architectures, was explored and, based on the obtained results, we propose a comprehensive solution that combines different models for segmentation and classification purposes. The results revealed that the segmentation task achieved its highest accuracy, a Dice coefficient of 0.88, with the K-means algorithm, while YOLOv5 demonstrated the highest classification accuracy, achieving an ACC of 91.78%.
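The Dice coefficient used to score the segmentation is computed directly from binary masks; a minimal sketch follows, with a toy K-means color segmentation in the spirit of the paper (the image and ground-truth mask are synthetic placeholders).

```python
# Sketch: Dice coefficient for a binary dropping mask, plus a minimal K-means
# color segmentation in the spirit of the paper. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-9)

rng = np.random.default_rng(4)
image = rng.random((64, 64, 3))                    # placeholder RGB litter image
labels = KMeans(n_clusters=2, n_init=10).fit_predict(image.reshape(-1, 3))
pred_mask = labels.reshape(64, 64).astype(bool)    # treat cluster 1 as "dropping"

truth_mask = np.zeros((64, 64), dtype=bool)
truth_mask[20:40, 20:40] = True                    # toy ground-truth region
print(f"Dice: {dice(pred_mask, truth_mask):.2f}")
```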
Affiliation(s)
- Arnas Nakrosis
- Faculty of Informatics, Kaunas University of Technology, Studentu 50, 51368 Kaunas, Lithuania; (A.N.); (I.N.); (I.L.-B.)
- Agne Paulauskaite-Taraseviciene
- Faculty of Informatics, Kaunas University of Technology, Studentu 50, 51368 Kaunas, Lithuania; (A.N.); (I.N.); (I.L.-B.)
- Artificial Intelligence Centre, Kaunas University of Technology, K. Barsausko 59, 51423 Kaunas, Lithuania; (V.R.); (V.G.); (R.G.)
- Vidas Raudonis
- Artificial Intelligence Centre, Kaunas University of Technology, K. Barsausko 59, 51423 Kaunas, Lithuania; (V.R.); (V.G.); (R.G.)
- Faculty of Electrical and Electronics, Kaunas University of Technology, Studentu 48, 51367 Kaunas, Lithuania
- Ignas Narusis
- Faculty of Informatics, Kaunas University of Technology, Studentu 50, 51368 Kaunas, Lithuania; (A.N.); (I.N.); (I.L.-B.)
- Valentas Gruzauskas
- Artificial Intelligence Centre, Kaunas University of Technology, K. Barsausko 59, 51423 Kaunas, Lithuania; (V.R.); (V.G.); (R.G.)
- Institute of Computer Science, Vilnius University, 08303 Vilnius, Lithuania
- Romas Gruzauskas
- Artificial Intelligence Centre, Kaunas University of Technology, K. Barsausko 59, 51423 Kaunas, Lithuania; (V.R.); (V.G.); (R.G.)
- Ingrida Lagzdinyte-Budnike
- Faculty of Informatics, Kaunas University of Technology, Studentu 50, 51368 Kaunas, Lithuania; (A.N.); (I.N.); (I.L.-B.)
10. Wang Y, Zhou J, Zhang C, Luo Z, Han X, Ji Y, Guan J. Bird Object Detection: Dataset Construction, Model Performance Evaluation, and Model Lightweighting. Animals (Basel) 2023;13:2924. PMID: 37760324; PMCID: PMC10525479; DOI: 10.3390/ani13182924.
Abstract
The application of object detection technology plays a positive auxiliary role in advancing the intelligence of bird recognition and enhancing the convenience of bird field surveys. However, challenges arise from the absence of dedicated bird datasets and evaluation benchmarks. To address this, we not only constructed the largest known bird object detection dataset, but also compared the performance of eight mainstream detection models on bird detection tasks and proposed feasible approaches for model lightweighting in bird object detection. Our dataset, GBDD1433-2023, includes 1,433 globally common bird species and 148,000 manually annotated bird images. On this dataset, two-stage detection models like Faster R-CNN and Cascade R-CNN demonstrated superior performance, achieving a mean average precision (mAP) of 73.7%, higher than that of one-stage models. In addition, two-stage object detection models showed stronger robustness than one-stage models to variations in foreground image scaling and background interference in bird images. On bird counting tasks, accuracy ranged from 60.8% to 77.2% for up to five birds in an image, but decreased sharply beyond that count, suggesting limitations of object detection models in multi-bird counting tasks. Finally, we proposed an adaptive localization distillation method for one-stage lightweight object detection models suitable for offline deployment, which improved the performance of the relevant models. Overall, our work furnishes an enriched dataset and practical guidelines for selecting suitable bird detection models.
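The counting evaluation described above reduces to comparing per-image detection counts against annotations; a minimal sketch of that metric, with toy counts illustrating how errors grow for larger flocks:

```python
# Sketch: bird-counting accuracy from detector output, i.e., the fraction of
# images whose predicted count exactly matches the annotated count.
import numpy as np

def counting_accuracy(pred_counts, true_counts) -> float:
    pred, true = np.asarray(pred_counts), np.asarray(true_counts)
    return float((pred == true).mean())

true = [1, 2, 3, 5, 8, 12]   # annotated birds per image (toy values)
pred = [1, 2, 3, 4, 6, 9]    # detector counts; errors grow with flock size
print(f"counting accuracy: {counting_accuracy(pred, true):.2f}")
```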
Affiliation(s)
- Yang Wang
- Department of Computer Science and Technology, Tongji University, Shanghai 201804, China; (Y.W.); (J.G.)
- Jiangsu Province Engineering Research Center for Intelligent Monitoring and Management of Small Water Bodies, Huaiyin Normal University, Huaian 223300, China;
- Jiaogen Zhou
- Jiangsu Province Engineering Research Center for Intelligent Monitoring and Management of Small Water Bodies, Huaiyin Normal University, Huaian 223300, China;
- Caiyun Zhang
- Jiangsu Province Engineering Research Center for Intelligent Monitoring and Management of Small Water Bodies, Huaiyin Normal University, Huaian 223300, China;
- Zhaopeng Luo
- Huai’an City Zoo, Huaian 223300, China; (Z.L.); (X.H.)
- Xuexue Han
- Huai’an City Zoo, Huaian 223300, China; (Z.L.); (X.H.)
- Yanzhu Ji
- Key Laboratory of Zoological Systematics and Evolution, Institute of Zoology, Chinese Academy of Sciences, Beijing 100101, China;
- Jihong Guan
- Department of Computer Science and Technology, Tongji University, Shanghai 201804, China; (Y.W.); (J.G.)
11. Zhao S, Bai Z, Meng L, Han G, Duan E. Pose Estimation and Behavior Classification of Jinling White Duck Based on Improved HRNet. Animals (Basel) 2023;13:2878. PMID: 37760278; PMCID: PMC10525901; DOI: 10.3390/ani13182878.
Abstract
In breeding ducks, obtaining pose information is vital for perceiving their physiological health, ensuring welfare in breeding, and monitoring environmental comfort. This paper proposes a pose estimation method combining HRNet and CBAM to achieve automatic and accurate detection of multiple duck poses. Through comparison, HRNet-32 is identified as the optimal backbone for duck pose estimation. On this basis, multiple CBAM modules are densely embedded into the HRNet-32 network to obtain an HRNet-32-CBAM pose estimation model, realizing accurate detection and association of eight keypoints across six different behaviors. Furthermore, the model's generalization ability is tested under different illumination conditions, and its comprehensive detection abilities are evaluated on Cherry Valley ducklings of 12 and 24 days of age. Moreover, the model is compared with mainstream pose estimation methods to reveal its advantages and disadvantages, and its real-time performance is tested using images of 256 × 256, 512 × 512, and 728 × 728 pixels. The experimental results indicate that, on the duck pose estimation dataset, the proposed method achieves an average precision (AP) of 0.943, has strong generalization ability, and can estimate multiple duck poses in real time across different ages, breeds, and farming modes. This study can provide a technical reference and a basis for the intelligent farming of poultry.
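The CBAM block embedded into HRNet-32 here combines channel and spatial attention. Below is a standard PyTorch rendering following the original CBAM design (Woo et al., 2018), offered as a generic sketch rather than the authors' exact module; the reduction ratio and kernel size are conventional defaults.

```python
# Sketch: a standard CBAM module (channel + spatial attention) of the kind
# densely embedded into HRNet-32 in the paper.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(            # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: squeeze spatial dims with avg- and max-pooling.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: squeeze channels with avg- and max-pooling.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 32, 64, 48)           # e.g., one HRNet branch feature map
print(CBAM(32, reduction=8)(feat).shape)    # torch.Size([1, 32, 64, 48])
```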
Affiliation(s)
- Shida Zhao
- Institute of Agricultural Facilities and Equipment, Jiangsu Academy of Agricultural Sciences, Nanjing 210014, China
- Key Laboratory of Protected Agriculture Engineering in the Middle and Lower Reaches of Yangtze River, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
- Zongchun Bai
- Institute of Agricultural Facilities and Equipment, Jiangsu Academy of Agricultural Sciences, Nanjing 210014, China
- Key Laboratory of Protected Agriculture Engineering in the Middle and Lower Reaches of Yangtze River, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
- Lili Meng
- Institute of Agricultural Facilities and Equipment, Jiangsu Academy of Agricultural Sciences, Nanjing 210014, China
- Key Laboratory of Protected Agriculture Engineering in the Middle and Lower Reaches of Yangtze River, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
- School of Civil Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Malaysia
- Guofeng Han
- Institute of Agricultural Facilities and Equipment, Jiangsu Academy of Agricultural Sciences, Nanjing 210014, China
- Key Laboratory of Protected Agriculture Engineering in the Middle and Lower Reaches of Yangtze River, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
- Enze Duan
- Institute of Agricultural Facilities and Equipment, Jiangsu Academy of Agricultural Sciences, Nanjing 210014, China
- Key Laboratory of Protected Agriculture Engineering in the Middle and Lower Reaches of Yangtze River, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
12. Yang X, Bist RB, Subedi S, Chai L. A Computer Vision-Based Automatic System for Egg Grading and Defect Detection. Animals (Basel) 2023;13:2354. PMID: 37508131; PMCID: PMC10376079; DOI: 10.3390/ani13142354.
Abstract
Defective eggs diminish the value of laying hen production, particularly in cage-free systems with a higher incidence of floor eggs. To enhance quality, machine vision and image processing have facilitated the development of automated grading and defect detection systems. Additionally, egg measurement systems utilize weight-sorting for optimal market value. However, few studies have integrated deep learning and machine vision techniques for combined egg classification and weighing. To address this gap, a two-stage model was developed based on real-time multitask detection (RTMDet) and random forest networks to predict egg category and weight. The model combines convolutional neural network (CNN) and regression techniques to perform joint egg classification and weighing: RTMDet was used to sort and extract egg features for classification, and a random forest algorithm was used to predict egg weight from the extracted features (major axis and minor axis). The results of the study showed that the best achieved accuracy was 94.8% and the best R2 was 96.0%. In addition, the model can be used to automatically exclude non-standard-size eggs and eggs with exterior issues (e.g., calcium deposits, stains, and cracks). This detector is among the first models to jointly sort and weigh eggs, classifying them into five categories (intact, crack, bloody, floor, and non-standard) and measuring them up to jumbo size. By implementing the findings of this study, the poultry industry can reduce costs and increase productivity, ultimately leading to better-quality products for consumers.
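The second stage, predicting egg weight from the extracted major and minor axes with a random forest, can be sketched with scikit-learn as follows; the synthetic axis measurements and the toy weight formula stand in for the detector's real measurements.

```python
# Sketch: Random Forest regression of egg weight from major/minor axis lengths,
# the two features the RTMDet stage extracts in the paper. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
major = rng.uniform(50, 70, 400)   # mm, placeholder axis lengths
minor = rng.uniform(38, 50, 400)
# Toy ground truth: weight roughly proportional to an ellipsoid volume.
weight = 0.55e-3 * major * minor**2 + rng.normal(scale=1.0, size=400)

X = np.column_stack([major, minor])
X_tr, X_te, y_tr, y_te = train_test_split(X, weight, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out eggs: {rf.score(X_te, y_te):.3f}")
```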
Affiliation(s)
- Xiao Yang
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Sachin Subedi
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Lilong Chai
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
13. Yan H, Cai S, Li E, Liu J, Hu Z, Li Q, Wang H. Study on the Influence of PCA Pre-Treatment on Pig Face Identification with Random Forest. Animals (Basel) 2023;13:1555. PMID: 37174592; PMCID: PMC10177592; DOI: 10.3390/ani13091555.
Abstract
To explore the application of a traditional machine learning model in the intelligent management of pigs, this paper studies the influence of PCA pre-treatment on pig face identification with random forest (RF). Through this testing method, the parameters of the two testing schemes, one adopting RF alone and the other adopting RF + PCA, were determined to be 65 and 70, respectively. In individual identification tests carried out on 10 pigs, accuracy, recall, and F1-score increased by 2.66, 2.76, and 2.81 percentage points, respectively. Apart from a slight increase in training time, the test time was reduced to 75% of that of the original scheme, so the efficiency of the optimized scheme was greatly improved. This indicates that PCA pre-treatment positively improved the efficiency of individual pig identification with RF. Furthermore, it provides experimental support for mobile-terminal and embedded applications of RF classifiers.
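The RF + PCA scheme also maps onto a scikit-learn pipeline; in the sketch below, the value 70 is read as the number of retained principal components, which is an assumption since the abstract does not spell out which parameters the 65 and 70 refer to, and the face features are synthetic placeholders.

```python
# Sketch of the PCA -> Random Forest scheme (not the authors' code). The value
# 70 is assumed to be the retained PCA dimensionality; other settings are defaults.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 64 * 64))   # placeholder flattened pig-face images
y = rng.integers(0, 10, size=500)     # 10 pig identities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(PCA(n_components=70),
                      RandomForestClassifier(random_state=0))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
```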
Affiliation(s)
- Hongwen Yan
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Songrui Cai
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Erhao Li
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Jianyu Liu
- Science & Technology Information and Strategy Research Center of Shanxi, Taiyuan 030024, China
- Zhiwei Hu
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Qiangsheng Li
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Huiting Wang
- College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
14. Bist RB, Yang X, Subedi S, Chai L. Mislaying behavior detection in cage-free hens with deep learning technologies. Poult Sci 2023;102:102729. PMID: 37192567; DOI: 10.1016/j.psj.2023.102729.
Abstract
Floor egg-laying behavior (FELB) is one of the most concerning issues in commercial cage-free (CF) houses because floor eggs (i.e., eggs mislaid on the floor) result in high labor costs and food safety concerns. Farms with poor management may have up to 10% of eggs laid on the floor daily. Therefore, it is critical to improve floor egg management, and detecting FELB in a timely manner and identifying the reason behind it may address the issue. The primary objectives of this research were to develop and test a new deep learning model to detect FELB and to evaluate the model's performance in 4 identical research CF houses (200 Hy-Line W-36 hens per house), where perches and a litter floor were provided to mimic a commercial tiered aviary system. Five different YOLOv5 models (i.e., YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x) were trained and compared. On a dataset of 5,400 images (i.e., 3,780 for training, 1,080 for validation, and 540 for testing), the YOLOv5m-FELB and YOLOv5x-FELB models achieved higher precision (99.9%), recall (99.2%), mAP@0.50 (99.6%), and F1-score (99.6%) than the others. However, the YOLOv5m-NFELB model had lower recall than the other YOLOv5-NFELB models, although it achieved higher precision. The YOLOv5s model offered faster data processing (by 4%-45% FPS) and faster training (by 3%-148%) while requiring 1.8-4.8 times less GPU than the other models. Furthermore, a camera height of 0.5 m and a clean camera outperformed a 3 m height and a dusty camera. Thus, the newly developed and trained YOLOv5s model will be refined further, and future studies will be conducted to verify the performance of the model in detecting FELB in commercial CF houses.
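The precision, recall, and F1 figures reported above follow the standard definitions from true-positive, false-positive, and false-negative counts; the counts in this sketch are illustrative, not the paper's raw numbers.

```python
# Sketch: precision, recall, and F1-score from detection counts, the metrics
# used to compare the YOLOv5-FELB variants. Counts below are illustrative.
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = prf1(tp=992, fp=1, fn=8)   # toy counts approximating the reported scores
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```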
Affiliation(s)
- Ramesh Bahadur Bist
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Xiao Yang
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Sachin Subedi
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
- Lilong Chai
- Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA.
15. Qin X, Lai C, Pan Z, Pan M, Xiang Y, Wang Y. Recognition of Abnormal-Laying Hens Based on Fast Continuous Wavelet and Deep Learning Using Hyperspectral Images. Sensors (Basel) 2023;23:3645. PMID: 37050705; PMCID: PMC10098863; DOI: 10.3390/s23073645.
Abstract
The egg production of laying hens is crucial to breeding enterprises in the laying hen industry. However, there is currently no systematic or accurate method to identify low-egg-production laying hens on commercial farms, and the majority of these hens are identified by breeders based on their experience. To address this issue, we propose a method that is widely applicable and highly precise. First, breeders separate low-egg-production laying hens from normal laying hens. Then, under a halogen lamp, hyperspectral images of the two types of hens are captured via hyperspectral imaging equipment. The vertex component analysis (VCA) algorithm is used to extract the cockscomb end-member spectrum to obtain the cockscomb spectral feature curves of low-egg-production and normal laying hens. Next, fast continuous wavelet transform (FCWT) is employed to analyze the data of the feature curves to obtain a two-dimensional spectral feature image dataset. Finally, using the two-dimensional spectral image dataset of the low-egg-production and normal laying hens, we developed a deep learning model based on a convolutional neural network (CNN). When we tested the model on the prepared dataset, we found that its accuracy was 0.975. This outcome demonstrates that our identification method, which combines hyperspectral imaging technology, an FCWT data analysis method, and a CNN deep learning model, is highly effective and precise for laying-hen breeding plants. Furthermore, the attempt to use FCWT for the analysis and processing of hyperspectral data will have a significant impact on the research and application of hyperspectral technology in other fields due to its high efficiency and resolution for data signal analysis and processing.
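Turning a 1-D spectral feature curve into a 2-D time-frequency image, as FCWT does here, can be approximated with PyWavelets' continuous wavelet transform; in the sketch below, pywt.cwt stands in for the faster FCWT implementation, and the Morlet wavelet, scale range, and toy reflectance curve are all assumptions.

```python
# Sketch: build a 2-D scalogram "image" from a 1-D spectral curve with a
# continuous wavelet transform. PyWavelets' cwt stands in for FCWT here;
# the wavelet choice and scales are assumptions.
import numpy as np
import pywt

wavelengths = np.linspace(400, 1000, 300)            # nm, placeholder band axis
curve = np.exp(-((wavelengths - 560) / 40) ** 2)     # toy comb reflectance peak
curve += 0.05 * np.random.default_rng(7).normal(size=curve.size)

scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(curve, scales, "morl")      # coeffs shape: (64, 300)
scalogram = np.abs(coeffs)                           # 2-D input for the CNN
print("scalogram shape for CNN input:", scalogram.shape)
```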
Affiliation(s)
- Xing Qin
- Zhejiang Key Laboratory of Large-Scale Integrated Circuit Design, Hangzhou Dianzi University, Hangzhou 310018, China
- Chenxiao Lai
- Zhejiang Key Laboratory of Large-Scale Integrated Circuit Design, Hangzhou Dianzi University, Hangzhou 310018, China
- Zejun Pan
- Zhejiang Key Laboratory of Large-Scale Integrated Circuit Design, Hangzhou Dianzi University, Hangzhou 310018, China
- Mingzhong Pan
- Key Laboratory of Gravitational Wave Precision Measurement of Zhejiang Province, School of Physics and Photoelectric Engineering, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
- Yun Xiang
- Agriculture Science Research Institute, Jinhua 321000, China
- Yikun Wang
- Key Laboratory of Gravitational Wave Precision Measurement of Zhejiang Province, School of Physics and Photoelectric Engineering, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
16. Cakic S, Popovic T, Krco S, Nedic D, Babic D, Jovovic I. Developing Edge AI Computer Vision for Smart Poultry Farms Using Deep Learning and HPC. Sensors (Basel) 2023;23:3002. PMID: 36991712; PMCID: PMC10055782; DOI: 10.3390/s23063002.
Abstract
This research describes the use of high-performance computing (HPC) and deep learning to create prediction models that can be deployed on edge AI devices equipped with cameras and installed in poultry farms. The main idea is to leverage an existing IoT farming platform and use HPC offline to train deep learning models for object detection and object segmentation, where the objects are chickens in images taken on the farm. The models can then be ported from HPC to edge AI devices to create a new type of computer vision kit to enhance the existing digital poultry farm platform. Such new sensors enable functions such as counting chickens, detecting dead chickens, and even assessing their weight or detecting uneven growth. These functions, combined with the monitoring of environmental parameters, could enable early disease detection and improve the decision-making process. The experiments focused on Faster R-CNN architectures, and AutoML was used to identify the most suitable architecture for chicken detection and segmentation on the given dataset. For the selected architectures, further hyperparameter optimization was carried out, achieving AP = 85%, AP50 = 98%, and AP75 = 96% for object detection and AP = 90%, AP50 = 98%, and AP75 = 96% for instance segmentation. These models were installed on edge AI devices and evaluated online on actual poultry farms. Initial results are promising, but further development of the dataset and improvements to the prediction models are needed.
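Adapting a pretrained Faster R-CNN to a chicken class, as done offline on HPC here, typically amounts to swapping the box-predictor head before fine-tuning. A torchvision sketch follows; the class count, filename, and omitted training loop are assumptions, not details from the paper.

```python
# Sketch: prepare a torchvision Faster R-CNN for fine-tuning on a chicken
# dataset (background + chicken = 2 classes). Training-loop details omitted.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# After fine-tuning, the weights could be exported for the edge AI device, e.g.:
# torch.save(model.state_dict(), "chicken_detector.pt")  # hypothetical filename
print(model.roi_heads.box_predictor)
```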
Affiliation(s)
- Stevan Cakic
- Faculty for Information Systems and Technologies, University of Donja Gorica, Oktoih 1, 81000 Podgorica, Montenegro
- DigitalSmart, Bul. Dz. Vasingtona bb, 81000 Podgorica, Montenegro
- Tomo Popovic
- Faculty for Information Systems and Technologies, University of Donja Gorica, Oktoih 1, 81000 Podgorica, Montenegro
- DigitalSmart, Bul. Dz. Vasingtona bb, 81000 Podgorica, Montenegro
- Srdjan Krco
- DunavNET, Bul. Oslobodjenja 133/2, 21000 Novi Sad, Serbia
- Dejan Babic
- Faculty for Information Systems and Technologies, University of Donja Gorica, Oktoih 1, 81000 Podgorica, Montenegro
- Ivan Jovovic
- Faculty for Information Systems and Technologies, University of Donja Gorica, Oktoih 1, 81000 Podgorica, Montenegro
17. Subedi S, Bist R, Yang X, Chai L. Tracking Floor Eggs with Machine Vision in Cage-free Hen Houses. Poult Sci 2023;102:102637. PMID: 37011469; PMCID: PMC10090712; DOI: 10.1016/j.psj.2023.102637.
Abstract
Some of the major restaurants and grocery chains in the United States have pledged to buy only cage-free (CF) eggs by 2025 or 2030. While CF housing allows hens to perform more natural behaviors (e.g., dust bathing, perching, and foraging on the litter floor), a particular challenge is floor eggs (i.e., eggs mislaid on the litter floor). Floor eggs have a high chance of contamination, and collecting them manually is laborious and time-consuming. Therefore, precision poultry farming technology is necessary to detect floor eggs. In this study, 3 new deep learning models, i.e., the YOLOv5s-egg, YOLOv5x-egg, and YOLOv7-egg networks, were developed, trained, and compared for tracking floor eggs in 4 research cage-free laying hen facilities. The models were verified on images collected in 2 different commercial houses. Results indicate that the YOLOv5s-egg model detected floor eggs with a precision of 87.9%, recall of 86.8%, and mean average precision (mAP) of 90.9%; the YOLOv5x-egg model with a precision of 90%, recall of 87.9%, and mAP of 92.1%; and the YOLOv7-egg model with a precision of 89.5%, recall of 85.4%, and mAP of 88%. All models performed with over 85% detection precision; however, model performance was affected by stocking density, varying light intensity, and images occluded by equipment such as drinking lines, perches, and feeders. The YOLOv5x-egg model detected floor eggs with higher accuracy, precision, mAP, and recall than YOLOv5s-egg and YOLOv7-egg. This study provides cage-free producers with a reference showing that floor eggs can be monitored automatically. Future studies are warranted to test the system in commercial houses.
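The mAP figures above hinge on intersection-over-union matching between predicted and annotated egg boxes; the standard computation, with toy boxes for illustration:

```python
# Sketch: intersection-over-union (IoU) between two boxes in (x1, y1, x2, y2)
# format, the matching criterion behind the reported mAP values.
def iou(a, b) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, truth = (100, 100, 140, 130), (105, 98, 146, 132)   # toy egg boxes (px)
print(f"IoU = {iou(pred, truth):.2f}  (counts as a hit at the 0.5 threshold)")
```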
Affiliation(s)
- Sachin Subedi
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Ramesh Bist
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Xiao Yang
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
- Lilong Chai
- Department of Poultry Science, University of Georgia, Athens, GA 30602, USA.
18. Huang Y, Yang X, Guo J, Cheng J, Qu H, Ma J, Li L. A High-Precision Method for 100-Day-Old Classification of Chickens in Edge Computing Scenarios Based on Federated Computing. Animals (Basel) 2022;12:3450. PMID: 36552370; PMCID: PMC9774202; DOI: 10.3390/ani12243450.
Abstract
Due to the booming development of computer vision technology and artificial intelligence algorithms, it has become more feasible to implement artificial rearing of animals in real production scenarios. Improving the accuracy of day-age detection of chickens is one example and is of great importance for chicken rearing. This paper focuses on the problem of classifying the age of chickens within 100 days. Due to the huge amount of data and the varying computing power of devices in practical application scenarios, it is important to maximize the computing power of edge devices without sacrificing accuracy. This paper proposes a high-precision federated learning-based model that can be applied in edge computing scenarios. To accommodate different computing power in different scenarios, it proposes a dual-ended adaptive federated learning framework; to adapt to low-computing-power scenarios, it performs lightweighting operations on the mainstream model; and to verify the effectiveness of the model, it conducts a number of targeted experiments. Compared with AlexNet, VGG, ResNet, and GoogLeNet, this model improves the classification accuracy to 96.1%, which is 14.4% better than the baseline model, and improves recall and precision by 14.8% and 14.2%, respectively. In addition, by lightweighting the network, our method reduces the inference latency and transmission latency by 24.4 ms and 10.5 ms, respectively. Finally, the model was deployed in a real-world setting and an application was developed based on the WeChat SDK.
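The server-side aggregation step at the heart of federated learning frameworks like the one proposed here is typically FedAvg: averaging client model weights, optionally weighted by each client's dataset size. A minimal PyTorch sketch, in which the tiny CNN and the client sizes are stand-ins, not the paper's lightweight model:

```python
# Sketch: FedAvg-style aggregation of client model weights, the core server-side
# step in a federated learning framework. The model here is a toy stand-in.
import torch
import torch.nn as nn

def fed_avg(state_dicts, client_sizes):
    """Average client state_dicts, weighted by each client's dataset size."""
    total = float(sum(client_sizes))
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(state_dicts, client_sizes))
    return avg

def make_client():
    return nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(8, 100))   # 100 day-age classes

clients = [make_client() for _ in range(3)]   # locally trained in practice
global_state = fed_avg([c.state_dict() for c in clients],
                       client_sizes=[120, 300, 80])
server = make_client()
server.load_state_dict(global_state)          # broadcast back to clients next round
print("aggregated", len(global_state), "parameter tensors")
```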
Affiliation(s)
- Yikang Huang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Xinze Yang
- College of Economics and Management, China Agricultural University, Beijing 100083, China
- Jiangyi Guo
- College of Economics and Management, China Agricultural University, Beijing 100083, China
- Jia Cheng
- College of Agronomy and Biotechnology, China Agricultural University, Beijing 100083, China
- Hao Qu
- Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, China
- Jie Ma
- Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, China
- Correspondence: (J.M.); (L.L.)
- Lin Li
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Correspondence: (J.M.); (L.L.)