1. Degani O. Plant Fungal Diseases and Crop Protection. J Fungi (Basel) 2025; 11:274. PMID: 40278095; PMCID: PMC12029081; DOI: 10.3390/jof11040274. Received: 03/26/2025; Accepted: 03/31/2025. Open access.
Abstract
Fungi represent the largest group of plant pathogens, infecting their hosts via leaves, seeds, and roots [...].
Affiliation(s)
- Ofir Degani
- Plant Sciences Department, MIGAL—Galilee Research Institute, Tarshish 2, Kiryat Shmona 1101600, Israel; Tel.: +972-54-678-0114
- Faculty of Sciences, Tel-Hai College, Upper Galilee, Tel-Hai 1220800, Israel
2. Li Z, Deng Q, Liu P, Bai J, Gong Y, Yang Q, Ning J. An intelligent identification and classification system of decoration waste based on deep learning model. Waste Manag (New York, N.Y.) 2024; 174:462-475. PMID: 38113671; DOI: 10.1016/j.wasman.2023.12.020. Received: 06/26/2023; Revised: 12/04/2023; Accepted: 12/11/2023.
Abstract
Efficient sorting and recycling of decoration waste are crucial for the industry's transformation, upgrading, and high-quality development. However, decoration waste can contain toxic materials and varies greatly in composition. The traditional method of manually sorting decoration waste is inefficient and poses health risks to sorting workers. It is therefore imperative to develop an accurate and efficient intelligent classification method to address these issues. To meet the demand for intelligent identification and classification of decoration waste, this paper applied the deep learning method You Only Look Once X (YOLOX) to the task and proposed an identification and classification framework for decoration waste (the YOLOX-DW framework). The proposed framework was validated and compared using a multi-label image dataset of decoration waste, and a robotic automatic sorting system was constructed for practical sorting experiments. The results show that the proposed framework achieved a mean average precision (mAP) of 99.16 % for different components of decoration waste, with a detection speed of 39.23 FPS. Its classification efficiency on the robot sorting experimental platform reached 95.06 %, indicating strong potential for practical application. This provides a strategy for the intelligent detection, identification, and classification of decoration waste.
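As an illustrative aside, the mean average precision (mAP) figure reported above is built from intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal, self-contained sketch of that computation for one class at a single IoU threshold (toy boxes only; this is not the YOLOX-DW implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(predictions, ground_truth, iou_threshold=0.5):
    """predictions: list of (score, box); ground_truth: list of boxes.
    Greedily match detections (highest score first) to unmatched ground
    truth, then average precision over the recall steps."""
    predictions = sorted(predictions, key=lambda p: -p[0])
    matched, tp, precisions = set(), 0, []
    for rank, (score, box) in enumerate(predictions, start=1):
        best = max(
            ((i, iou(box, gt)) for i, gt in enumerate(ground_truth) if i not in matched),
            key=lambda t: t[1], default=(None, 0.0))
        if best[1] >= iou_threshold:
            matched.add(best[0])
            tp += 1
            precisions.append(tp / rank)  # precision at this recall point
    return sum(precisions) / len(ground_truth) if ground_truth else 0.0
```

mAP is then this quantity averaged over classes (and, in COCO-style protocols, over several IoU thresholds).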
Affiliation(s)
- Zuohua Li
- School of Civil and Environmental Engineering, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Intelligent and Resilient Structures for Civil Engineering, Shenzhen 518055, China
- Quanxue Deng
- School of Civil and Environmental Engineering, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Intelligent and Resilient Structures for Civil Engineering, Shenzhen 518055, China.
- Peicheng Liu
- School of Civil and Environmental Engineering, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Intelligent and Resilient Structures for Civil Engineering, Shenzhen 518055, China
- Jing Bai
- The Institute for Sustainable Development, Macau University of Science and Technology, Macau 999078, China
- Yunxuan Gong
- School of Civil and Environmental Engineering, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Intelligent and Resilient Structures for Civil Engineering, Shenzhen 518055, China
- Qitao Yang
- School of Civil and Environmental Engineering, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Intelligent and Resilient Structures for Civil Engineering, Shenzhen 518055, China
- Jiafei Ning
- School of Civil and Environmental Engineering, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Intelligent and Resilient Structures for Civil Engineering, Shenzhen 518055, China
3. Demmer CR, Demmer S, McIntyre T. Drones as a tool to study and monitor endangered Grey Crowned Cranes (Balearica regulorum): Behavioural responses and recommended guidelines. Ecol Evol 2024; 14:e10990. PMID: 38352201; PMCID: PMC10862172; DOI: 10.1002/ece3.10990. Received: 09/26/2023; Revised: 01/12/2024; Accepted: 01/30/2024. Open access.
Abstract
Crane populations are declining worldwide, with anthropogenically exacerbated habitat loss emerging as the primary threat. The endangered Grey Crowned Crane (Balearica regulorum) is the least studied of the three crane species that reside in southern Africa. This data paucity hinders essential conservation planning and stems primarily from ineffective monitoring methods and the species' use of inaccessible habitats. In this study, we compared the behavioural responses of different Grey Crowned Crane social groupings to traditional on-foot monitoring methods and the pioneering use of drones. Grey Crowned Cranes showed a lower tolerance for on-foot approaches, permitting closer monitoring with drones (22.72 m; 95% confidence interval 13.75–37.52) than on foot (97.59 m; 95% CI 86.13–110.59) before displaying evasive behaviours. The behavioural response of flocks was minimal at flight heights above 50 m, whilst larger flocks were more likely to display evasive behaviours in response to monitoring by either method. Families displayed the fewest evasive behaviours toward lower flights, whereas nesting birds were sensitive to the angles of drone approaches. Altogether, our findings confirm the usefulness of drones for monitoring wetland-nesting species and provide valuable species-specific guidelines for monitoring Grey Crowned Cranes. However, we caution future studies on wetland-breeding birds to develop species-specific protocols before implementing drone methodologies.
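As a worked aside on the confidence intervals reported above: a percentile bootstrap is one simple, assumption-light way to put an interval around a mean approach distance. The sketch below uses hypothetical flush distances, not the study's data or its exact statistical model:

```python
import random

def bootstrap_mean_ci(sample, n_boot=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean: resample with replacement,
    then take the empirical alpha/2 and 1-alpha/2 quantiles of the
    resampled means."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(sample, k=len(sample))) / len(sample)
        for _ in range(n_boot))
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical flush distances (m) for drone approaches -- illustrative only.
drone_dists = [15, 18, 20, 22, 25, 26, 28, 30, 35, 40]
low, high = bootstrap_mean_ci(drone_dists)
```

With only ten observations the interval is wide; the asymmetric intervals in the abstract suggest a model-based (likely GLMM) estimate rather than this simple scheme.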
Affiliation(s)
- Carmen R. Demmer
- Department of Life and Consumer Sciences, University of South Africa, Johannesburg, South Africa
- Trevor McIntyre
- Department of Life and Consumer Sciences, University of South Africa, Johannesburg, South Africa
4. Sirimewan D, Bazli M, Raman S, Mohandes SR, Kineber AF, Arashpour M. Deep learning-based models for environmental management: Recognizing construction, renovation, and demolition waste in-the-wild. J Environ Manage 2024; 351:119908. PMID: 38169254; DOI: 10.1016/j.jenvman.2023.119908. Received: 09/02/2023; Revised: 12/04/2023; Accepted: 12/11/2023.
Abstract
The construction industry generates a substantial volume of solid waste, much of it destined for landfill, causing significant environmental pollution. Waste recycling is central to managing this waste yet remains challenging due to labor-intensive sorting processes and the diverse forms waste can take. Deep learning (DL) models have made remarkable strides in automating domestic waste recognition and sorting. However, their application to waste derived from construction, renovation, and demolition (CRD) activities remains limited because previous studies were context-specific. This paper aims to realistically capture the complexity of waste streams in the CRD context. The study encompasses collecting and annotating CRD waste images in real-world, uncontrolled environments, then evaluates the performance of state-of-the-art DL models for automatically recognizing CRD waste in-the-wild. Several pre-trained networks are utilized for effective feature extraction and transfer learning during DL model training. The results demonstrated that DL models, whether built on larger or lightweight backbone networks, can recognize the composition of CRD waste streams in-the-wild, which is useful for automated waste sorting. The study's outcome emphasizes the applicability of DL models to recognizing and sorting solid waste across various industrial domains, thereby contributing to resource recovery and supporting environmental management efforts.
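For readers who want to quantify how well a model recognizes waste-stream composition, per-class precision and recall are the usual starting point. A minimal sketch with hypothetical CRD category labels (illustrative only; not the paper's models or data):

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Per-class precision and recall from parallel label lists."""
    classes = sorted(set(y_true) | set(y_pred))
    correct = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    pred_count = Counter(y_pred)
    true_count = Counter(y_true)
    return {
        c: {
            "precision": correct[c] / pred_count[c] if pred_count[c] else 0.0,
            "recall": correct[c] / true_count[c] if true_count[c] else 0.0,
        }
        for c in classes
    }

# Hypothetical CRD waste categories -- illustrative labels only.
truth = ["concrete", "brick", "timber", "concrete", "metal", "brick"]
preds = ["concrete", "brick", "timber", "brick", "metal", "brick"]
metrics = per_class_metrics(truth, preds)
```

Reporting these per class, rather than a single accuracy figure, exposes which waste fractions a sorter actually confuses.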
Affiliation(s)
- Diani Sirimewan
- Department of Civil Engineering, Monash University, Melbourne, Australia.
- Milad Bazli
- Faculty of Science and Technology, Charles Darwin University, Australia.
- Sudharshan Raman
- Civil Engineering Discipline, School of Engineering, Monash University, Malaysia.
- Ahmed Farouk Kineber
- Department of Civil Engineering, Prince Sattam Bin Abdulaziz University, Saudi Arabia.
- Mehrdad Arashpour
- Department of Civil Engineering, Monash University, Melbourne, Australia.
5. Tian T. Visual image design of the internet of things based on AI intelligence. Heliyon 2023; 9:e22845. PMID: 38125525; PMCID: PMC10731056; DOI: 10.1016/j.heliyon.2023.e22845. Received: 07/13/2023; Revised: 11/18/2023; Accepted: 11/21/2023. Open access.
Abstract
Visual object detection has emerged as a critical technology for Unmanned Aerial Vehicle (UAV) applications due to advances in computer vision. Driven by developments in communication technology, UAVs increasingly need to act autonomously, gathering data and then making decisions. These trends have brought visual image detection to cutting-edge levels in health care, transportation, energy, monitoring, security, and manufacturing. Key challenges include coordinating communication via the IoT, sustaining the IoT network, and optimizing path planning. Because of their limited battery life, these devices have a restricted communication range. A UAV can be viewed as a terminal device connected to a large network in which a swarm of other UAVs coordinates motions, directs one another, and keeps watch over locations beyond any single UAV's visual range. One essential component of UAV-based applications is the ability to recognize objects of interest in aerial photographs taken by UAVs. Object detection in such photographs is challenging, however, because the size of objects in these images can vary greatly. This study presents the Detection of Visual Images by UAVs (DVI-UAV) using the IoT and Artificial Intelligence (AI), building on the DSYolov3 model, which was proposed to address these problems in the UAV industry. By fusing channel-wise features across multiple scales with a spatial pyramid pooling approach, the study creates a novel module, Multi-scale Fusion of Channel Attention (MFCAM), for scale-variant object identification tasks. The method's effectiveness and efficiency were thoroughly tested and evaluated experimentally. The proposed method outperforms most current detectors while keeping the models usable on UAVs, achieving success rates of 95 % for visual image detection, 94 % for computation cost, 97 % for accuracy, and 95 % for effectiveness.
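The spatial pyramid pooling step mentioned above can be sketched in a few lines: max-pool a feature map over progressively finer grids and concatenate the results into one fixed-length vector. This shows plain spatial pyramid pooling on a single 2-D map, not the MFCAM module itself:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (H, W) feature map over 1x1, 2x2 and 4x4 grids and
    concatenate the cell maxima into one fixed-length vector, so inputs
    of any spatial size yield the same output length."""
    pooled = []
    for n in levels:
        # Split both axes into n roughly equal bins; take the max of each cell.
        for rows in np.array_split(feature_map, n, axis=0):
            for cell in np.array_split(rows, n, axis=1):
                pooled.append(cell.max())
    return np.array(pooled)

fmap = np.arange(64, dtype=float).reshape(8, 8)
vec = spatial_pyramid_pool(fmap)  # length 1 + 4 + 16 = 21
```

The fixed output length is what lets detection heads accept feature maps from objects (or images) of varying size, which is the scale-variance problem the abstract describes.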
Affiliation(s)
- Tian Tian
- College of Fine Arts and Design, Mudanjiang Normal University, Mudanjiang, 157011, Heilongjiang, China
6. Krishnan BS, Jones LR, Elmore JA, Samiappan S, Evans KO, Pfeiffer MB, Blackwell BF, Iglay RB. Fusion of visible and thermal images improves automated detection and classification of animals for drone surveys. Sci Rep 2023; 13:10385. PMID: 37369669; PMCID: PMC10300091; DOI: 10.1038/s41598-023-37295-7. Received: 04/18/2023; Accepted: 06/19/2023. Open access.
Abstract
Visible and thermal images acquired from drones (unoccupied aircraft systems) have substantially improved animal monitoring. Combining complementary information from both image types provides a powerful approach for automating detection and classification of multiple animal species to augment drone surveys. We compared eight image fusion methods using thermal and visible drone images combined with two supervised deep learning models, to evaluate the detection and classification of white-tailed deer (Odocoileus virginianus), domestic cow (Bos taurus), and domestic horse (Equus caballus). We classified visible and thermal images separately and compared them with the results of image fusion. Fused images provided minimal improvement for cows and horses compared to visible images alone, likely because the size, shape, and color of these species made them conspicuous against the background. For white-tailed deer, which were typically cryptic against their backgrounds and often in shadows in visible images, the added information from thermal images improved detection and classification in fusion methods from 15 to 85%. Our results suggest that image fusion is ideal for surveying animals inconspicuous from their backgrounds, and our approach uses few image pairs to train compared to typical machine-learning methods. We discuss computational and field considerations to improve drone surveys using our fusion approach.
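Of the fusion approaches compared above, pixel-wise weighted averaging is among the simplest. A minimal sketch of that generic scheme (toy arrays; not necessarily one of the eight methods the study evaluated):

```python
import numpy as np

def weighted_fusion(visible, thermal, alpha=0.5):
    """Pixel-wise weighted average of co-registered visible and thermal
    images, one of the simplest fusion schemes. Both arrays must share a
    shape and be scaled to [0, 1]; alpha weights the visible channel."""
    if visible.shape != thermal.shape:
        raise ValueError("images must be co-registered to the same shape")
    return alpha * visible + (1.0 - alpha) * thermal

# Toy 2x2 'images': a bright visible patch and a hot thermal spot.
vis = np.array([[0.8, 0.2], [0.4, 0.6]])
thm = np.array([[0.1, 0.9], [0.5, 0.3]])
fused = weighted_fusion(vis, thm)
```

The intuition matches the deer result: where an animal is cryptic in the visible band, the thermal term lifts its pixels above the background in the fused image.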
Affiliation(s)
- B Santhana Krishnan
- Geosystems Research Institute, Mississippi State University, Mississippi State, MS, 39762, USA
- Landon R Jones
- Department of Wildlife, Fisheries, and Aquaculture, Mississippi State University, Box 9690, Mississippi State, MS, 39762, USA
- Jared A Elmore
- Department of Wildlife, Fisheries, and Aquaculture, Mississippi State University, Box 9690, Mississippi State, MS, 39762, USA
- Department of Forestry and Environmental Conservation, Clemson University, Clemson, SC, 29634, USA
- Sathishkumar Samiappan
- Geosystems Research Institute, Mississippi State University, Mississippi State, MS, 39762, USA
- Kristine O Evans
- Department of Wildlife, Fisheries, and Aquaculture, Mississippi State University, Box 9690, Mississippi State, MS, 39762, USA
- Morgan B Pfeiffer
- U.S. Department of Agriculture, Animal and Plant Health Inspection Service, Wildlife Services, National Wildlife Research Center, Ohio Field Station, Sandusky, OH, 44870, USA
- Bradley F Blackwell
- U.S. Department of Agriculture, Animal and Plant Health Inspection Service, Wildlife Services, National Wildlife Research Center, Ohio Field Station, Sandusky, OH, 44870, USA
- Raymond B Iglay
- Department of Wildlife, Fisheries, and Aquaculture, Mississippi State University, Box 9690, Mississippi State, MS, 39762, USA.
7. Arashpour M. AI explainability framework for environmental management research. J Environ Manage 2023; 342:118149. PMID: 37187074; DOI: 10.1016/j.jenvman.2023.118149. Received: 03/21/2023; Revised: 05/08/2023; Accepted: 05/09/2023.
Abstract
Deep learning networks powered by AI are essential predictive tools, relying on image data availability and advances in processing hardware. However, little attention has been paid to explainable AI (XAI) in application fields, including environmental management. This study develops an explainability framework with a triadic structure focused on input, AI model, and output. The framework provides three main contributions: (1) a context-based augmentation of input data to maximize generalizability and minimize overfitting; (2) direct monitoring of AI model layers and parameters to enable leaner (lighter) networks suitable for edge-device deployment; and (3) an output-explanation procedure focusing on the interpretability and robustness of predictive decisions by AI networks. These contributions significantly advance the state of the art in XAI for environmental management research, with implications for improved understanding and utilization of AI networks in this field.
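One common output-explanation technique in this spirit is occlusion sensitivity: mask image regions one at a time and measure how much the model's score drops. A minimal sketch with a toy scoring function (illustrative only; not this framework's procedure):

```python
import numpy as np

def occlusion_sensitivity(image, predict, patch=2, baseline=0.0):
    """Slide an occluding patch over a (H, W) image and record how much
    the model's score drops when each region is masked -- larger drops
    mark regions the prediction depends on."""
    base_score = predict(image)
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = base_score - predict(occluded)
    return heatmap

# Toy 'model': scores an image by the mean of its top-left quadrant,
# so only that region should light up in the heatmap.
score_fn = lambda img: float(img[:2, :2].mean())
img = np.ones((4, 4))
heat = occlusion_sensitivity(img, score_fn)
```

Because it treats the network as a black box, this kind of probe applies to any predictor regardless of architecture, which is one reason output-side explanation methods are popular in applied fields.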
Affiliation(s)
- Mehrdad Arashpour
- Department of Civil Engineering, Monash University, Melbourne, VIC, 3800, Australia.