1
Buriboev AS, Abduvaitov A, Jeon HS. Integrating Color and Contour Analysis with Deep Learning for Robust Fire and Smoke Detection. Sensors (Basel, Switzerland) 2025; 25:2044. [PMID: 40218557] [PMCID: PMC11991653] [DOI: 10.3390/s25072044]
Abstract
Detecting fire and smoke is essential for maintaining safety in urban, industrial, and outdoor settings. This study proposes a concatenated convolutional neural network (CNN) model that combines deep learning with hybrid preprocessing methods, namely contour-based algorithms and color-characteristics analysis, to provide reliable and accurate fire and smoke detection. The technique was assessed on the D-Fire dataset, a benchmark covering a variety of situations, including dynamic surroundings and changing illumination. Experiments show that the proposed model outperforms both conventional techniques and state-of-the-art YOLO-based methods, achieving an accuracy of 0.989 and a recall of 0.983. To reduce false positives and false negatives, the hybrid architecture uses preprocessing to enhance regions of interest (ROIs), while pooling and fully connected layers provide computational efficiency and generalization. In contrast to current approaches, which frequently concentrate only on fire detection, the model's dual smoke and fire detection capability increases its adaptability. Although preprocessing adds a small computational overhead, the method's high accuracy and resilience make it a dependable option for safety-critical real-world applications. This study sets a new standard for smoke and fire detection and provides a route forward for future developments in this crucial area.
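As a rough illustration of the color-and-contour preprocessing stage described in this abstract, the OpenCV sketch below thresholds flame-like colors and keeps sufficiently large contours as candidate ROIs for the CNN stage. The HSV ranges, morphology kernel, and minimum contour area are illustrative assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

def fire_roi_candidates(bgr_image):
    """Candidate fire ROIs from color thresholding plus contour analysis (sketch)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Flame-like pixels: reddish-to-yellow hues with high saturation/brightness (assumed ranges).
    mask = cv2.inRange(hsv, np.array([0, 120, 180]), np.array([35, 255, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for contour in contours:
        if cv2.contourArea(contour) > 50:             # drop tiny blobs (assumed threshold)
            x, y, w, h = cv2.boundingRect(contour)
            rois.append(bgr_image[y:y + h, x:x + w])  # crop for the CNN classification stage
    return rois
```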
Affiliation(s)
- Akmal Abduvaitov
- Department of IT, Samarkand Branch of Tashkent University of Information Technologies, Samarkand 100084, Uzbekistan
- Heung Seok Jeon
- Department of Computer Engineering, Konkuk University, Chungju 27478, Republic of Korea
2
Buriboev AS, Rakhmanov K, Soqiyev T, Choi AJ. Improving Fire Detection Accuracy through Enhanced Convolutional Neural Networks and Contour Techniques. Sensors (Basel, Switzerland) 2024; 24:5184. [PMID: 39204881] [PMCID: PMC11360108] [DOI: 10.3390/s24165184]
Abstract
In this study, a novel method combining contour analysis with a deep CNN is applied for fire detection. The method relies on two main algorithms: one that detects the color properties of fires, and another that analyzes their shape through contour detection. To overcome the disadvantages of previous methods, we generated a new labeled dataset consisting of small fire instances and complex scenarios. We elaborated the dataset by selecting regions of interest (ROIs) for small fires and complex environmental traits, extracted through color characteristics and contour analysis, to better train our model on these more intricate features. Experimental results showed that our improved CNN model outperformed other networks, with an accuracy, precision, recall, and F1 score of 99.4%, 99.3%, 99.4%, and 99.5%, respectively. The new approach improves on the previous CNN model in all metrics and also beats many other state-of-the-art methods, including dilated CNNs (98.1% accuracy), Faster R-CNN (97.8% accuracy), and ResNet (94.3%). These results suggest that the approach can be beneficial for a variety of safety and security applications, ranging from home and business to industrial and outdoor settings.
Affiliation(s)
- Abror Shavkatovich Buriboev
- School of Computing, Department of AI-Software, Gachon University, Seongnam-si 13306, Republic of Korea
- Department of Infocommunication Engineering, Tashkent University of Information Technologies, Tashkent 100084, Uzbekistan
- Khoshim Rakhmanov
- Department of Digital and Educational Technologies, Samarkand Branch of Tashkent University of Information Technologies, Samarkand 140100, Uzbekistan
- Temur Soqiyev
- Digital Technologies and Artificial Intelligence Research Institute, Tashkent 100125, Uzbekistan
- Andrew Jaeyong Choi
- School of Computing, Department of AI-Software, Gachon University, Seongnam-si 13306, Republic of Korea
3
Zhang Z, Tan L, Robert TLK. An Improved Fire and Smoke Detection Method Based on YOLOv8n for Smart Factories. Sensors (Basel, Switzerland) 2024; 24:4786. [PMID: 39123833] [PMCID: PMC11314977] [DOI: 10.3390/s24154786]
Abstract
Factories play a crucial role in economic and social development. However, fire disasters in factories greatly threaten both human lives and property. Previous studies on fire detection using deep learning have mostly focused on wildfire detection and ignored fires that occur in factories. In addition, many studies focus only on fire detection, while smoke, an important derivative of a fire disaster, is not detected by such algorithms. To better help smart factories monitor fire disasters, this paper proposes an improved fire and smoke detection method based on YOLOv8n. To ensure the quality of the algorithm and training process, a self-made dataset including more than 5000 images and their corresponding labels was created. Nine advanced algorithms were then selected and tested on the dataset, with YOLOv8n exhibiting the best detection results in terms of accuracy and detection speed. ConvNeXtV2 is inserted into the backbone to enhance inter-channel feature competition, and RepBlock and SimConv replace the original Conv to improve computational ability and memory bandwidth. For the loss function, CIoU is replaced by MPDIoU to ensure efficient and accurate bounding-box regression. Ablation tests show that the improved algorithm achieves better performance in all four accuracy metrics: precision, recall, F1, and mAP@50. Compared with the original model, whose four metrics are approximately 90%, the modified algorithm achieves above 95%; mAP@50 in particular reaches 95.6%, an improvement of approximately 4.5%. Although model complexity increases, the requirements of real-time fire and smoke monitoring are still satisfied.
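The MPDIoU box similarity named above (used as 1 − MPDIoU in the regression loss) can be sketched in PyTorch as follows; this follows the published MPDIoU definition as generally described and is not code from the paper.

```python
import torch

def mpdiou(pred, target, img_w, img_h, eps=1e-7):
    """MPDIoU for boxes in (x1, y1, x2, y2) format; the loss would be 1 - mpdiou (sketch)."""
    ix1 = torch.max(pred[..., 0], target[..., 0])
    iy1 = torch.max(pred[..., 1], target[..., 1])
    ix2 = torch.min(pred[..., 2], target[..., 2])
    iy2 = torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Squared corner distances, normalised by the squared image diagonal.
    d1 = (pred[..., 0] - target[..., 0]) ** 2 + (pred[..., 1] - target[..., 1]) ** 2
    d2 = (pred[..., 2] - target[..., 2]) ** 2 + (pred[..., 3] - target[..., 3]) ** 2
    diag_sq = img_w ** 2 + img_h ** 2
    return iou - d1 / diag_sq - d2 / diag_sq
```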
Affiliation(s)
- Tiong Lee Kong Robert
- School of Civil & Environmental Engineering, Nanyang Technological University, Singapore 639798, Singapore
4
Lv C, Zhou H, Chen Y, Fan D, Di F. A lightweight fire detection algorithm for small targets based on YOLOv5s. Sci Rep 2024; 14:14104. [PMID: 38890493] [PMCID: PMC11189544] [DOI: 10.1038/s41598-024-64934-4]
Abstract
In response to the challenges current fire detection algorithms encounter, including low detection accuracy and limited recognition rates for small fire targets in complex environments, we present a lightweight fire detection algorithm based on an improved YOLOv5s. The introduction of the CoT (Contextual Transformer) structure into the backbone network, along with the creation of the novel CSP1_CoT (Cross Stage Partial 1_Contextual Transformer) module, effectively reduces the model's parameter count while enhancing the feature extraction and fusion capabilities of the backbone. The network's Neck has been extended with a dedicated detection layer tailored for small targets and the SE (Squeeze-and-Excitation) attention mechanism; while minimizing parameter growth, this significantly strengthens the interaction of multi-feature information and enhances small-target detection. Replacing the original loss function with the Focal-EIoU (Focal-Efficient IoU) loss yields a further improvement in the model's convergence speed and precision. The experimental results indicate that the modified model achieves an mAP@0.5 of 96% and an accuracy of 94.8%, improvements of 8.8% and 8.9%, respectively, over the original model. Furthermore, the model's parameter count has been reduced by 1.1%, resulting in a compact model size of only 14.6 MB. The detection speed reaches 85 FPS (frames per second), satisfying real-time detection requirements. This gain in precision and accuracy, achieved while meeting real-time and lightweight constraints, effectively caters to the demands of fire detection.
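For reference, a generic PyTorch sketch of the SE (Squeeze-and-Excitation) block mentioned in the abstract is shown below; the reduction ratio of 16 is a common default assumed here, not necessarily the paper's setting.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (generic sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight feature maps channel-wise
```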
Affiliation(s)
- Changzhi Lv
- National Experimental Teaching Demonstration Center for Electrical Engineering and Electronics, College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Haiyong Zhou
- College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, Shandong, China
- Yu Chen
- National Experimental Teaching Demonstration Center for Electrical Engineering and Electronics, College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Di Fan
- College of Electronic Information Engineering, Shandong University of Science and Technology, Qingdao, Shandong, China
- Fangyi Di
- College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, Shandong, China
5
Cheng H, Zhu J, Wang S, Yan K, Wang H. Firefighting Water Jet Trajectory Detection from Unmanned Aerial Vehicle Imagery Using Learnable Prompt Vectors. Sensors (Basel, Switzerland) 2024; 24:3553. [PMID: 38894344] [PMCID: PMC11175223] [DOI: 10.3390/s24113553]
Abstract
This research presents an innovative methodology for monitoring jet trajectory during the jetting process using imagery captured by unmanned aerial vehicles (UAVs). The approach integrates UAV imagery with an offline learnable prompt vector module (OPVM) to enhance trajectory monitoring accuracy and stability. Using a high-resolution camera mounted on a UAV, image enhancement is applied to address geometric and photometric distortion in jet trajectory images, and a Faster R-CNN network is deployed to detect objects within the images and precisely identify the jet trajectory within the video stream. The offline learnable prompt vector module is then incorporated to further refine trajectory predictions, improving monitoring accuracy and stability. In particular, the module learns not only the visual characteristics of the jet trajectory but also its textual features, adopting a bimodal approach to trajectory analysis; because it is trained offline, it adds minimal memory and computational overhead. Experimental findings underscore the method's precision of 95.4% and its efficiency in monitoring jet trajectory, laying a solid foundation for advancements in trajectory detection and tracking. This methodology holds significant potential for application in firefighting systems and industrial processes, offering a robust framework for dynamic trajectory monitoring and augmenting computer vision capabilities in practical scenarios.
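The Faster R-CNN detection stage can be sketched with torchvision's off-the-shelf detector; the COCO-pretrained weights, frame path, and score threshold below are placeholders, since the authors trained their own model and add a prompt-vector module on top of it.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic Faster R-CNN inference (torchvision >= 0.13); a fine-tuned jet-trajectory
# checkpoint would replace the COCO weights in practice.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("uav_frame.jpg"))   # hypothetical UAV frame
with torch.no_grad():
    pred = model([frame])[0]                     # dict of boxes, labels, scores
keep = pred["scores"] > 0.5                      # assumed confidence threshold
print(pred["boxes"][keep], pred["labels"][keep])
```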
Affiliation(s)
- Hengyu Cheng
- School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
- Jinsong Zhu
- School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
- China Academy of Safety Science and Technology, Beijing 100012, China
- Shenzhen Research Institute of China University of Mining and Technology, Shenzhen 518057, China
- Sining Wang
- School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
- Ke Yan
- School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
- Haojie Wang
- School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
6
Kim SY, Mukhiddinov M. Data Anomaly Detection for Structural Health Monitoring Based on a Convolutional Neural Network. Sensors (Basel, Switzerland) 2023; 23:8525. [PMID: 37896618] [PMCID: PMC10611100] [DOI: 10.3390/s23208525]
Abstract
Structural health monitoring (SHM) has been used extensively in civil infrastructure for several decades. The status of civil constructions is monitored in real time using a wide variety of sensors; however, determining the true state of a structure can be difficult due to abnormalities in the acquired data, commonly caused by extreme weather, faulty sensors, and structural damage. For civil structure monitoring to be successful, abnormalities must be detected quickly. In addition, one form of abnormality generally predominates in SHM data, creating a class imbalance that severely hampers current anomaly detection; even cutting-edge damage diagnostic methods are of little use without proper data-cleansing processes. To solve this problem, this study proposes a hyper-parameter-tuned convolutional neural network (CNN) for multiclass, imbalanced anomaly detection. The 1D CNN model is tested on a multiclass time series of anomaly data from a real-world cable-stayed bridge, with the dataset balanced by data augmentation as necessary. By enlarging and balancing the dataset in this way, an overall accuracy of 97.6% was achieved.
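A minimal 1D-CNN classifier of the kind described could look like the PyTorch sketch below; the layer sizes, sequence length, and six anomaly classes are illustrative assumptions rather than the paper's tuned hyper-parameters.

```python
import torch.nn as nn

class AnomalyCNN1D(nn.Module):
    """Toy 1D CNN for classifying SHM time-series anomaly types (sketch)."""
    def __init__(self, n_classes=6, seq_len=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (seq_len // 16), n_classes)

    def forward(self, x):                 # x: (batch, 1, seq_len) sensor windows
        return self.classifier(self.features(x).flatten(1))
```

Class imbalance would then be handled before training, for example by augmenting windows of the minority anomaly types until the classes are roughly balanced, as the abstract describes.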
Affiliation(s)
- Soon-Young Kim
- Department of Physical Education, Gachon University, Seongnam 13120, Republic of Korea
- Mukhriddin Mukhiddinov
- Department of Communication and Digital Technologies, University of Management and Future Technologies, Tashkent 100208, Uzbekistan
7
Saydirasulovich SN, Mukhiddinov M, Djuraev O, Abdusalomov A, Cho YI. An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images. Sensors (Basel, Switzerland) 2023; 23:8374. [PMID: 37896467] [PMCID: PMC10610991] [DOI: 10.3390/s23208374]
Abstract
Forest fires rank among the costliest and deadliest natural disasters globally. Identifying the smoke generated by forest fires is pivotal for the prompt suppression of developing fires. Nevertheless, existing techniques for detecting forest fire smoke face persistent issues, including a slow identification rate, suboptimal detection accuracy, and difficulty distinguishing smoke originating from small sources. This study presents an enhanced YOLOv8 model customized to unmanned aerial vehicle (UAV) images to address these challenges and attain higher detection precision. First, the research incorporates Wise-IoU (WIoU) v3 as the bounding-box regression loss, together with a gradient allocation strategy that prioritizes samples of common quality; this enhances the model's capacity for precise localization. Second, the conventional convolution in the intermediate neck layer is substituted with the Ghost Shuffle Convolution mechanism, which reduces model parameters and expedites convergence. Third, recognizing the challenge of capturing the salient features of forest fire smoke within intricate wooded settings, the study introduces the BiFormer attention mechanism, which directs the model's attention toward the feature intricacies of forest fire smoke while suppressing irrelevant, non-target background information. The experimental findings highlight the enhanced YOLOv8 model's effectiveness in smoke detection, achieving an average precision (AP) of 79.4%, a notable 3.3% improvement over the baseline. The model also reaches robust values of 71.3% and 92.6% for average precision small (APS) and average precision large (APL), respectively.
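The Ghost Shuffle Convolution idea referenced above, with half of the output channels produced by a cheap depthwise convolution followed by a channel shuffle, can be sketched as below; this reflects the general GSConv concept and is not the paper's exact module.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """Ghost-Shuffle convolution sketch: standard conv for half the channels,
    cheap depthwise conv for the other half, then a channel shuffle."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        c_half = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        a = self.primary(x)
        b = self.cheap(a)
        y = torch.cat([a, b], dim=1)
        n, c, h, w = y.shape                  # channel shuffle: interleave the two halves
        return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
```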
Affiliation(s)
- Mukhriddin Mukhiddinov
- Department of Communication and Digital Technologies, University of Management and Future Technologies, Tashkent 100208, Uzbekistan
- Oybek Djuraev
- Department of Communication and Digital Technologies, University of Management and Future Technologies, Tashkent 100208, Uzbekistan
- Akmalbek Abdusalomov
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young-Im Cho
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
8
Avazov K, Jamil MK, Muminov B, Abdusalomov AB, Cho YI. Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches. Sensors (Basel, Switzerland) 2023; 23:7078. [PMID: 37631614] [PMCID: PMC10458310] [DOI: 10.3390/s23167078]
Abstract
Fire incidents onboard ships can have extensive and severe consequences for the safety of the crew, the cargo, the environment, finances, reputation, and more. Timely detection of fires is therefore essential for quick response and effective mitigation. This paper presents a fire detection technique based on YOLOv7 (You Only Look Once version 7), incorporating improved deep learning algorithms. The YOLOv7 architecture, with an improved E-ELAN (extended efficient layer aggregation network) as its backbone, serves as the basis of our fire detection system; its enhanced feature fusion technique makes it superior to its predecessors. To train the model, we collected 4622 images of various ship scenarios and applied data augmentation techniques such as rotation, horizontal and vertical flips, and scaling. Through rigorous evaluation, the model showcases enhanced fire recognition capabilities that improve maritime safety, achieving an accuracy of 93% in detecting fires and thus helping to minimize catastrophic incidents. Objects visually similar to fire may lead to false predictions and detections, but this can be controlled by expanding the dataset. The model can be utilized as a real-time fire detector in challenging environments and for small-object detection, and it exemplifies how advancements in deep learning can enhance safety measures. Experimental results showed that the proposed method can be used successfully for the protection of ships and for monitoring fires in ship port areas. Finally, we compared the performance of our method with recently reported fire-detection approaches using widely used performance metrics to test the fire classification results achieved.
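The augmentation step (rotation, horizontal and vertical flips, scaling) could be reproduced with a torchvision pipeline such as the one below; the rotation angle and scale range are assumed values, not the settings used in the paper.

```python
from torchvision import transforms

# Illustrative augmentation pipeline for the ship-fire training images.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                 # assumed rotation range
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomAffine(degrees=0, scale=(0.8, 1.2)),  # assumed scaling range
    transforms.ToTensor(),
])
# augmented = augment(pil_image)  # applied to each PIL training image
```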
Affiliation(s)
- Kuldoshbay Avazov
- Department of Computer Engineering, Gachon University, Seongnam-si 461-701, Republic of Korea
- Muhammad Kafeel Jamil
- Department of Computer Engineering, Gachon University, Seongnam-si 461-701, Republic of Korea
- Bahodir Muminov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young-Im Cho
- Department of Computer Engineering, Gachon University, Seongnam-si 461-701, Republic of Korea
9
Kim SY, Muminov A. Forest Fire Smoke Detection Based on Deep Learning Approaches and Unmanned Aerial Vehicle Images. Sensors (Basel, Switzerland) 2023; 23:5702. [PMID: 37420867] [PMCID: PMC10304711] [DOI: 10.3390/s23125702]
Abstract
Wildfire poses a significant threat and is considered a severe natural disaster, endangering forest resources, wildlife, and human livelihoods. In recent times, the number of wildfire incidents has increased, with both human interference with nature and global warming playing major roles. Rapid identification of a fire from its early smoke can be crucial, as it allows firefighters to respond quickly and prevent the fire from spreading. We therefore propose a refined version of the YOLOv7 model for detecting smoke from forest fires. To begin, we compiled a collection of 6500 UAV pictures of smoke from forest fires. To further enhance YOLOv7's feature extraction capabilities, we incorporated the CBAM attention mechanism. We then added an SPPF+ layer to the network's backbone to better concentrate on smaller wildfire smoke regions. Finally, decoupled heads were introduced into the YOLOv7 model to extract useful information from an array of data, and a BiFPN was used to accelerate multi-scale feature fusion and acquire more specific features. Learnable weights were introduced in the BiFPN so that the network can prioritize the feature maps that most strongly affect the output. The testing findings on our forest fire smoke dataset revealed that the proposed approach successfully detected forest fire smoke with an AP50 of 86.4%, 3.9% higher than previous single- and multiple-stage object detectors.
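The CBAM attention referenced above applies channel attention followed by spatial attention; a generic PyTorch sketch is given below, with the reduction ratio and 7x7 spatial kernel as assumed defaults rather than the paper's values.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention (sketch)."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention from globally average- and max-pooled descriptors.
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # Spatial attention from per-pixel mean and max over channels.
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa
```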
Affiliation(s)
- Soon-Young Kim
- Department of Physical Education, Gachon University, Seongnam 13120, Republic of Korea
- Azamjon Muminov
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
10
Norkobil Saydirasulovich S, Abdusalomov A, Jamil MK, Nasimov R, Kozhamzharova D, Cho YI. A YOLOv6-Based Improved Fire Detection Approach for Smart City Environments. Sensors (Basel, Switzerland) 2023; 23:3161. [PMID: 36991872] [PMCID: PMC10051218] [DOI: 10.3390/s23063161]
Abstract
Authorities and policymakers in Korea have recently prioritized improving fire prevention and emergency response, and governments seek to enhance community safety by constructing automated fire detection and identification systems. This study examined the efficacy of YOLOv6, an object identification system running on an NVIDIA GPU platform, in identifying fire-related items. Using metrics such as object identification speed and accuracy, as well as performance in time-sensitive real-world applications, we analyzed the influence of YOLOv6 on fire detection and identification efforts in Korea. We conducted trials using a fire dataset comprising 4000 photos collected through Google, YouTube, and other resources to evaluate the viability of YOLOv6 in fire recognition and detection tasks. According to the findings, YOLOv6's object identification performance was 0.98, with a typical recall of 0.96 and a precision of 0.83, and the system achieved an MAE of 0.302%. These findings suggest that YOLOv6 is an effective technique for detecting and identifying fire-related items in photos. Multi-class object recognition using random forests, k-nearest neighbors, support vector machines, logistic regression, naive Bayes, and XGBoost was performed on the SFSC data to evaluate the system's capacity to identify fire-related objects. For fire-related objects, XGBoost achieved the highest object identification accuracy, with values of 0.717 and 0.767, followed by random forest with values of 0.468 and 0.510. Finally, we tested YOLOv6 in a simulated fire evacuation scenario to gauge its practicality in emergencies; it accurately identified fire-related items in real time within a response time of 0.66 s. Therefore, YOLOv6, combined with the XGBoost classifier for object identification, is a viable and effective option for fire detection and recognition in Korea.
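The multi-class comparison in which XGBoost performed best could be set up along the following lines; the synthetic features, class count, and hyper-parameters are placeholders, since the SFSC data and exact settings are not given in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for the SFSC fire-object feature vectors (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)  # assumed settings
clf.fit(X_tr, y_tr)
print("multi-class accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```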
Affiliation(s)
- Akmalbek Abdusalomov
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
- Muhammad Kafeel Jamil
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
- Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Dinara Kozhamzharova
- Department of Information System, International Information Technology University, Almaty 050000, Kazakhstan
- Young-Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
11
Abdusalomov AB, Islam BMDS, Nasimov R, Mukhiddinov M, Whangbo TK. An Improved Forest Fire Detection Method Based on the Detectron2 Model and a Deep Learning Approach. Sensors (Basel, Switzerland) 2023; 23:1512. [PMID: 36772551] [PMCID: PMC9920160] [DOI: 10.3390/s23031512]
Abstract
With an increase in both global warming and the human population, forest fires have become a major global concern, potentially leading to climatic shifts and the greenhouse effect, among other adverse outcomes. Human activities have caused a disproportionate number of forest fires, and fast detection with high accuracy is the key to controlling such unexpected events. To address this, we propose an improved forest fire detection method that classifies fires based on a new version of the Detectron2 platform (a ground-up rewrite of the Detectron library) using deep learning approaches. Furthermore, a custom dataset of 5200 images was created and labeled to train the model, which achieved higher precision than the other models. This robust result was obtained by improving the Detectron2 model in various experimental scenarios with the custom dataset. The proposed model can detect small fires over long distances during the day and night; long-distance detection of the object of interest is a particular advantage of the Detectron2 algorithm. The experimental results prove that the proposed forest fire detection method successfully detects fires with an improved precision of 99.3%.
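Running a Detectron2 detector over frames follows the usual config-plus-predictor pattern sketched below; the config file choice, single fire class, checkpoint path, and score threshold are assumptions standing in for the authors' trained model.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1              # a single "fire" class (assumption)
cfg.MODEL.WEIGHTS = "path/to/fire_model.pth"     # hypothetical fine-tuned weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5      # assumed confidence threshold

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("forest_frame.jpg"))   # hypothetical frame
print(outputs["instances"].pred_boxes, outputs["instances"].scores)
```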
Affiliation(s)
- Bappy MD Siful Islam
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
- Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Mukhriddin Mukhiddinov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
12
Mukhiddinov M, Abdusalomov AB, Cho J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors (Basel, Switzerland) 2022; 22:9384. [PMID: 36502081] [PMCID: PMC9740073] [DOI: 10.3390/s22239384]
Abstract
Wildfire is one of the most significant dangers and most serious natural catastrophes, endangering forest resources, animal life, and the human economy. Recent years have witnessed a rise in wildfire incidents, with persistent human interference with the natural environment and global warming as the two main factors. Early detection of fire ignition from initial smoke can help firefighters react to blazes before they become difficult to handle. Previous deep-learning approaches for wildfire smoke detection have been hampered by small or untrustworthy datasets, making it challenging to extrapolate their performance to real-world scenarios. In this study, we propose an early wildfire smoke detection system using unmanned aerial vehicle (UAV) images based on an improved YOLOv5. First, we curated a 6000-image wildfire dataset from existing UAV images. Second, we optimized the anchor box clustering using the K-means++ technique to reduce classification errors. Third, we improved the network's backbone with a spatial pyramid pooling fast-plus layer to concentrate on small-sized wildfire smoke regions. Fourth, a bidirectional feature pyramid network was applied to obtain more accessible and faster multi-scale feature fusion. Finally, network pruning and transfer learning approaches were implemented to refine the network architecture, improve detection speed, and correctly identify small-scale wildfire smoke areas. The experimental results show that the proposed method achieved an average precision of 73.6% and outperformed other one- and two-stage object detectors on a custom image dataset.
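The anchor-clustering step can be illustrated with scikit-learn's k-means++ initialisation, as in the sketch below; the anchor count and the use of plain Euclidean distance (rather than an IoU-based distance) are simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_boxes(wh, n_anchors=9):
    """Cluster ground-truth (width, height) pairs into anchor boxes (sketch)."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0).fit(wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]   # sort by area, small to large

# wh would be an (N, 2) array of box widths/heights taken from the training labels.
```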
13
Żyluk A, Zieja M, Szelmanowski A, Tomaszewska J, Perlińska M, Głyda K. Electrical Disturbances in Terms of Methods to Reduce False Activation of Aerial Fire Protection Systems. Sensors (Basel, Switzerland) 2022; 22:8059. [PMID: 36298411] [PMCID: PMC9610076] [DOI: 10.3390/s22208059]
Abstract
The paper presents an analysis of false triggers of fire protection systems installed on aircraft. They not only cause task interruption but also have a direct impact on flight safety, forcing the crew to land in a risky area. Simulation models of electronic actuators were developed to determine the conditions under which false alarms occur. Testing of the simulation models was carried out in the computational package Matlab-Simulink and Circum-Maker for different electrical disturbance generation conditions. The simulation of overvoltage, voltage drops and voltage decays in the on-board electrical network supplying the fire protection system, occurring during the start-up of aircraft engines and during the switching on and off of on-board high-power devices, was studied. The conducted studies have practical applications since the simulation results are an important element for planning experimental tests of the SSP-FK-BI executive blocks under electrical disturbance conditions. Based on the simulation and experimental studies, the conditions causing false tripping of the fire protection system and the parameters for selected disturbance factors have been determined.
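As a loose, purely illustrative analogue of the kind of disturbance scenario studied (not the paper's Matlab-Simulink or Circum-Maker models), the toy sketch below generates a DC bus voltage with a brief engine-start dip and checks whether a naive trigger that requires the voltage to stay below a threshold for a hold time would falsely trip; every number is an assumption.

```python
import numpy as np

fs = 10_000                                   # samples per second (assumed)
t = np.arange(0, 2.0, 1 / fs)                 # 2 s of simulated bus voltage
v = np.full_like(t, 28.0)                     # nominal 28 V DC bus (assumed)
v[(t > 0.50) & (t < 0.62)] = 14.0             # 120 ms dip during engine start (assumed)

threshold, hold_time = 18.0, 0.05             # trip if below 18 V for more than 50 ms
below = v < threshold

longest = current = 0                         # longest continuous run below the threshold
for flag in below:
    current = current + 1 if flag else 0
    longest = max(longest, current)

print("false trip:", longest / fs > hold_time)
```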
Affiliation(s)
- Andrzej Żyluk
- Air Force Institute of Technology, 01-494 Warsaw, Poland
- Mariusz Zieja
- Air Force Institute of Technology, 01-494 Warsaw, Poland
14
Abdusalomov AB, Mukhiddinov M, Kutlimuratov A, Whangbo TK. Improved Real-Time Fire Warning System Based on Advanced Technologies for Visually Impaired People. Sensors (Basel, Switzerland) 2022; 22:7305. [PMID: 36236403] [PMCID: PMC9572756] [DOI: 10.3390/s22197305]
Abstract
Early fire detection and notification techniques provide fire prevention and safety information to blind and visually impaired (BVI) people within a short period of time when fires occur in indoor environments. Given its direct impact on human safety and the environment, fire detection is a difficult but crucial problem, and preventing injuries and property damage requires methods that detect fires as quickly as possible. In this study, to reduce the loss of human lives and property damage, we introduce a vision-based early flame recognition and notification approach that uses artificial intelligence to assist BVI people. The proposed fire alarm control system for indoor buildings can provide accurate information on fire scenes. In our proposed method, all processes previously performed manually were automated, and the performance efficiency and quality of fire classification were improved. To perform real-time monitoring and enhance the detection accuracy of indoor fire disasters, the proposed system uses the YOLOv5m model, an updated version of the traditional YOLOv5. The experimental results show that the proposed system successfully detected and reported catastrophic fires with high speed and accuracy at any time of day or night, regardless of the shape or size of the fire. Finally, we compared the competitiveness of our method with other conventional fire-detection methods, using standard performance evaluation metrics to confirm the classification results achieved.
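Loading a stock YOLOv5m model for frame-by-frame inference can be done through torch.hub as sketched below; the COCO-pretrained checkpoint, confidence threshold, and image path are placeholders, since the authors use their own fire-trained weights.

```python
import torch

# Stock YOLOv5m via torch.hub; a fire-specific checkpoint would normally replace it.
model = torch.hub.load("ultralytics/yolov5", "yolov5m", pretrained=True)
model.conf = 0.4                         # assumed confidence threshold

results = model("indoor_frame.jpg")      # hypothetical camera frame
results.print()                          # class, confidence, and box summary
boxes = results.xyxy[0]                  # tensor of (x1, y1, x2, y2, conf, cls)
```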
Affiliation(s)
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461-701, Gyeonggi-do, Korea
15
Zhao Q, Zheng C, Ma W. An Improved Crucible Spatial Bubble Detection Based on YOLOv5 Fusion Target Tracking. Sensors (Basel, Switzerland) 2022; 22:6356. [PMID: 36080814] [PMCID: PMC9460891] [DOI: 10.3390/s22176356]
Abstract
A three-dimensional spatial bubble counting method is proposed to address the limitation that existing crucible bubble detection can only produce two-dimensional statistics. First, spatial video images of the transparent layer of the crucible are acquired with a digital microscope, and a quartz crucible bubble dataset is constructed independently. Second, to address the poor real-time performance and insufficient small-target detection capability of existing quartz crucible bubble detection methods, rich detailed feature information is retained by reducing the down-sampling depth in the YOLOv5 network structure. In the neck, dilated convolution is used to enlarge the feature map's receptive field and extract global semantic features, and an efficient channel attention network (ECA-Net) mechanism is added in front of the detection layer to better express significant channel characteristics. Furthermore, a tracking algorithm based on Kalman filtering and Hungarian matching is presented for bubble counting in crucible space. The experimental results demonstrate that the proposed detector effectively reduces the missed detection rate of tiny bubbles and increases the average detection precision from 96.27% to 98.76%, while halving the model weight and reaching a speed of 82 FPS. The improved detector performance significantly raises the tracker's accuracy, allowing real-time, high-precision counting of bubbles in quartz crucibles and making this an effective method for detecting crucible spatial bubbles.
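The Hungarian-matching step of the tracker can be sketched with SciPy's assignment solver as below; the IoU-based cost and threshold are assumptions, and the Kalman prediction step is omitted.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(track_boxes, det_boxes, iou_threshold=0.3):
    """Associate tracked bubbles with new detections via the Hungarian algorithm (sketch)."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / (union + 1e-9)

    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)          # minimise total (1 - IoU) cost
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]
```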
Affiliation(s)
- Qian Zhao
- School of Communication and Information Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
- Chao Zheng
- School of Communication and Information Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
- Wenyue Ma
- School of Communication and Information Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
- Xi’an Dishan Vision Technology Limited Company, Xi’an 712044, China
16
Automatic Speech Recognition Method Based on Deep Learning Approaches for Uzbek Language. Sensors (Basel, Switzerland) 2022; 22:3683. [PMID: 35632092] [PMCID: PMC9147241] [DOI: 10.3390/s22103683]
Abstract
Communication has been an important aspect of human life, civilization, and globalization for thousands of years. Biometric analysis, education, security, healthcare, and smart cities are only a few examples of speech recognition applications. Most studies have mainly concentrated on English, Spanish, Japanese, or Chinese, disregarding other low-resource languages, such as Uzbek, leaving their analysis open. In this paper, we propose an End-To-End Deep Neural Network-Hidden Markov Model speech recognition model and a hybrid Connectionist Temporal Classification (CTC)-attention network for the Uzbek language and its dialects. The proposed approach reduces training time and improves speech recognition accuracy by effectively using CTC objective function in attention model training. We evaluated the linguistic and lay-native speaker performances on the Uzbek language dataset, which was collected as a part of this study. Experimental results show that the proposed model achieved a word error rate of 14.3% using 207 h of recordings as an Uzbek language training dataset.
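The CTC part of the hybrid CTC-attention objective can be illustrated with PyTorch's built-in loss, as in the sketch below; the tensor shapes and symbol count are assumptions, and in hybrid training this term is typically interpolated with the attention decoder's cross-entropy loss.

```python
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

T, N, C = 120, 8, 40                              # time steps, batch, symbols incl. blank (assumed)
log_probs = torch.randn(T, N, C).log_softmax(-1)  # stand-in for encoder outputs
targets = torch.randint(1, C, (N, 25))            # dummy label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 25, dtype=torch.long)

loss_ctc = ctc(log_probs, targets, input_lengths, target_lengths)
# Hybrid objective (schematically): loss = a * loss_ctc + (1 - a) * loss_attention
```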
17
Mukhiddinov M, Abdusalomov AB, Cho J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors (Basel, Switzerland) 2022; 22:3307. [PMID: 35590996] [PMCID: PMC9103130] [DOI: 10.3390/s22093307]
Abstract
The growing aging population suffers from high levels of vision and cognitive impairment, often resulting in a loss of independence. Such individuals must perform crucial everyday tasks such as cooking and heating with systems and devices designed for visually unimpaired individuals, which do not take into account the needs of persons with visual and cognitive impairment. Thus, the visually impaired persons using them run risks related to smoke and fire. In this paper, we propose a vision-based fire detection and notification system using smart glasses and deep learning models for blind and visually impaired (BVI) people. The system enables early detection of fires in indoor environments. To perform real-time fire detection and notification, the proposed system uses image brightness and a new convolutional neural network employing an improved YOLOv4 model with a convolutional block attention module. The h-swish activation function is used to reduce the running time and increase the robustness of YOLOv4. We adapt our previously developed smart glasses system to capture images and inform BVI people about fires and other surrounding objects through auditory messages. We create a large fire image dataset with indoor fire scenes to accurately detect fires. Furthermore, we develop an object mapping approach to provide BVI people with complete information about surrounding objects and to differentiate between hazardous and nonhazardous fires. The proposed system shows an improvement over other well-known approaches in all fire detection metrics such as precision, recall, and average precision.
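The h-swish activation mentioned above is the cheap approximation x * ReLU6(x + 3) / 6; a minimal PyTorch module is shown below for reference.

```python
import torch.nn as nn

class HSwish(nn.Module):
    """h-swish activation: x * ReLU6(x + 3) / 6 (sketch)."""
    def forward(self, x):
        return x * nn.functional.relu6(x + 3.0) / 6.0

# In an improved YOLOv4-style network, this module would replace heavier activations
# to cut inference time while keeping a swish-like shape.
```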
Affiliation(s)
- Mukhriddin Mukhiddinov
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Korea
- Akmalbek Bobomirzaevich Abdusalomov
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Jinsoo Cho
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Korea
18
Khan F, Xu Z, Sun J, Khan FM, Ahmed A, Zhao Y. Recent Advances in Sensors for Fire Detection. Sensors (Basel, Switzerland) 2022; 22:3310. [PMID: 35590999] [PMCID: PMC9100504] [DOI: 10.3390/s22093310]
Abstract
Fire is one of the major contributing factors to fatalities, property damage, and economic disruption, and a large number of fire incidents across the world cause devastation beyond measure every year. To minimize their impacts, the implementation of innovative and effective fire early-warning technologies is essential. Although research publications on fire detection technology have addressed the issue to some extent, the field still confronts hurdles in decreasing false alerts, improving sensitivity and dynamic response, and providing protection for costly and complicated installations. In this review, we aim to provide a comprehensive analysis of current and emerging practices in fire detection and monitoring, with an emphasis on methods that detect fire through the continuous monitoring of variables such as temperature, flame, gaseous content, and smoke, along with their respective benefits and drawbacks, measuring standards, and parameter measurement spans. Current research directions and challenges related to fire detection technology and future perspectives on fabricating advanced fire sensors are also provided. We hope this review can provide inspiration for fire sensor research dedicated to the development of advanced fire detection techniques.
Affiliation(s)
- Fawad Khan
- College of Textile and Clothing Engineering, Soochow University, Suzhou 215123, China
- Zhiguang Xu
- China-Australia Institute for Advanced Materials and Manufacturing, Jiaxing University, Jiaxing 314001, China
- Junling Sun
- Shandong Qingdao Petroleum Branch, SINOPEC Sales Co., Ltd., Qingdao 266071, China
- Fazal Maula Khan
- School of Materials Science and Engineering, Beihang University, Beijing 100191, China
- Adnan Ahmed
- College of Textile and Clothing Engineering, Soochow University, Suzhou 215123, China
- Yan Zhao
- College of Textile and Clothing Engineering, Soochow University, Suzhou 215123, China