1. Buriboev AS, Abduvaitov A, Jeon HS. Integrating Color and Contour Analysis with Deep Learning for Robust Fire and Smoke Detection. Sensors (Basel) 2025;25:2044. PMID: 40218557; PMCID: PMC11991653; DOI: 10.3390/s25072044.
Abstract
Detecting fire and smoke is essential for maintaining safety in urban, industrial, and outdoor settings. This study proposes a concatenated convolutional neural network (CNN) model that combines deep learning with hybrid preprocessing methods, namely contour-based algorithms and color-characteristic analysis, to provide reliable and accurate fire and smoke detection. The technique was assessed on the D-Fire dataset, a benchmark covering a variety of situations, including dynamic surroundings and changing illumination. Experiments show that the proposed model outperforms both conventional techniques and state-of-the-art YOLO-based methods, achieving an accuracy of 0.989 and a recall of 0.983. To reduce false positives and false negatives, the hybrid architecture uses preprocessing to enhance regions of interest (ROIs), while pooling and fully connected layers provide computational efficiency and generalization. In contrast to current approaches, which frequently concentrate only on fire detection, the model's dual smoke and fire detection capability increases its adaptability. Although preprocessing adds a modest computational cost, the method's accuracy and resilience make it a dependable option for safety-critical real-world applications. This study sets a new standard for smoke and fire detection and charts a route for future developments in this crucial area.
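The color-characteristics stage described above is, at its simplest, a per-pixel rule test. A minimal sketch, assuming a classic flame-color heuristic (red channel dominant and above a brightness threshold); the thresholds and the rule itself are illustrative, not the paper's exact color model:

```python
def is_fire_colored(pixel, r_thresh=190):
    """Heuristic flame-color test: red dominates green, green exceeds or
    equals blue, and red is above an intensity threshold.
    (Illustrative rule; the paper's exact color model is not specified.)"""
    r, g, b = pixel
    return r > r_thresh and r >= g >= b

def fire_color_mask(image):
    """Return a binary mask (same layout as image) marking candidate fire pixels."""
    return [[1 if is_fire_colored(px) else 0 for px in row] for row in image]

# Toy 2x3 "image" of RGB tuples: two flame-like pixels, the rest background.
img = [[(255, 160, 40), (30, 30, 30), (200, 90, 120)],
       [(10, 80, 10), (240, 200, 100), (0, 0, 255)]]
mask = fire_color_mask(img)   # -> [[1, 0, 0], [0, 1, 0]]
```

In a full pipeline, connected regions of this mask would become the candidate ROIs passed to contour analysis and then to the CNN.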
Affiliation(s)
- Akmal Abduvaitov: Department of IT, Samarkand Branch of Tashkent University of Information Technologies, Samarkand 100084, Uzbekistan
- Heung Seok Jeon: Department of Computer Engineering, Konkuk University, Chungju 27478, Republic of Korea
2. Bolikulov F, Abdusalomov A, Nasimov R, Akhmedov F, Cho YI. Early Poplar (Populus) Leaf-Based Disease Detection through Computer Vision, YOLOv8, and Contrast Stretching Technique. Sensors (Basel) 2024;24:5200. PMID: 39204895; PMCID: PMC11360347; DOI: 10.3390/s24165200.
Abstract
Poplar (Populus) trees play a vital role in various industries and in environmental sustainability. They are widely used for paper production, timber, and as windbreaks, in addition to their significant contributions to carbon sequestration. Given their economic and ecological importance, effective disease management is essential. Convolutional Neural Networks (CNNs), particularly adept at processing visual information, are crucial for the accurate detection and classification of plant diseases. This study introduces a novel dataset of manually collected images of diseased poplar leaves from Uzbekistan and South Korea, enhancing the geographic diversity and application of the dataset. The disease classes consist of "Parsha (Scab)", "Brown-spotting", "White-Gray spotting", and "Rust", reflecting common afflictions in these regions. This dataset will be made publicly available to support ongoing research efforts. Employing the advanced YOLOv8 model, a state-of-the-art CNN architecture, we applied a Contrast Stretching technique prior to model training in order to enhance disease detection accuracy. This approach not only improves the model's diagnostic capabilities but also offers a scalable tool for monitoring and treating poplar diseases, thereby supporting the health and sustainability of these critical resources. This dataset, to our knowledge, will be the first of its kind to be publicly available, offering a valuable resource for researchers and practitioners worldwide.
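Contrast stretching of the kind applied above before YOLOv8 training is, in its simplest linear form, a min-max remapping of intensities. A minimal sketch on a 1-D strip of gray levels (the output range and rounding are illustrative choices, not the paper's exact preprocessing):

```python
def contrast_stretch(values, lo=0, hi=255):
    """Linear (min-max) contrast stretching: remap intensities so the
    darkest input value maps to `lo` and the brightest to `hi`."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:                      # flat input: nothing to stretch
        return list(values)
    return [round(lo + (v - vmin) * (hi - lo) / (vmax - vmin)) for v in values]

# A low-contrast strip confined to [100, 150] now spans the full [0, 255] range.
stretched = contrast_stretch([100, 110, 130, 150])   # -> [0, 51, 153, 255]
```

Stretching widens the gap between lesion and healthy-leaf intensities, which is why it can help a detector pick up subtle disease spots.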
Affiliation(s)
- Furkat Bolikulov: Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461-701, Republic of Korea
- Akmalbek Abdusalomov: Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461-701, Republic of Korea; Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Rashid Nasimov: Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Farkhod Akhmedov: Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461-701, Republic of Korea
- Young-Im Cho: Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461-701, Republic of Korea
3. Buriboev AS, Rakhmanov K, Soqiyev T, Choi AJ. Improving Fire Detection Accuracy through Enhanced Convolutional Neural Networks and Contour Techniques. Sensors (Basel) 2024;24:5184. PMID: 39204881; PMCID: PMC11360108; DOI: 10.3390/s24165184.
Abstract
In this study, a novel method combining contour analysis with a deep CNN is applied to fire detection. The method relies on two main algorithms: one that detects the color properties of fires, and another that analyzes their shape through contour detection. To overcome the disadvantages of previous methods, we generated a new labeled dataset consisting of small fire instances and complex scenarios. We enriched the dataset by selecting regions of interest (ROIs) containing small fires and complex environmental traits, extracted through color-characteristic and contour analysis, to better train our model on these more intricate features. Experimental results showed that our improved CNN model outperformed other networks: accuracy, precision, recall, and F1 score were 99.4%, 99.3%, 99.4%, and 99.5%, respectively, an improvement over the previous CNN model in all metrics. Our approach also beats many other state-of-the-art methods: dilated CNNs (98.1% accuracy), Faster R-CNN (97.8% accuracy), and ResNet (94.3%). These results suggest that the approach can benefit a variety of safety and security applications, ranging from home and business to industrial and outdoor settings.
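The accuracy, precision, recall, and F1 figures reported above come from standard confusion-matrix formulas. A short sketch with hypothetical counts (not the paper's data) shows how they relate; note that F1, being the harmonic mean, always lies between precision and recall:

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics used to report fire-detection
    results, computed from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a fire/no-fire test set of 200 images:
acc, prec, rec, f1 = detection_metrics(tp=96, fp=2, fn=1, tn=101)
```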
Affiliation(s)
- Abror Shavkatovich Buriboev: School of Computing, Department of AI-Software, Gachon University, Seongnam-si 13306, Republic of Korea; Department of Infocommunication Engineering, Tashkent University of Information Technologies, Tashkent 100084, Uzbekistan
- Khoshim Rakhmanov: Department of Digital and Educational Technologies, Samarkand Branch of Tashkent University of Information Technologies, Samarkand 140100, Uzbekistan
- Temur Soqiyev: Digital Technologies and Artificial Intelligence Research Institute, Tashkent 100125, Uzbekistan
- Andrew Jaeyong Choi: School of Computing, Department of AI-Software, Gachon University, Seongnam-si 13306, Republic of Korea
4. Chemnad K, Othman A. Digital accessibility in the era of artificial intelligence-Bibliometric analysis and systematic review. Front Artif Intell 2024;7:1349668. PMID: 38435800; PMCID: PMC10905618; DOI: 10.3389/frai.2024.1349668.
Abstract
Introduction: Digital accessibility involves designing digital systems and services so they can be accessed by all individuals, including those with visual, auditory, motor, or cognitive impairments. Artificial intelligence (AI) has the potential to enhance accessibility for people with disabilities and improve their overall quality of life.
Methods: This systematic review, covering academic articles from 2018 to 2023, focuses on AI applications for digital accessibility. Initially, 3,706 articles were screened from five scholarly databases: ACM Digital Library, IEEE Xplore, ScienceDirect, Scopus, and Springer.
Results: The analysis narrowed down to 43 articles and presents a classification framework based on applications, challenges, AI methodologies, and accessibility standards.
Discussion: The research emphasizes the predominant focus on AI-driven digital accessibility for visual impairments, revealing a critical gap in addressing speech and hearing impairments, autism spectrum disorder, neurological disorders, and motor impairments, and highlighting the need for a more balanced research distribution to ensure equitable support for all communities with disabilities. The study also points out a lack of adherence to accessibility standards in existing systems, stressing the urgency of a fundamental shift in designing solutions for people with disabilities. Overall, this research underscores the vital role of accessible AI in preventing exclusion and discrimination, urging a comprehensive approach to digital accessibility that caters to diverse disability needs.
5. Mai C, Chen H, Zeng L, Li Z, Liu G, Qiao Z, Qu Y, Li L, Li L. A Smart Cane Based on 2D LiDAR and RGB-D Camera Sensor-Realizing Navigation and Obstacle Recognition. Sensors (Basel) 2024;24:870. PMID: 38339588; PMCID: PMC10856969; DOI: 10.3390/s24030870.
Abstract
In this paper, an intelligent blind-guide system based on 2D LiDAR and RGB-D camera sensing is proposed and mounted on a smart cane. The system relies on a 2D LiDAR, an RGB-D camera, an IMU, GPS, a Jetson Nano B01, an STM32, and other hardware. Its main advantage is that the distance between the smart cane and obstacles can be measured by the 2D LiDAR using the Cartographer algorithm, achieving simultaneous localization and mapping (SLAM). At the same time, through an improved YOLOv5 algorithm, pedestrians, vehicles, pedestrian crosswalks, traffic lights, warning posts, stone piers, tactile paving, and other objects in front of the visually impaired user can be quickly and effectively identified. Laser SLAM and improved YOLOv5 obstacle-identification tests were carried out inside a teaching building on the campus of Hainan Normal University and on a pedestrian crossing on Longkun South Road in Haikou City, Hainan Province. The results show that the system can drive the omnidirectional wheels at the bottom of the smart cane, giving it a self-leading guide function like a "guide dog" that effectively steers the visually impaired around obstacles and toward a predetermined destination. The mapping and positioning accuracy of the system's laser SLAM is 1 m ± 7 cm, and its SLAM speed is 25~31 FPS, enabling short-distance obstacle avoidance and navigation both indoors and outdoors. The improved YOLOv5 identifies 86 types of objects. The recognition rates for pedestrian crosswalks and vehicles are 84.6% and 71.8%, respectively; the overall recognition rate for the 86 object types is 61.2%, and the system's obstacle-recognition speed is 25-26 FPS.
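The cane's short-range obstacle avoidance reduces, at its core, to scanning a sweep of LiDAR ranges for the nearest return inside a forward field of view. A hedged sketch (beam geometry, field-of-view width, and max-range handling are assumptions for illustration, not the system's actual parameters):

```python
import math

def nearest_obstacle(ranges, angle_min, angle_inc, fov_deg=60, max_range=8.0):
    """Scan a 2D LiDAR sweep and return (distance, angle_deg) of the closest
    return inside a forward field of view, or None if the path is clear.
    Readings at or beyond max_range are treated as 'no return'."""
    best = None
    half_fov = math.radians(fov_deg) / 2
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_inc
        if abs(angle) <= half_fov and 0.0 < r < max_range:
            if best is None or r < best[0]:
                best = (r, math.degrees(angle))
    return best

# Toy sweep: 9 beams from -90 to +90 degrees; obstacle ~1.2 m slightly left of center.
ranges = [8.0, 8.0, 3.5, 1.2, 2.0, 8.0, 8.0, 8.0, 8.0]
hit = nearest_obstacle(ranges, angle_min=-math.pi / 2, angle_inc=math.pi / 8)
```

A real system would feed such hits into the motor controller that steers the omnidirectional wheels away from the detected bearing.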
Affiliation(s)
- Chunming Mai: College of Physics and Electronic Engineering, Hainan Normal University, Haikou 571158, China
- Huaze Chen: College of Information Science and Technology, Hainan Normal University, Haikou 571158, China
- Lina Zeng, Zaijin Li, Guojun Liu, Zhongliang Qiao, Yi Qu, and Lin Li: College of Physics and Electronic Engineering; Key Laboratory of Laser Technology and Optoelectronic Functional Materials of Hainan Province; and Hainan International Joint Research Center for Semiconductor Lasers, Hainan Normal University, Haikou 571158, China
- Lianhe Li: Hainan International Joint Research Center for Semiconductor Lasers, Hainan Normal University, Haikou 571158, China
6. Kim SY, Mukhiddinov M. Data Anomaly Detection for Structural Health Monitoring Based on a Convolutional Neural Network. Sensors (Basel) 2023;23:8525. PMID: 37896618; PMCID: PMC10611100; DOI: 10.3390/s23208525.
Abstract
Structural health monitoring (SHM) has been used extensively in civil infrastructure for several decades. The status of civil constructions is monitored in real time using a wide variety of sensors, but determining the true state of a structure can be difficult because of abnormalities in the acquired data, commonly caused by extreme weather, faulty sensors, and structural damage. For civil structure monitoring to succeed, abnormalities must be detected quickly. Moreover, one form of abnormality generally predominates in SHM data, and this class imbalance severely hampers current anomaly detection: even cutting-edge damage-diagnostic methods are of little use without proper data-cleansing processes. To address this problem, this study proposes a hyperparameter-tuned convolutional neural network (CNN) for multiclass, imbalanced anomaly detection. The 1D CNN model is tested on a multiclass time series of anomaly data from a real-world cable-stayed bridge, with the dataset balanced through data augmentation as necessary. Balancing the database by augmenting and thereby enlarging the dataset yielded an overall accuracy of 97.6%.
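The class-balancing step described above can be sketched as oversampling minority classes with lightly jittered copies of their time series. The transform and its noise level are illustrative assumptions, since the abstract does not specify the exact augmentation applied to the bridge data:

```python
import random

def balance_by_augmentation(dataset, noise=0.01, seed=0):
    """Oversample minority classes by duplicating their signals with small
    Gaussian jitter until every class matches the majority-class count.
    `dataset` is a list of (signal, label) pairs."""
    rng = random.Random(seed)
    by_class = {}
    for signal, label in dataset:
        by_class.setdefault(label, []).append(signal)
    target = max(len(s) for s in by_class.values())
    balanced = []
    for label, signals in by_class.items():
        augmented = list(signals)
        while len(augmented) < target:
            base = rng.choice(signals)                       # pick a minority sample
            augmented.append([x + rng.gauss(0, noise) for x in base])
        balanced.extend((s, label) for s in augmented)
    return balanced

# 4 'normal' sensor windows vs. 1 'drift' anomaly window -> 4 vs. 4 after balancing.
data = [([0.0, 0.1, 0.0], "normal")] * 4 + [([0.0, 0.5, 1.0], "drift")]
out = balance_by_augmentation(data)
```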
Affiliation(s)
- Soon-Young Kim: Department of Physical Education, Gachon University, Seongnam 13120, Republic of Korea
- Mukhriddin Mukhiddinov: Department of Communication and Digital Technologies, University of Management and Future Technologies, Tashkent 100208, Uzbekistan
7. Saydirasulovich SN, Mukhiddinov M, Djuraev O, Abdusalomov A, Cho YI. An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images. Sensors (Basel) 2023;23:8374. PMID: 37896467; PMCID: PMC10610991; DOI: 10.3390/s23208374.
Abstract
Forest fires rank among the costliest and deadliest natural disasters globally. Identifying the smoke generated by forest fires is pivotal for the prompt suppression of developing fires. Nevertheless, existing techniques for detecting forest fire smoke face persistent issues, including a slow identification rate, suboptimal detection accuracy, and difficulty distinguishing smoke originating from small sources. This study presents an enhanced YOLOv8 model customized to unmanned aerial vehicle (UAV) images to address these challenges and attain higher detection precision. Firstly, the research incorporates Wise-IoU (WIoU) v3 as a bounding-box regression loss, supplemented by a reasonable gradient-allocation strategy that prioritizes samples of common quality; this enhances the model's capacity for precise localization. Secondly, the conventional convolution in the intermediate neck layer is substituted with the Ghost Shuffle Convolution mechanism, which reduces model parameters and expedites the convergence rate. Thirdly, recognizing the challenge of capturing salient features of forest fire smoke within intricate wooded settings, this study introduces the BiFormer attention mechanism, which directs the model's attention toward the feature intricacies of forest fire smoke while suppressing irrelevant, non-target background information. The experimental findings highlight the enhanced YOLOv8 model's effectiveness in smoke detection, achieving an average precision (AP) of 79.4%, a notable 3.3% improvement over the baseline. The model also registers robust values for average precision small (APS) and average precision large (APL), at 71.3% and 92.6%, respectively.
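The WIoU v3 regression loss adopted above is built on plain intersection-over-union between predicted and ground-truth boxes. A minimal IoU sketch (the focusing and gradient-allocation parts of WIoU are omitted; this is only the underlying overlap measure):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2). IoU underlies the WIoU bounding-box regression loss."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Predicted smoke box vs. ground truth: two half-overlapping 2x2 squares.
val = iou((0, 0, 2, 2), (1, 0, 3, 2))   # inter = 2, union = 6
```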
Affiliation(s)
- Mukhriddin Mukhiddinov and Oybek Djuraev: Department of Communication and Digital Technologies, University of Management and Future Technologies, Tashkent 100208, Uzbekistan
- Akmalbek Abdusalomov: Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea; Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young-Im Cho: Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
8. Abdusalomov AB, Mukhiddinov M, Whangbo TK. Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging. Cancers (Basel) 2023;15:4172. PMID: 37627200; PMCID: PMC10453020; DOI: 10.3390/cancers15164172.
Abstract
The rapid development of abnormal brain cells that characterizes a brain tumor is a major health risk for adults, since it can cause severe impairment of organ function and even death. These tumors come in a wide variety of sizes, textures, and locations. Magnetic resonance imaging (MRI) is a crucial tool for locating cancerous tumors, but detecting brain tumors manually is a difficult, time-consuming activity that can lead to inaccuracies. To address this, we provide a refined You Only Look Once version 7 (YOLOv7) model for the accurate detection of meningioma, glioma, and pituitary gland tumors within an improved brain tumor detection system. The visual quality of the MRI scans is enhanced through image-enhancement methods that apply different filters to the original pictures, and data augmentation is applied to the openly accessible brain tumor dataset to further improve training. The curated data cover a wide variety of cases: 2548 glioma images, 2658 pituitary tumor images, 2582 meningioma images, and 2500 non-tumor images. We incorporated the Convolutional Block Attention Module (CBAM) into YOLOv7 to further enhance its feature extraction, allowing better emphasis on salient regions linked with brain malignancies, and added a Spatial Pyramid Pooling Fast+ (SPPF+) layer to the network's core infrastructure to improve sensitivity. YOLOv7 now includes decoupled heads, which allow it to efficiently glean useful insights from a wide variety of data, and a Bi-directional Feature Pyramid Network (BiFPN) is used to speed up multi-scale feature fusion and better collect tumor-associated features. The outcomes verify the efficiency of the suggested method, which achieves higher overall tumor-detection accuracy than previous state-of-the-art models. As a result, this framework has considerable potential as a decision-support tool for experts diagnosing brain tumors.
Affiliation(s)
- Taeg Keun Whangbo: Department of Computer Engineering, Gachon University, Seongnam-si 13120, Republic of Korea
9. Avazov K, Jamil MK, Muminov B, Abdusalomov AB, Cho YI. Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches. Sensors (Basel) 2023;23:7078. PMID: 37631614; PMCID: PMC10458310; DOI: 10.3390/s23167078.
Abstract
Fire incidents onboard ships can have extensive and severe consequences for crew safety, cargo, the environment, finances, reputation, and more, so timely detection of fires is essential for quick response and effective mitigation. This paper presents a fire detection technique based on YOLOv7 (You Only Look Once version 7), incorporating improved deep learning algorithms. The YOLOv7 architecture, with an improved E-ELAN (extended efficient layer aggregation network) as its backbone, serves as the basis of our fire detection system; its enhanced feature-fusion technique makes it superior to its predecessors. To train the model, we collected 4622 images of various ship scenarios and performed data augmentation techniques such as rotation, horizontal and vertical flips, and scaling. Through rigorous evaluation, our model showcases enhanced fire-recognition capabilities that improve maritime safety, achieving an accuracy of 93% in detecting fires and thereby helping to minimize catastrophic incidents. Objects visually similar to fire may lead to false predictions and detections, but this can be controlled by expanding the dataset. The model can be utilized as a real-time fire detector in challenging environments and for small-object detection. Experimental results show that the proposed method can be used successfully for the protection of ships and for monitoring fires in ship port areas. Finally, we compared the performance of our method with those of recently reported fire-detection approaches, using widely adopted performance metrics to test the fire classification results achieved.
Affiliation(s)
- Kuldoshbay Avazov, Muhammad Kafeel Jamil, and Young-Im Cho: Department of Computer Engineering, Gachon University, Seongnam-si 461-701, Republic of Korea
- Bahodir Muminov: Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
10. Kutlimuratov A, Khamzaev J, Kuchkorov T, Anwar MS, Choi A. Applying Enhanced Real-Time Monitoring and Counting Method for Effective Traffic Management in Tashkent. Sensors (Basel) 2023;23:5007. PMID: 37299734; DOI: 10.3390/s23115007.
Abstract
This study describes an applied and enhanced real-time vehicle-counting system that is an integral part of intelligent transportation systems. The primary objective was to develop an accurate and reliable real-time vehicle-counting system to mitigate traffic congestion in a designated area. The proposed system can identify and track objects inside the region of interest and count detected vehicles. To enhance accuracy, we used the You Only Look Once version 5 (YOLOv5) model for vehicle identification owing to its high performance and short computing time. Vehicle tracking used the DeepSort algorithm, whose main components are the Kalman filter and the Mahalanobis distance, while vehicle counts were obtained with the proposed simulated-loop technique. Empirical results, obtained on video taken from a closed-circuit television (CCTV) camera on Tashkent roads, show that the counting system achieves 98.1% accuracy in 0.2408 s.
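DeepSort's gating step, built on the components named above, scores each detection against a track's Kalman-predicted position using the Mahalanobis distance. A 2-D sketch with a hand-inverted covariance (DeepSort's actual state is higher-dimensional; this is a simplified illustration):

```python
def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance between a 2-D measurement and a predicted track
    position, as used for gating in DeepSort-style trackers.
    `cov` is a symmetric 2x2 covariance matrix [[a, b], [b, d]]."""
    dx = point[0] - mean[0]
    dy = point[1] - mean[1]
    a, b = cov[0]
    _, d = cov[1]
    det = a * d - b * b
    # quadratic form dx^T * inv(cov) * dx, with inv(cov) written out by hand
    q = (d * dx * dx - 2 * b * dx * dy + a * dy * dy) / det
    return q ** 0.5

# Detection 3 px to the right of the predicted position, unit variance per axis:
dist = mahalanobis_2d((13.0, 5.0), (10.0, 5.0), [[1.0, 0.0], [0.0, 1.0]])
```

With unit covariance the result reduces to the Euclidean distance; a larger variance along an axis shrinks the distance along that axis, which is exactly why the tracker tolerates more positional error in uncertain directions.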
Affiliation(s)
- Alpamis Kutlimuratov and Ahyoung Choi: Department of AI-Software, Gachon University, Seongnam-si 13120, Republic of Korea
- Jamshid Khamzaev: Department of Information-Computer Technologies and Programming, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Temur Kuchkorov: Department of Computer Systems, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
11. Norkobil Saydirasulovich S, Abdusalomov A, Jamil MK, Nasimov R, Kozhamzharova D, Cho YI. A YOLOv6-Based Improved Fire Detection Approach for Smart City Environments. Sensors (Basel) 2023;23:3161. PMID: 36991872; PMCID: PMC10051218; DOI: 10.3390/s23063161.
Abstract
Authorities and policymakers in Korea have recently prioritized improving fire prevention and emergency response, seeking to enhance community safety by constructing automated fire detection and identification systems. This study examined the efficacy of YOLOv6, an object-identification system running on an NVIDIA GPU platform, at identifying fire-related items, analyzing its influence on fire detection and identification efforts in Korea using metrics such as object-identification speed and accuracy in time-sensitive real-world applications. We conducted trials on a fire dataset of 4000 photos collected through Google, YouTube, and other resources. According to the findings, YOLOv6's object-identification performance was 0.98, with a recall of 0.96 and a precision of 0.83, and the system achieved an MAE of 0.302%. These findings suggest that YOLOv6 is an effective technique for detecting and identifying fire-related items in photos. Multi-class object recognition using random forests, k-nearest neighbors, support vector machines, logistic regression, naive Bayes, and XGBoost was performed on the SFSC data to evaluate the system's capacity to identify fire-related objects. For fire-related objects, XGBoost achieved the highest object-identification accuracy, with values of 0.717 and 0.767, followed by random forest, with values of 0.468 and 0.510. Finally, we tested YOLOv6 in a simulated fire-evacuation scenario to gauge its practicality in emergencies; it accurately identified fire-related items in real time within a response time of 0.66 s. YOLOv6 is therefore a viable option for fire detection and recognition in Korea, with the XGBoost classifier providing the highest object-identification accuracy.
Affiliation(s)
- Akmalbek Abdusalomov, Muhammad Kafeel Jamil, and Young-Im Cho: Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
- Rashid Nasimov: Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Dinara Kozhamzharova: Department of Information System, International Information Technology University, Almaty 050000, Kazakhstan
12
|
Turimov Mustapoevich D, Muhamediyeva Tulkunovna D, Safarova Ulmasovna L, Primova H, Kim W. Improved Cattle Disease Diagnosis Based on Fuzzy Logic Algorithms. SENSORS (BASEL, SWITZERLAND) 2023; 23:2107. [PMID: 36850710 PMCID: PMC9965944 DOI: 10.3390/s23042107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 02/02/2023] [Accepted: 02/07/2023] [Indexed: 06/18/2023]
Abstract
The health and productivity of animals, as well as farmers' financial well-being, can be significantly impacted by cattle illnesses. Accurate and timely diagnosis is therefore essential for effective disease management and control. In this study, we consider the development of models and algorithms for diagnosing diseases in cattle based on Sugeno fuzzy inference. To achieve this goal, an analytical review of mathematical methods for diagnosing animal diseases and of soft computing methods for solving classification problems was performed. Based on the clinical signs of diseases, an algorithm was proposed to build a knowledge base for diagnosing diseases in cattle; this algorithm serves to increase the reliability of informative features. Based on the proposed algorithm, a program for diagnosing diseases in cattle was developed, and a computational experiment was performed. The results of the computational experiment serve as additional decision-making tools for diagnosing a disease in cattle. Using the developed program, a Sugeno fuzzy logic model was built for diagnosing diseases in cattle, and the adequacy of the results obtained from the model was analyzed. The processes of solving several existing (model) classification and evaluation problems and comparing the results with several existing algorithms are considered. The results obtained make it possible to promptly diagnose diseases and perform certain therapeutic measures, as well as reduce the time of data analysis and increase the efficiency of diagnosing cattle. The scientific novelty of this study is the creation of an algorithm for building a knowledge base and the improvement of the algorithm for constructing the Sugeno fuzzy logic model for diagnosing diseases in cattle. The findings of this study can be widely used in veterinary medicine to solve the problems of diagnosing diseases in cattle and to substantiate decision-making in intelligent systems.
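A zero-order Sugeno model of the kind the abstract describes fires each rule to a degree and returns a weighted average of crisp rule outputs. The sketch below is a minimal illustration; the membership functions, rule consequents, and the body-temperature input are invented assumptions, not the paper's knowledge base:

```python
# Minimal zero-order Sugeno fuzzy inference sketch. All membership
# functions, rule outputs, and input values here are illustrative
# assumptions, not the paper's cattle-disease knowledge base.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno(temp_c: float) -> float:
    """Map a body temperature to a disease-risk score in [0, 1]."""
    # Rule firing strengths from fuzzified body temperature.
    w_normal = tri(temp_c, 37.0, 38.5, 40.0)
    w_fever  = tri(temp_c, 38.5, 40.0, 41.5)
    # Zero-order Sugeno: each rule has a crisp consequent (risk score).
    z_normal, z_fever = 0.2, 0.9
    num = w_normal * z_normal + w_fever * z_fever
    den = w_normal + w_fever
    return num / den if den else 0.0

risk = sugeno(39.25)  # both rules fire with weight 0.5 -> 0.55
```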
Affiliation(s)
- Dilmurod Turimov Mustapoevich
- Department of IT Convergence Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
- Dilnoz Muhamediyeva Tulkunovna
- Tashkent Institute of Irrigation and Agricultural Mechanization Engineers, National Research University, Tashkent 100000, Uzbekistan
- Lola Safarova Ulmasovna
- Samarkand State University of Veterinary Medicine, Livestock and Biotechnologies, Samarkand 140103, Uzbekistan
- Holida Primova
- Samarkand Branch of Tashkent University of Information Technologies, Samarkand 140100, Uzbekistan
- Wooseong Kim
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea

13
Abdusalomov AB, Islam BMDS, Nasimov R, Mukhiddinov M, Whangbo TK. An Improved Forest Fire Detection Method Based on the Detectron2 Model and a Deep Learning Approach. SENSORS (BASEL, SWITZERLAND) 2023; 23:1512. [PMID: 36772551 PMCID: PMC9920160 DOI: 10.3390/s23031512] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 01/19/2023] [Accepted: 01/23/2023] [Indexed: 06/18/2023]
Abstract
With an increase in both global warming and the human population, forest fires have become a major global concern. This can lead to climatic shifts and the greenhouse effect, among other adverse outcomes. Surprisingly, human activities have caused a disproportionate number of forest fires. Fast detection with high accuracy is the key to controlling this unexpected event. To address this, we proposed an improved forest fire detection method to classify fires based on a new version of the Detectron2 platform (a ground-up rewrite of the Detectron library) using deep learning approaches. Furthermore, a custom dataset was created and labeled for the training model, and it achieved higher precision than the other models. This robust result was achieved by improving the Detectron2 model in various experimental scenarios with a custom dataset and 5200 images. The proposed model can detect small fires over long distances during the day and night. The advantage of using the Detectron2 algorithm is its long-distance detection of the object of interest. The experimental results proved that the proposed forest fire detection method successfully detected fires with an improved precision of 99.3%.
Affiliation(s)
- Bappy MD Siful Islam
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
- Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Mukhriddin Mukhiddinov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea

14
Mamieva D, Abdusalomov AB, Mukhiddinov M, Whangbo TK. Improved Face Detection Method via Learning Small Faces on Hard Images Based on a Deep Learning Approach. SENSORS (BASEL, SWITZERLAND) 2023; 23:502. [PMID: 36617097 PMCID: PMC9824614 DOI: 10.3390/s23010502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Revised: 12/09/2022] [Accepted: 12/28/2022] [Indexed: 06/17/2023]
Abstract
Most facial recognition and face analysis systems start with facial detection. Early techniques, such as Haar cascades and histograms of oriented gradients, mainly relied on features that had been manually developed from particular images. However, these techniques are unable to handle images taken in untamed situations. Deep learning's rapid development in computer vision has sped up the development of a number of deep learning-based face detection frameworks, many of which have significantly improved accuracy in recent years. The difficulty of detecting small-scale, oddly positioned, occluded, blurred, and partially occluded faces in uncontrolled conditions is a problem of face identification that has been explored for many years but has not yet been entirely resolved. In this paper, we propose a RetinaNet baseline, a single-stage face detector, to handle the challenging face detection problem. We made network improvements that boosted detection speed and accuracy. In experiments, we used two popular datasets, WIDER FACE and FDDB. Specifically, on the WIDER FACE benchmark, our proposed method achieves an AP of 41.0 at a speed of 11.8 FPS with a single-scale inference strategy and an AP of 44.2 with a multi-scale inference strategy, which are competitive results among one-stage detectors. We trained our model using the PyTorch framework, which provided an accuracy of 95.6% for the faces that were successfully detected. The experimental results show that our proposed model achieves seamless detection and recognition results under the performance evaluation metrics.
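The AP (average precision) figures quoted for WIDER FACE summarize a precision-recall curve into one number. A hedged sketch of the standard all-points computation, with an invented detection list rather than benchmark data:

```python
# Hedged sketch of how an AP number like those quoted on WIDER FACE is
# derived: sort detections by confidence, sweep the precision-recall
# curve, and accumulate area under it. The detection list and
# ground-truth count below are illustrative, not benchmark data.

def average_precision(scored_hits, num_gt):
    """scored_hits: list of (confidence, is_true_positive) detections;
    num_gt: number of ground-truth faces."""
    ordered = sorted(scored_hits, key=lambda sh: -sh[0])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, hit in ordered:
        tp += hit
        fp += 1 - hit
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # area under PR curve
        prev_recall = recall
    return ap

ap = average_precision([(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1)], 4)
```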
Affiliation(s)
- Dilnoza Mamieva
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
- Mukhriddin Mukhiddinov
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea

15
Safarov F, Temurbek K, Jamoljon D, Temur O, Chedjou JC, Abdusalomov AB, Cho YI. Improved Agricultural Field Segmentation in Satellite Imagery Using TL-ResUNet Architecture. SENSORS (BASEL, SWITZERLAND) 2022; 22:9784. [PMID: 36560151 PMCID: PMC9785557 DOI: 10.3390/s22249784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 12/10/2022] [Accepted: 12/10/2022] [Indexed: 06/17/2023]
Abstract
Currently, there is a growing population around the world, and this is particularly true in developing countries, where food security is becoming a major problem. Therefore, agricultural land monitoring, land use classification and analysis, and achieving high yields through efficient land use are important research topics in precision agriculture. Deep learning-based algorithms for the classification of satellite images provide more reliable and accurate results than traditional classification algorithms. In this study, we propose a transfer learning based residual UNet architecture (TL-ResUNet) model, which is a semantic segmentation deep neural network model of land cover classification and segmentation using satellite images. The proposed model combines the strengths of residual network, transfer learning, and UNet architecture. We tested the model on public datasets such as DeepGlobe, and the results showed that our proposed model outperforms the classic models initiated with random weights and pre-trained ImageNet coefficients. The TL-ResUNet model outperforms other models on several metrics commonly used as accuracy and performance measures for semantic segmentation tasks. Particularly, we obtained an IoU score of 0.81 on the validation subset of the DeepGlobe dataset for the TL-ResUNet model.
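The IoU score reported for the DeepGlobe validation subset is intersection-over-union on segmentation masks. A minimal sketch with tiny illustrative masks (not the dataset's):

```python
# Sketch of the IoU metric used in the DeepGlobe evaluation above.
# The binary masks here are tiny illustrative examples.

def iou(pred, target):
    """Intersection-over-union for binary masks given as nested lists."""
    inter = sum(p and t for row_p, row_t in zip(pred, target)
                for p, t in zip(row_p, row_t))
    union = sum(p or t for row_p, row_t in zip(pred, target)
                for p, t in zip(row_p, row_t))
    # Empty union means both masks are empty: treat as a perfect match.
    return inter / union if union else 1.0

pred   = [[1, 1], [0, 0]]
target = [[1, 0], [0, 0]]
score = iou(pred, target)  # 1 overlapping pixel / 2 in the union = 0.5
```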
Affiliation(s)
- Furkat Safarov
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
- Kuchkorov Temurbek
- Department of Computer Systems, Tashkent University of Information Technologies named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Djumanov Jamoljon
- Department of Computer Systems, Tashkent University of Information Technologies named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Ochilov Temur
- Department of Computer Systems, Tashkent University of Information Technologies named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Young-Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea

16
Mukhiddinov M, Abdusalomov AB, Cho J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. SENSORS (BASEL, SWITZERLAND) 2022; 22:9384. [PMID: 36502081 PMCID: PMC9740073 DOI: 10.3390/s22239384] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 11/30/2022] [Accepted: 11/30/2022] [Indexed: 06/17/2023]
Abstract
Wildfire is one of the most significant dangers and the most serious natural catastrophe, endangering forest resources, animal life, and the human economy. Recent years have witnessed a rise in wildfire incidents. The two main factors are persistent human interference with the natural environment and global warming. Early detection of fire ignition from initial smoke can help firefighters react to such blazes before they become difficult to handle. Previous deep-learning approaches for wildfire smoke detection have been hampered by small or untrustworthy datasets, making it challenging to extrapolate their performance to real-world scenarios. In this study, we propose an early wildfire smoke detection system using unmanned aerial vehicle (UAV) images based on an improved YOLOv5. First, we curated a 6000-image wildfire dataset using existing UAV images. Second, we optimized the anchor box clustering using the K-means++ technique to reduce classification errors. We then improved the network's backbone using a spatial pyramid pooling fast-plus layer to concentrate on small-sized wildfire smoke regions. Third, a bidirectional feature pyramid network was applied to obtain a more accessible and faster multi-scale feature fusion. Finally, network pruning and transfer learning approaches were implemented to refine the network architecture and detection speed, and to correctly identify small-scale wildfire smoke areas. The experimental results proved that the proposed method achieved an average precision of 73.6% and outperformed other one- and two-stage object detectors on a custom image dataset.
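The K-means++ anchor-box step the abstract mentions seeds cluster centroids over (width, height) pairs so that dissimilar box shapes are likely to become separate anchors. A minimal seeding sketch; the box list, cluster count, and seed are illustrative assumptions, not the paper's dataset or configuration:

```python
import random

# Hedged sketch of K-means++ seeding over anchor-box (width, height)
# pairs, the clustering idea described in the abstract above. The box
# list, k, and RNG seed are illustrative, not the paper's data.

def kmeans_pp_init(boxes, k, rng):
    """Pick k initial anchor centroids with K-means++ weighting."""
    centroids = [rng.choice(boxes)]
    while len(centroids) < k:
        # Squared distance from each box to its nearest chosen centroid.
        d2 = [min((w - cw) ** 2 + (h - ch) ** 2 for cw, ch in centroids)
              for w, h in boxes]
        total = sum(d2)
        # Sample proportionally to d^2: far-away shapes are likelier seeds.
        r, acc = rng.random() * total, 0.0
        for box, dist in zip(boxes, d2):
            acc += dist
            if acc >= r:
                centroids.append(box)
                break
    return centroids

rng = random.Random(0)
boxes = [(10, 12), (11, 13), (50, 60), (52, 58), (90, 30), (88, 32)]
anchors = kmeans_pp_init(boxes, 3, rng)
```

In a full pipeline these seeds would then be refined with ordinary k-means iterations (often with an IoU-based distance rather than Euclidean).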
17
Farkhod A, Abdusalomov AB, Mukhiddinov M, Cho YI. Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces. SENSORS (BASEL, SWITZERLAND) 2022; 22:8704. [PMID: 36433303 PMCID: PMC9698760 DOI: 10.3390/s22228704] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/07/2022] [Accepted: 11/09/2022] [Indexed: 06/16/2023]
Abstract
Owing to the availability of a wide range of emotion recognition applications in our lives, such as for mental status assessment, the demand for high-performance emotion recognition approaches remains high. Moreover, the wearing of facial masks became indispensable during the COVID-19 pandemic. In this study, we propose a graph-based emotion recognition method that adopts landmarks on the upper part of the face. Several pre-processing steps were applied, after which facial expression features were extracted from facial key points. The main steps of emotion recognition on masked faces include face detection using Haar cascades, landmark implementation through a MediaPipe face mesh model, and model training on seven emotional classes. The FER-2013 dataset was used for model training. An emotion detection model was first developed for non-masked faces; thereafter, landmarks were applied to the upper part of the face. After faces were detected and landmark locations were extracted, we captured the coordinates of emotional-class landmarks and exported them to a comma-separated values (CSV) file. The model weights were then transferred to the emotional classes. Finally, the landmark-based emotion recognition model for the upper facial parts was tested both on images and in real time using a web camera application. The results showed that the proposed model achieved an overall accuracy of 91.2% for seven emotional classes in the image application. The proposed model showed relatively higher accuracy for image-based emotion detection than for real-time emotion detection.
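The landmark-export step the abstract describes amounts to flattening each face's (x, y) landmark coordinates plus an emotion label into one CSV row. A minimal sketch; the coordinate values and header layout are illustrative assumptions:

```python
import csv
import io

# Sketch of the landmark-to-CSV step described above: flatten upper-face
# landmark coordinates plus an emotion label into a CSV row. The landmark
# values and header names are illustrative assumptions.

def landmarks_to_row(landmarks, label):
    """[(x, y), ...] + label -> flat [x0, y0, x1, y1, ..., label] row."""
    row = [coord for (x, y) in landmarks for coord in (x, y)]
    row.append(label)
    return row

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["x0", "y0", "x1", "y1", "label"])
writer.writerow(landmarks_to_row([(0.31, 0.42), (0.58, 0.40)], "happy"))
csv_text = buf.getvalue()
```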
18
Kutlimuratov A, Abdusalomov AB, Oteniyazov R, Mirzakhalilov S, Whangbo TK. Modeling and Applying Implicit Dormant Features for Recommendation via Clustering and Deep Factorization. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22218224. [PMID: 36365921 PMCID: PMC9654534 DOI: 10.3390/s22218224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Revised: 10/20/2022] [Accepted: 10/24/2022] [Indexed: 06/12/2023]
Abstract
E-commerce systems experience poor performance when the number of records in the customer database increases due to the gradual growth of customers and products. Applying implicit hidden features to the recommender system (RS) plays an important role in enhancing its performance because of the original dataset's sparseness. In particular, we can comprehend the relationship between products and customers by analyzing their hierarchically expressed hidden implicit features. Furthermore, the effectiveness of rating prediction and system customization increases when customer-added tag information is combined with hierarchically structured hidden implicit features. For these reasons, we concentrate on early grouping of comparable customers using a clustering technique as a first step, and then further enhance the efficacy of recommendations by obtaining implicit hidden features and combining them with the customer's tag information, which regularizes the deep-factorization procedure. The idea behind the proposed method is to cluster customers early via a customer rating matrix and deeply factorize a basic WNMF (weighted nonnegative matrix factorization) model to generate hierarchically structured hidden implicit features of customer preferences and product characteristics in each cluster, which reveals a deep relationship between them and regularizes the prediction procedure via an auxiliary parameter (tag information). The empirical findings supported the viability of the proposed approach. In particular, the MAE of the rating prediction was 0.8011 with a 60% training dataset size, while the error rate was 0.7965 with an 80% training dataset size. Moreover, the MAE rates were 0.8781 and 0.9046 in new 50- and 100-customer cold-start scenarios, respectively. The proposed model outperformed other baseline models that independently employed the major properties of customers, products, or tags in the prediction process.
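The base WNMF model the abstract deep-factorizes can be sketched at its smallest: observed ratings get weight 1, missing ones weight 0, and nonnegative factors are refined with multiplicative updates so that products of factors predict the missing cells. The tiny rank-1 example below is an illustrative assumption, not the paper's model or data:

```python
# Hedged rank-1 sketch of weighted NMF (WNMF), the base model the paper
# deep-factorizes. Weight 0 marks a missing rating; multiplicative
# updates fit only the observed cells. The matrix is illustrative.

R = [[5.0, 3.0, 0.0],
     [4.0, 0.0, 2.0]]
W = [[1, 1, 0],        # weight 0 = missing rating (value above ignored)
     [1, 0, 1]]

u = [1.0, 1.0]         # customer factors
v = [1.0, 1.0, 1.0]    # product factors

for _ in range(200):
    for i in range(2):  # update customer factors
        num = sum(W[i][j] * R[i][j] * v[j] for j in range(3))
        den = sum(W[i][j] * u[i] * v[j] * v[j] for j in range(3)) or 1e-12
        u[i] *= num / den
    for j in range(3):  # update product factors
        num = sum(W[i][j] * R[i][j] * u[i] for i in range(2))
        den = sum(W[i][j] * u[i] * u[i] * v[j] for i in range(2)) or 1e-12
        v[j] *= num / den

# Predicted full matrix: missing cells become rating predictions.
pred = [[u[i] * v[j] for j in range(3)] for i in range(2)]
err = sum(W[i][j] * (R[i][j] - pred[i][j]) ** 2
          for i in range(2) for j in range(3))
```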
Affiliation(s)
- Alpamis Kutlimuratov
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Korea
- Rashid Oteniyazov
- Department of Telecommunication Engineering, Nukus Branch of Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Nukus 230100, Uzbekistan
- Sanjar Mirzakhalilov
- Department of Information-Computer Technologies and Programming, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Korea

19
Abdusalomov AB, Safarov F, Rakhimov M, Turaev B, Whangbo TK. Improved Feature Parameter Extraction from Speech Signals Using Machine Learning Algorithm. SENSORS (BASEL, SWITZERLAND) 2022; 22:8122. [PMID: 36365819 PMCID: PMC9654697 DOI: 10.3390/s22218122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Revised: 10/14/2022] [Accepted: 10/20/2022] [Indexed: 06/16/2023]
Abstract
Speech recognition refers to the capability of software or hardware to receive a speech signal, identify the speaker's features in the speech signal, and recognize the speaker thereafter. In general, the speech recognition process involves three main steps: acoustic processing, feature extraction, and classification/recognition. The purpose of feature extraction is to represent a speech signal with a predetermined number of signal components, because all the information in the acoustic signal is excessively cumbersome to handle and some of it is irrelevant to the identification task. This study proposes a machine learning-based approach that performs feature parameter extraction from speech signals to improve the performance of speech recognition applications in real-time smart city environments. Moreover, the principle of mapping a block of main memory to the cache is used efficiently to reduce computing time; the block size of cache memory is a parameter that strongly affects cache performance. Implementing such processes in real-time systems requires high computation speed, which calls for modern technologies and fast algorithms that accelerate the extraction of feature parameters from speech signals. Problems with overclocking during the digital processing of speech signals have yet to be completely resolved. The experimental results demonstrate that the proposed method successfully extracts the signal features and achieves seamless classification performance compared with other conventional speech recognition algorithms.
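The memory-block-to-cache mapping the abstract alludes to can be illustrated with the textbook direct-mapped address split: a byte address decomposes into a tag, a cache-line index, and a block offset. The block size and line count below are illustrative assumptions, not the paper's hardware parameters:

```python
# Sketch of direct-mapped cache indexing, the main-memory-to-cache block
# mapping the abstract alludes to. Address bits split into
# tag | index | offset. Sizes below are illustrative assumptions.

BLOCK_SIZE = 64   # bytes per cache line (block)
NUM_LINES = 256   # number of lines in the cache

def cache_slot(addr: int):
    """Return (tag, index, offset) for a byte address."""
    offset = addr % BLOCK_SIZE     # position within the block
    block = addr // BLOCK_SIZE     # memory block number
    index = block % NUM_LINES      # which cache line the block maps to
    tag = block // NUM_LINES       # identifies the block within that line
    return tag, index, offset

tag, index, offset = cache_slot(0x12345)
```

A larger block size trades fewer index bits for a bigger offset, which is why the abstract notes that block size strongly affects cache performance for streaming workloads like speech frames.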
Affiliation(s)
- Furkat Safarov
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Korea
- Mekhriddin Rakhimov
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Boburkhon Turaev
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Korea