1. Mumtaz N, Ejaz N, Rida I, Khan MA, Lee MY. Towards Real-world Violence Recognition via Efficient Deep Features and Sequential Patterns Analysis. Mobile Networks and Applications 2024; 29:1326-1335. [DOI: 10.1007/s11036-024-02319-7]
2. Abbasi A, Queirós S, da Costa NMC, Fonseca JC, Borges J. Sensor Fusion Approach for Multiple Human Motion Detection for Indoor Surveillance Use-Case. Sensors 2023; 23:3993. [PMID: 37112337] [PMCID: PMC10146993] [DOI: 10.3390/s23083993]
Abstract
Multi-human detection and tracking in indoor surveillance is a challenging task due to factors such as occlusions, illumination changes, and complex human-human and human-object interactions. In this study, we address these challenges by exploring the benefits of a low-level sensor fusion approach that combines grayscale and neuromorphic vision sensor (NVS) data. We first generate a custom dataset using an NVS camera in an indoor environment. We then conduct a comprehensive study, experimenting with different image features and deep learning networks, followed by a multi-input fusion strategy designed to mitigate overfitting. Our primary goal is to determine the best input feature types for multi-human motion detection using statistical analysis. We find a significant difference between the input features of the optimized backbones, with the best strategy depending on the amount of available data. Specifically, under a low-data regime, event-based frames appear to be the preferred input feature type, while higher data availability favors the combined use of grayscale and optical flow features. Our results demonstrate the potential of sensor fusion and deep learning techniques for multi-human tracking in indoor surveillance, although further studies are needed to confirm our findings.
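The abstract's best-performing high-data input, grayscale combined with optical flow, can be assembled with standard OpenCV calls. A minimal sketch follows; the helper name and channel layout are assumptions for illustration, not the paper's code.

```python
import cv2
import numpy as np

def fused_input(prev_gray, gray):
    """Stack a grayscale frame with its dense optical flow into one
    3-channel tensor, mimicking the combined feature type the study
    found useful under higher data availability (illustrative only)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Channels: normalized intensity, horizontal flow, vertical flow.
    return np.dstack([gray.astype(np.float32) / 255.0,
                      flow[..., 0], flow[..., 1]])
```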
Affiliation(s)
- Ali Abbasi: Algorithmic Center, University of Minho, 4800-058 Azurém, Portugal
- Sandro Queirós: School of Medicine, University of Minho, 4710-057 Gualtar, Portugal
- Nuno M. C. da Costa: Algorithmic Center, University of Minho, 4800-058 Azurém, Portugal; 2Ai-School of Technology, IPCA, 4750-810 Barcelos, Portugal
- Jaime C. Fonseca: Algorithmic Center, University of Minho, 4800-058 Azurém, Portugal
- João Borges: Algorithmic Center, University of Minho, 4800-058 Azurém, Portugal; 2Ai-School of Technology, IPCA, 4750-810 Barcelos, Portugal
3. Performance Optimization of Object Tracking Algorithms in OpenCV on GPUs. Applied Sciences 2022. [DOI: 10.3390/app12157801]
Abstract
Machine-learning-based computer vision is increasingly versatile and is being leveraged by a wide range of smart devices. Because the performance/energy budget of computing units in smart devices is limited, careful implementation of computer vision algorithms is critical. In this paper, we analyze the performance bottlenecks of two well-known computer vision algorithms used for object tracking: object detection and optical flow, as implemented in the Open-source Computer Vision library (OpenCV). Our in-depth analysis of their implementation shows that it fails to fully utilize Open Computing Language (OpenCL) accelerators (e.g., GPUs). Based on this analysis, we propose several optimization strategies and apply them to the OpenCL implementation of the object tracking algorithms. Our evaluation results demonstrate that the performance of object detection is improved by up to 86% and that of optical flow by up to 10%. We believe our optimization strategies can be applied to other computer vision algorithms implemented in OpenCL.
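In Python, OpenCV's OpenCL path is reached through the transparent API (T-API) by wrapping images in cv2.UMat; whether a given kernel actually runs on the GPU depends on the build and device, which is part of what the authors analyze. A minimal sketch, assuming an OpenCL-enabled build:

```python
import cv2

cv2.ocl.setUseOpenCL(True)
frame = cv2.imread("frame.png")          # placeholder file name
u = cv2.UMat(frame)                      # upload to OpenCL device memory
gray = cv2.cvtColor(u, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
result = blurred.get()                   # download back to a numpy array
print("OpenCL active:", cv2.ocl.useOpenCL())
```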
4. An Obstacle Detection and Distance Measurement Method for Sloped Roads Based on VIDAR. Journal of Robotics 2022. [DOI: 10.1155/2022/5264347]
Abstract
Environmental perception systems provide information on the environment around a vehicle, which is key to active vehicle safety systems. However, these systems underperform on sloped roads, where real-time obstacle detection using monocular vision is a challenging problem. In this study, an obstacle detection and distance measurement method for sloped roads based on the vision-IMU-based detection and ranging method (VIDAR) is proposed. First, road images are collected and processed. Then, the road distance and slope information provided by a digital map is fed into VIDAR to detect and eliminate false obstacles (i.e., those for which no height can be calculated). The movement state of each obstacle is determined by tracking its lowest point. Finally, the method is analyzed through simulation and real-vehicle experiments. The results show that the proposed method has higher detection accuracy than YOLOv5s in a sloped-road environment and is not susceptible to interference from false obstacles. The most prominent contribution of this work is a sloped-road obstacle detection method capable of detecting obstacles of any type, without prior knowledge, while meeting real-time and accuracy requirements.
5. Guzman-Pando A, Chacon-Murguia MI. DeepFoveaNet: Deep Fovea Eagle-Eye Bioinspired Model to Detect Moving Objects. IEEE Transactions on Image Processing 2021; 30:7090-7100. [PMID: 34351859] [DOI: 10.1109/tip.2021.3101398]
Abstract
Birds of prey, especially eagles and hawks, have visual acuity two to five times better than that of humans. Among the peculiar characteristics of their biological vision is that they have two types of foveae: a shallow fovea used in binocular vision and a deep fovea for monocular vision. The deep fovea allows these birds to see objects at long distances and to identify them as possible prey. Inspired by the biological functioning of the deep fovea, a model called DeepFoveaNet is proposed in this paper. DeepFoveaNet is a convolutional neural network model to detect moving objects in video sequences. It emulates the monocular vision of birds of prey through two encoder-decoder convolutional neural network modules, combining the magnification capacity of the deep fovea with the context information of peripheral vision. Unlike the top-ranked moving-object detection algorithms in the Change Detection database (CDnet14), DeepFoveaNet depends neither on previously trained neural networks nor on a huge number of training images. Moreover, its architecture allows it to learn spatiotemporal information from the video. DeepFoveaNet was evaluated on the CDnet14 database, achieved high performance, and ranked among the ten best algorithms. These results demonstrate that the model is comparable to state-of-the-art moving-object detection algorithms and that, through its deep fovea module, it can detect very small moving objects that other algorithms cannot.
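As a rough illustration of the two-module idea, one encoder-decoder can process a magnified central crop (the "deep fovea") while another processes the full frame (the periphery). This PyTorch sketch is not the authors' architecture; the layer sizes, crop factor, and fusion scheme are all invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def enc_dec(in_ch):
    # Tiny encoder-decoder stand-in, far smaller than the paper's modules.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU())

class TwoBranchDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.fovea = enc_dec(3)       # sees a magnified central crop
        self.periphery = enc_dec(3)   # sees the whole frame for context
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, frame):
        b, c, h, w = frame.shape
        crop = frame[:, :, h // 4:3 * h // 4, w // 4:3 * w // 4]
        fovea_in = F.interpolate(crop, size=(h, w))   # 2x "magnification"
        fused = torch.cat([self.fovea(fovea_in), self.periphery(frame)], 1)
        return torch.sigmoid(self.head(fused))        # per-pixel motion mask
```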
6. Application of the Gaussian Mixture Model to Classify Stages of Electrical Tree Growth in Epoxy Resin. Sensors 2021; 21:2562. [PMID: 33917472] [PMCID: PMC8038711] [DOI: 10.3390/s21072562]
Abstract
In high-voltage (HV) insulation, electrical trees are an important degradation phenomenon strongly linked to partial discharge (PD) activity. They are very damaging, growing through the insulation material and forming a discharge conduction path, so their initiation and development have attracted the attention of the research community and call for better understanding and characterization. It is therefore important to adequately measure and characterize tree growth before it leads to complete failure of the system. In this paper, the Gaussian mixture model (GMM) is applied to cluster and classify the different growth stages of electrical trees in epoxy resin insulation. First, tree growth experiments were conducted, and PD data were captured from the initial stage to the breakdown stage of tree growth. Second, the GMM was applied to categorize the different electrical tree stages into clusters. The results show that PD dynamics vary with stress voltage and tree growth stage, and that electrical tree patterns with shorter breakdown times had identical clusters throughout the degradation stages. The breakdown time can thus be a key factor in determining the degradation level of PD patterns emanating from trees in epoxy resin, which is important for assessing the severity of electrical treeing degradation and hence for efficient asset management. The novelty of this work is that the GMM has been applied to electrical tree growth classification for the first time, and the optimal values of its hyperparameters, i.e., the number of clusters and the appropriate covariance structure, have been determined for the different electrical tree clusters.
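The two hyperparameters the abstract highlights map directly onto scikit-learn's GaussianMixture. The sketch below selects both by the Bayesian information criterion over a random stand-in feature matrix; the feature set and BIC-based selection are assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for a PD feature matrix: rows are discharge events, columns
# are features (e.g., phase angle, magnitude, time) - illustrative only.
X = np.random.rand(500, 3)

# Search over the two hyperparameters named in the abstract.
best = min(
    (GaussianMixture(n_components=k, covariance_type=cov,
                     random_state=0).fit(X)
     for k in range(2, 8)
     for cov in ("full", "tied", "diag", "spherical")),
    key=lambda gm: gm.bic(X))

labels = best.predict(X)  # cluster index per PD event (tree-growth stage)
print(best.n_components, best.covariance_type)
```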
7. Borecki M, Rychlik A, Olejnik A, Prus P, Szmidt J, Korwin-Pawlowski ML. Application of Wireless Accelerometer Mounted on Wheel Rim for Parked Car Monitoring. Sensors 2020; 20:6088. [PMID: 33114774] [PMCID: PMC7662569] [DOI: 10.3390/s20216088]
Abstract
Damage of various kinds can be inflicted on a parked car. Among these, loosening of the wheel bolts is difficult to detect during normal use of the car and is at the same time very dangerous to the health and life of the driver. Moreover, patents and publications present little information about electronic sensors, activated from inside the car, that could inform the driver about this dangerous situation. Thus, the main aim of this work is to propose and examine a sensing device that uses a wireless accelerometer head to detect loosening of the wheel-fixing bolts before a ride starts. The proposed sensing device consists of a wireless accelerometer head, an assembly interface, and a receiver unit. The assembly interface between the head and the inner part of the rim enables correct operation of the system. The data processing algorithm developed for the receiver unit enables proper detection of the unscrewing of bolts. Moreover, the tested algorithm is resistant to interference signals generated in the accelerometer head by vehicles and people passing close by.
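The listing does not reproduce the receiver's algorithm, but its stated rejection of passing-traffic interference suggests some form of sustained-activity test. The sketch below is purely hypothetical: the function name, window length, and thresholds are all invented.

```python
import numpy as np

def bolts_loosened(accel, fs, rms_threshold=0.5, min_duration_s=2.0):
    """Flag bolt unscrewing from rim-mounted accelerometer samples
    (1-D numpy array, sampling rate fs in Hz), ignoring brief transients
    such as a person brushing past the car. Hypothetical sketch."""
    win = int(0.5 * fs)                       # 0.5 s analysis windows
    rms = np.sqrt([np.mean(accel[i:i + win] ** 2)
                   for i in range(0, len(accel) - win + 1, win)])
    active = rms > rms_threshold
    need = int(min_duration_s / 0.5)          # windows of sustained activity
    run = 0
    for a in active:
        run = run + 1 if a else 0
        if run >= need:
            return True
    return False
```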
Affiliation(s)
- Michal Borecki: Institute of Microelectronics and Optoelectronics, Warsaw University of Technology, 00-662 Warszawa, Poland
- Arkadiusz Rychlik: Faculty of Technical Sciences, University of Warmia and Mazury in Olsztyn, 10-719 Olsztyn, Poland
- Arkadiusz Olejnik: Faculty of Technical Sciences, University of Warmia and Mazury in Olsztyn, 10-719 Olsztyn, Poland
- Jan Szmidt: Institute of Microelectronics and Optoelectronics, Warsaw University of Technology, 00-662 Warszawa, Poland
- Michael L. Korwin-Pawlowski: Département d'informatique et d'ingénierie, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
8. Area-Efficient Vision-Based Feature Tracker for Autonomous Hovering of Unmanned Aerial Vehicle. Electronics 2020. [DOI: 10.3390/electronics9101591]
Abstract
In this paper, we propose a vision-based feature tracker for the autonomous hovering of an unmanned aerial vehicle (UAV) and present an area-efficient hardware architecture for its integration into a flight control system-on-chip, which is essential for small UAVs. The proposed feature tracker is based on the Shi–Tomasi algorithm for feature detection and the pyramidal Lucas–Kanade (PLK) algorithm for feature tracking. By applying an efficient hardware structure that leverages the computations shared between the Shi–Tomasi and PLK algorithms, the proposed feature tracker offers good tracking performance with fewer hardware resources than existing feature tracker implementations. To evaluate its tracking performance, we compared it with GPS-based trajectories of a drone in various flight environments, such as lawn, asphalt, and sidewalk blocks; the proposed tracker exhibited an average accuracy of 0.039 in terms of normalized root-mean-square error (NRMSE). The feature tracker was designed in the Verilog hardware description language and implemented on a field-programmable gate array (FPGA). It occupies 2744 slices, 25 DSPs, and 93 Kbit of memory, and supports real-time processing at 417 FPS at an operating frequency of 130 MHz for 640 × 480 VGA images.
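The same two-stage pipeline has a standard software counterpart in OpenCV, useful as a golden reference when verifying a hardware design. A minimal sketch (file names and parameter values are placeholders; the paper's resource-sharing optimization is not visible at this level):

```python
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Stage 1: Shi-Tomasi feature detection.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

# Stage 2: pyramidal Lucas-Kanade feature tracking.
nxt, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts, None, winSize=(21, 21), maxLevel=3)

ok = status.ravel() == 1
drift = np.mean(nxt[ok] - pts[ok], axis=0)  # mean shift, e.g., for hover hold
```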
9. An FPGA Based Tracking Implementation for Parkinson's Patients. Sensors 2020; 20:3189. [PMID: 32512749] [PMCID: PMC7309050] [DOI: 10.3390/s20113189]
Abstract
This paper presents a study on the optimization of a tracking system designed for patients with Parkinson's disease and tested at a day hospital center. The work significantly improves the efficiency of the computer-vision-based system in terms of energy consumption and hardware requirements. More specifically, it optimizes the performance of background subtraction, in which every frame is segmented against a background previously characterized by a Gaussian mixture model (GMM). This module is the most demanding part in terms of computational resources, and this paper therefore proposes a method for its implementation on a low-cost development board based on the Zynq XC7Z020 SoC (system on chip). The platform used is the ZedBoard, which combines an ARM processor unit and an FPGA. It achieves real-time performance and low power consumption while accurately performing the target task. The results and achievements of this study, validated in real medical settings, are discussed and analyzed.
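For context, GMM-based background subtraction is available off the shelf in OpenCV as the MOG2 subtractor; the sketch below shows the software baseline of the stage the authors move onto the FPGA, not the Zynq implementation itself (the video file name and parameters are placeholders).

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("ward.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # per-pixel foreground/background label
cap.release()
```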
10. Jiménez F. Perception Sensors for Road Applications. Sensors 2019; 19:5294. [PMID: 31805654] [PMCID: PMC6928634] [DOI: 10.3390/s19235294]
Abstract
New assistance systems and autonomous driving applications for road vehicles imply ever-greater requirements for perception systems, which are necessary to increase the robustness of decisions and to avoid false positives or false negatives [...].
Affiliation(s)
- Felipe Jiménez: Instituto Universitario de Investigación del Automóvil (INSIA), Universidad Politécnica de Madrid, 28031 Madrid, Spain
11. Cui Z, Jiang K, Wang T. Unsupervised Moving Object Segmentation from Stationary or Moving Camera based on Multi-frame Homography Constraints. Sensors 2019; 19:4344. [PMID: 31597308] [PMCID: PMC6806239] [DOI: 10.3390/s19194344]
Abstract
Moving object segmentation is a fundamental task for many vision-based applications. In the past decade, it has been performed for stationary cameras and moving cameras separately. In this paper, we show that moving object segmentation can be addressed in a unified framework for both types of camera. The proposed method consists of two stages. (1) In the first stage, a novel multi-frame homography model is generated to describe the background motion. The inliers and outliers of that model are then classified as background trajectories and moving object trajectories by the designed cumulative acknowledgment strategy. (2) In the second stage, a superpixel-based Markov random field model is used to refine the spatial accuracy of the initial segmentation and obtain the final pixel-level labeling, integrating trajectory classification information, a dynamic appearance model, and spatiotemporal cues. The proposed method overcomes the limitations of existing object segmentation algorithms and bridges the gap between stationary and moving cameras. The algorithm was tested on several challenging open datasets, and experiments show that it yields significant performance improvements over state-of-the-art techniques, both quantitatively and qualitatively.
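The core of the first stage can be illustrated with a two-frame RANSAC homography fit in OpenCV: inliers follow the background (camera) motion, while outliers are candidate moving-object trajectories. The paper's model is multi-frame, so this two-frame sketch, with its invented function name and threshold, is only a simplification.

```python
import cv2
import numpy as np

def split_trajectories(pts_prev, pts_curr, reproj_thresh=3.0):
    """pts_prev/pts_curr: Nx2 float32 arrays of matched track points."""
    H, inlier_mask = cv2.findHomography(
        pts_prev, pts_curr, cv2.RANSAC, reproj_thresh)
    inliers = inlier_mask.ravel().astype(bool)
    background = pts_curr[inliers]    # consistent with background motion
    moving = pts_curr[~inliers]       # candidate moving-object points
    return H, background, moving
```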
Affiliation(s)
- Zhigao Cui: Xi'an Research Institute of High-Tech, Xi'an 710025, China
- Ke Jiang: Xi'an Research Institute of High-Tech, Xi'an 710025, China
- Tao Wang: Xi'an Research Institute of High-Tech, Xi'an 710025, China