1
BakhtariAghdam F, Aliasgharzadeh S, Sadeghi-Bazargani H, Harzand-Jadidi S. Pedestrians' unsafe road-crossing behaviors in Iran: An observational-based study in West Azerbaijan. Traffic Injury Prevention 2023; 24:638-644. [PMID: 37486258] [DOI: 10.1080/15389588.2023.2237152]
Abstract
OBJECTIVE Pedestrians are among the most vulnerable road users in road traffic injuries (RTIs), and the pedestrian fatality rate in Iran is high, so it is worthwhile to investigate how pedestrians behave. This observational study aimed to investigate pedestrians' unsafe behaviors while crossing. METHODS This cross-sectional study examined the behavior of 1095 pedestrians (69.7% men) by videotaping them as they crossed at two intersections and three non-intersection sites on a weekend day and two working days, in the morning, at noon, and in the evening. The information obtained was classified into five domains: adherence to traffic rules, violations, environmental barriers, visibility, and distraction. Data were analyzed using Stata version 17. RESULTS About 60% of the pedestrians ignored the crosswalk and crossed the street wherever they wanted. More than 30% ignored passing vehicles and crossed the street inattentively. About 60% of the pedestrians committed violations. More than half of the pedestrians crossed diagonally or in a hurry at unsafe crossings. More than 35% wore dark clothing and had low visibility, and nearly 30% were distracted. Adolescent pedestrians failed to adhere to traffic rules about 6 times more often than young adult pedestrians, and significantly more pedestrians failed to adhere to traffic rules in the morning than in the evening. Men committed violations 1.47 times more often than women, and violations were significantly more frequent in the morning than in the evening. CONCLUSION The occurrence of pedestrians' unsafe behaviors in Maku was high, particularly among men and young adult pedestrians. It is therefore essential for different organizations to implement educational interventions via various media, as well as environmental interventions, to improve safe behavior among pedestrians.
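The relative-odds figures above (about 6 times, 1.47 times) are the kind of quantity typically estimated with logistic regression. The sketch below shows that estimation on synthetic data; it is a hypothetical illustration, not the authors' Stata analysis.
```python
# Hypothetical sketch: estimating an odds ratio such as "men committed
# violations 1.47 times more than women" via logistic regression, with
# exponentiated coefficients read as odds ratios. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
male = rng.integers(0, 2, n)                      # 1 = man, 0 = woman
# Simulate violations with higher odds for men (true OR ~ 1.5).
logit = -0.4 + np.log(1.5) * male
violated = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(male.astype(float))
fit = sm.Logit(violated.astype(float), X).fit(disp=0)
print("odds ratio (male):", round(np.exp(fit.params[1]), 2))
```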
Affiliation(s)
- Fatemeh BakhtariAghdam
- Department of Health Education and Promotion, School of Health, Tabriz University of Medical Sciences, Tabriz, Iran
- Road Traffic Injury Research Centre, Tabriz University of Medical Sciences, Tabriz, Iran
- Samaneh Aliasgharzadeh
- Department of Health Education and Promotion, School of Health, Tabriz University of Medical Sciences, Tabriz, Iran
- Sepideh Harzand-Jadidi
- Road Traffic Injury Research Centre, Tabriz University of Medical Sciences, Tabriz, Iran
2
Swathi HY, Shivakumar G. Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification. Mathematical Biosciences and Engineering 2023; 20:12529-12561. [PMID: 37501454] [DOI: 10.3934/mbe.2023558]
Abstract
The rapid emergence of advanced software systems, low-cost hardware, and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring, and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, limits the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task; the resulting loss of accuracy and reliability confines existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, a novel audio-visual multi-modality driven hybrid feature learning model is developed in this research for crowd analysis and classification. A hybrid feature extraction model was applied to extract deep spatio-temporal features using Gray-Level Co-occurrence Matrix (GLCM) descriptors and an AlexNet transfer learning model. After extracting the GLCM features and AlexNet deep features, horizontal concatenation was performed to fuse the two feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with fixed-size sampling, pre-emphasis, block framing, and Hann windowing, followed by extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, spectral entropy, spectral flux, spectral slope, and harmonics-to-noise ratio (HNR). Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was classified using a random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and an F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
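As a rough illustration of the fusion pipeline summarized above, the sketch below concatenates GLCM texture features with MFCC audio features and classifies them with a random forest. The AlexNet deep branch and the remaining acoustic descriptors (GTCC, spectral features, HNR) are omitted as assumptions of this simplification, and all data are synthetic.
```python
# Minimal sketch of the audio-visual feature-fusion idea: GLCM texture
# features (skimage) are concatenated with acoustic MFCC features
# (librosa) and classified with a random forest (scikit-learn).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
import librosa
from sklearn.ensemble import RandomForestClassifier

def glcm_features(frame_gray):
    """Texture descriptors from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(frame_gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def audio_features(signal, sr=16000):
    """Mean MFCC vector as a stand-in for the fuller acoustic feature set."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):  # two synthetic "crowd types"
    for _ in range(20):
        frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        audio = rng.standard_normal(16000).astype(np.float32)
        # Horizontal concatenation fuses the visual and acoustic views.
        X.append(np.hstack([glcm_features(frame), audio_features(audio)]))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```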
Affiliation(s)
- H Y Swathi
- Department of Electronics and Communication Engineering, Malnad College of Engineering, Visvesvaraya Technological University, Belagavi, India
- G Shivakumar
- Department of Electronics and Communication Engineering, AMC Engineering College, Visvesvaraya Technological University, Belagavi, India
3
Maya-Martínez SU, Argüelles-Cruz AJ, Guzmán-Zavaleta ZJ, Ramírez-Cadena MDJ. Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance. Front Robot AI 2023; 10:1052509. [PMID: 37008985] [PMCID: PMC10061079] [DOI: 10.3389/frobt.2023.1052509]
Abstract
Introduction: Wearable assistive devices for the visually impaired based on video cameras represent a rapidly evolving challenge, where one of the main problems is finding computer vision algorithms that can be implemented on low-cost embedded devices. Objectives and Methods: This work presents a Tiny You Only Look Once architecture for pedestrian detection, which can be implemented on low-cost wearable devices as an alternative for the development of assistive technologies for the visually impaired. Results: The recall of the proposed refined model improves on the original model by 71% when working with four anchor boxes and by 66% with six anchor boxes. Accuracy on the same data set increases by 14% and 25%, respectively, the F1 score improves by 57% and 55%, and the average accuracy of the models improves by 87% and 99%. The number of correctly detected objects was 3098 and 2892 for four and six anchor boxes, respectively, 77% and 65% better than the original model, which correctly detected 1743 objects. Discussion: The model was optimized for the Jetson Nano embedded system, a case study for low-power embedded devices, and for a desktop computer. In both cases, the graphics processing unit (GPU) and central processing unit (CPU) were tested, and a documented comparison of solutions aimed at serving visually impaired people was performed. Conclusion: In desktop tests with an RTX 2070S graphics card, image processing took about 2.8 ms. The Jetson Nano board processed an image in about 110 ms, offering the opportunity to generate alert notifications in support of visually impaired mobility.
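The anchor-box counts quoted above matter because Tiny-YOLO predicts offsets relative to fixed anchors. The sketch below shows one common way to choose anchors, k-means over training-box sizes with an IoU-based distance (a YOLOv2/v3-style procedure, assumed here rather than taken from this paper); the boxes are synthetic.
```python
# Hypothetical sketch: estimating YOLO anchor boxes by k-means over
# (width, height) pairs with 1 - IoU as the distance.
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors assuming shared top-left corners."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=4, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest anchor
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors

# Synthetic pedestrian-like boxes: tall and narrow (width, height) in pixels.
rng = np.random.default_rng(1)
boxes = np.column_stack([rng.uniform(20, 60, 500), rng.uniform(80, 200, 500)])
print(kmeans_anchors(boxes, k=4).round(1))
```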
Affiliation(s)
- Miguel-de-Jesús Ramírez-Cadena
- School of Engineering and Science, Tecnológico de Monterrey, Mexico City, Mexico
- Correspondence: Miguel-de-Jesús Ramírez-Cadena
4
RHL-track: visual object tracking based on recurrent historical localization. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08422-2]
5
Chen B, Meng F, Tang H, Tong G. Two-Level Attention Module Based on Spurious-3D Residual Networks for Human Action Recognition. Sensors (Basel) 2023; 23:1707. [PMID: 36772770] [PMCID: PMC9919151] [DOI: 10.3390/s23031707]
Abstract
In recent years, deep learning techniques have excelled at video action recognition. However, commonly used video action recognition models pay little attention to the differing importance of individual video frames, and of spatial regions within specific frames, which makes it difficult for them to adequately extract spatiotemporal features from video data. This paper proposes an action recognition method that addresses this problem by augmenting residual convolutional neural networks (CNNs) with a video frame attention module and a spatial attention module. At essentially negligible computational cost, the two-level attention module guides the network in what and where to emphasize or suppress, weighting feature information along the temporal and spatial dimensions respectively: it highlights the more important frames in the overall video sequence and the more important spatial regions in specific frames. Specifically, the video frame attention module and the spatial attention module are applied in succession to create video frame and spatial attention maps, aggregating the temporal and spatial dimensions of the CNN's intermediate feature maps into distinct feature descriptors and directing the network to focus on the more important video frames and the more strongly contributing spatial regions. Experimental results show that the network performs well on the UCF-101 and HMDB-51 datasets.
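A toy PyTorch sketch of the two-level idea follows: a frame (temporal) attention module that reweights whole frames, then a spatial attention module that reweights locations within each frame. It is an illustrative reconstruction under assumed shapes, not the authors' exact architecture.
```python
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    """Scalar weight per frame from globally pooled features."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // 4),
                                nn.ReLU(), nn.Linear(channels // 4, 1))

    def forward(self, x):                 # x: (B, T, C, H, W)
        pooled = x.mean(dim=(3, 4))       # (B, T, C) global average pool
        w = torch.softmax(self.fc(pooled).squeeze(-1), dim=1)  # (B, T)
        return x * w[:, :, None, None, None]

class SpatialAttention(nn.Module):
    """One attention value per spatial location, shared over channels."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                 # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        flat = x.reshape(b * t, c, h, w)
        stats = torch.cat([flat.mean(1, keepdim=True),
                           flat.amax(1, keepdim=True)], dim=1)  # (BT,2,H,W)
        mask = torch.sigmoid(self.conv(stats))                  # (BT,1,H,W)
        return (flat * mask).reshape(b, t, c, h, w)

x = torch.randn(2, 8, 64, 14, 14)          # batch of 8-frame feature maps
y = SpatialAttention()(FrameAttention(64)(x))
print(y.shape)                              # torch.Size([2, 8, 64, 14, 14])
```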
Affiliation(s)
- Bo Chen
- Science and Technology on Microsystem Laboratory, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 201800, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Fangzhou Meng
- Science and Technology on Microsystem Laboratory, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 201800, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Hongying Tang
- Science and Technology on Microsystem Laboratory, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 201800, China
- Guanjun Tong
- Science and Technology on Microsystem Laboratory, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 201800, China
6
Zhao X, Wang G, He Z, Jiang H. A survey of moving object detection methods: a practical perspective. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.06.104]
7
SRAI-LSTM: A Social Relation Attention-based Interaction-aware LSTM for human trajectory prediction. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.11.089]
8
Gong Y, Yang K, Liu Y, Lim KP, Ling N, Wu HR. Quantization Parameter Cascading for Surveillance Video Coding Considering All Inter Reference Frames. IEEE Transactions on Image Processing 2021; 30:5692-5707. [PMID: 34125681] [DOI: 10.1109/tip.2021.3087413]
Abstract
Video surveillance and its applications have become increasingly ubiquitous in modern daily life. In a video surveillance system, video coding is the critical enabling technology that determines the effective transmission and storage of surveillance videos. To meet the real-time or time-critical transmission requirements of video surveillance systems, the low-delay (LD) configuration of the High Efficiency Video Coding (HEVC) standard is usually used to encode surveillance videos. The coding efficiency of the LD configuration is closely tied to the quantization parameter cascading (QPC) technique, which selects or determines the QPs used for encoding. However, the QPC technique currently adopted for the LD configuration in the HEVC test model (HM) is suboptimal, since it does not take full account of the reference dependencies in coding. In this paper, an efficient QPC technique for surveillance video coding, referred to as QPC-SV, is proposed that considers all inter reference frames under the LD configuration. Experimental results demonstrate the efficacy of the proposed QPC-SV. Compared with the default QPC configuration in the HM, QPC-SV achieves significant rate-distortion gains, with average BD-rates of -9.35% and -9.76% under the LDP and LDB configurations, respectively.
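For context, the sketch below shows what a fixed low-delay QP cascade looks like: each frame's QP is the base QP plus an offset tied to its position in the group of pictures, so frequently referenced frames are coded more finely. The offsets mimic the default HM-style cascade that the paper improves upon; they are illustrative, not the QPC-SV values.
```python
# Illustrative low-delay QP cascade (HM-style offsets, assumed values).
BASE_QP = 32
# In the common 4-frame low-delay GOP, position 4 is the most-referenced
# frame, so it receives the smallest offset (finest quantization).
HM_STYLE_OFFSETS = {1: 3, 2: 2, 3: 3, 4: 1}

def cascade_qps(num_frames, base_qp=BASE_QP):
    """QP per frame: intra frame first, then the repeating LD cascade."""
    qps = [base_qp]                                  # frame 0: intra (I)
    for poc in range(1, num_frames):
        gop_pos = (poc - 1) % 4 + 1                  # position in the GOP
        qps.append(base_qp + HM_STYLE_OFFSETS[gop_pos])
    return qps

print(cascade_qps(9))  # [32, 35, 34, 35, 33, 35, 34, 35, 33]
```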
9
Dotti D, Popa M, Asteriadis S. Being the Center of Attention. ACM Transactions on Interactive Intelligent Systems 2020. [DOI: 10.1145/3338245]
Abstract
This article proposes a novel study of personality recognition using video data from different scenarios. Our goal is to jointly model nonverbal behavioral cues with contextual information for a robust, multi-scenario personality recognition system. To this end, we build a novel multi-stream Convolutional Neural Network (CNN) framework that considers multiple sources of information. From a given scenario, we extract spatio-temporal motion descriptors for every individual in the scene, spatio-temporal motion descriptors encoding social group dynamics, and proxemics descriptors encoding interaction with the surrounding context. All the proposed descriptors are mapped to the same feature space, which facilitates the overall learning effort. Experiments on two public datasets demonstrate the effectiveness of jointly modeling mutual person-context information, outperforming state-of-the-art results for personality recognition in two different scenarios. Finally, we present CNN class activation maps for each personality trait, shedding light on behavioral patterns linked to personality attributes.
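A rough sketch of the multi-stream fusion described here, with assumed dimensions and layer sizes: three descriptor streams (individual motion, group dynamics, context) are projected into a shared feature space and concatenated before a trait head. This is an illustrative reconstruction, not the paper's architecture.
```python
import torch
import torch.nn as nn

class MultiStreamFusion(nn.Module):
    def __init__(self, dims=(128, 64, 32), shared=64, num_traits=5):
        super().__init__()
        # One projection per stream, mapping into the same feature space.
        self.projections = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, shared), nn.ReLU()) for d in dims])
        self.head = nn.Linear(shared * len(dims), num_traits)

    def forward(self, streams):              # list of (B, d_i) tensors
        fused = torch.cat([p(s) for p, s in zip(self.projections, streams)],
                          dim=1)
        return self.head(fused)              # one score per trait

streams = [torch.randn(8, 128), torch.randn(8, 64), torch.randn(8, 32)]
print(MultiStreamFusion()(streams).shape)    # torch.Size([8, 5])
```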
10
Rudenko A, Palmieri L, Herman M, Kitani KM, Gavrila DM, Arras KO. Human motion trajectory prediction: a survey. Int J Rob Res 2020. [DOI: 10.1177/0278364920917446]
Abstract
With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning in a way that takes such predictions into account, are key tasks for self-driving vehicles, service robots, and advanced surveillance systems. This article provides a survey of human motion trajectory prediction. We review, analyze, and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics, discuss limitations of the state of the art, and outline directions for further research.
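As a concrete anchor for the taxonomy, the simplest motion model that surveys of this kind cover is the constant-velocity baseline, sketched below as an assumed illustration rather than a method from the article.
```python
# A constant-velocity predictor: the canonical trajectory baseline.
import numpy as np

def predict_constant_velocity(track, horizon, dt=0.4):
    """Extrapolate future (x, y) positions from the last observed velocity.

    track: (T, 2) array of observed positions sampled every dt seconds.
    """
    velocity = (track[-1] - track[-2]) / dt          # last finite difference
    steps = np.arange(1, horizon + 1)[:, None] * dt  # (horizon, 1)
    return track[-1] + steps * velocity              # (horizon, 2)

observed = np.array([[0.0, 0.0], [0.4, 0.1], [0.8, 0.2]])  # walking NE
print(predict_constant_velocity(observed, horizon=3))
```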
Affiliation(s)
- Andrey Rudenko
- Robert Bosch GmbH, Corporate Research, Germany
- Mobile Robotics and Olfaction Lab, Örebro University, Sweden
- Kai O Arras
- Robert Bosch GmbH, Corporate Research, Germany
11
Yang H, Yuan C, Zhang L, Sun Y, Hu W, Maybank SJ. STA-CNN: Convolutional Spatial-Temporal Attention Learning for Action Recognition. IEEE Transactions on Image Processing 2020; 29:5783-5793. [PMID: 32275599] [DOI: 10.1109/tip.2020.2984904]
Abstract
Convolutional Neural Networks have achieved excellent success at object recognition in still images. However, their improvement over traditional methods for recognizing actions in videos is not as significant, because raw videos usually contain far more redundant or irrelevant information than still images. In this paper, we propose a Spatial-Temporal Attentive Convolutional Neural Network (STA-CNN) that automatically selects discriminative temporal segments and focuses on informative spatial regions. The STA-CNN model incorporates a Temporal Attention Mechanism and a Spatial Attention Mechanism into a unified convolutional network to recognize actions in videos. The novel Temporal Attention Mechanism automatically mines discriminative temporal segments from long and noisy videos. The Spatial Attention Mechanism first exploits the instantaneous motion information in optical flow features to locate motion-salient regions, and is then trained with an auxiliary classification loss and a Global Average Pooling layer to focus on the discriminative non-motion regions in the video frame. The STA-CNN model achieves state-of-the-art performance on two of the most challenging datasets, UCF-101 (95.8%) and HMDB-51 (71.5%).
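The auxiliary-loss detail is the part most easily missed. Below is a minimal, assumed PyTorch sketch of a spatial attention mask trained through a Global Average Pooling classification head, in the spirit of (but not identical to) the mechanism described; shapes and names are illustrative.
```python
# Sketch: an auxiliary GAP classification head attached to
# attention-weighted feature maps, so the spatial attention mask is
# trained by an extra classification loss.
import torch
import torch.nn as nn

class AuxGapHead(nn.Module):
    def __init__(self, channels, num_classes):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)   # spatial mask
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feat):                  # feat: (B, C, H, W)
        mask = torch.sigmoid(self.attn(feat))               # (B, 1, H, W)
        weighted = feat * mask
        pooled = weighted.mean(dim=(2, 3))                  # GAP -> (B, C)
        return self.fc(pooled), mask

feat = torch.randn(4, 512, 7, 7)
logits, mask = AuxGapHead(512, num_classes=101)(feat)
aux_loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 101, (4,)))
print(logits.shape, mask.shape, float(aux_loss))
```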
12
Wang Q, Chen M, Nie F, Li X. Detecting Coherent Groups in Crowd Scenes by Multiview Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020; 42:46-58. [PMID: 30307858] [DOI: 10.1109/tpami.2018.2875002]
Abstract
Detecting coherent groups is fundamentally important for crowd behavior analysis. A great deal of work has been devoted to this topic over the past few decades, but most of it is limited by insufficient utilization of crowd properties and arbitrary processing of individuals. In this study, a Multiview-based Parameter-Free framework (MPF) is proposed. Based on the L1-norm and L2-norm, we design two versions of the multiview clustering method that forms the main part of the proposed framework. This paper contributes on three fronts: (1) a new structural context descriptor is designed to characterize the structural properties of individuals in crowd scenes; (2) a self-weighted multiview clustering method is proposed to cluster feature points by incorporating their orientation and context similarities; and (3) a novel framework is introduced for group detection, which determines the number of groups automatically without any parameter or threshold to be tuned. The effectiveness of the proposed framework is evaluated on real-world crowd videos, and the experimental results show its promising performance on group detection. In addition, the proposed multiview clustering method is evaluated on a synthetic dataset and several standard benchmarks, demonstrating its superiority over state-of-the-art competitors.
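A minimal sketch of the self-weighting idea follows, assuming the common auto-weighted update w_v proportional to 1/(2*sqrt(loss_v)) rather than the paper's exact MPF formulation: each view's weight grows as its within-cluster scatter shrinks, so noisier views are down-weighted automatically.
```python
# Toy self-weighted multiview clustering on synthetic data.
import numpy as np
from sklearn.cluster import KMeans

def self_weighted_multiview_kmeans(views, k, iters=10, seed=0):
    """views: list of (N, d_v) feature matrices; returns labels, weights."""
    weights = np.full(len(views), 1.0 / len(views))
    labels = None
    for _ in range(iters):
        # Cluster on the weighted concatenation of views.
        stacked = np.hstack([w * v for w, v in zip(weights, views)])
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(stacked)
        # Per-view loss: within-cluster scatter in that view alone.
        losses = []
        for v in views:
            centers = np.stack([v[labels == c].mean(0) for c in range(k)])
            losses.append(((v - centers[labels]) ** 2).sum())
        weights = 1.0 / (2.0 * np.sqrt(np.asarray(losses) + 1e-12))
        weights /= weights.sum()
    return labels, weights

rng = np.random.default_rng(0)
base = np.repeat(np.eye(3), 30, axis=0)                     # 3 latent groups
views = [base + 0.1 * rng.standard_normal(base.shape),     # clean view
         base + 1.0 * rng.standard_normal(base.shape)]     # noisy view
labels, weights = self_weighted_multiview_kmeans(views, k=3)
print(weights.round(3))   # the cleaner view should receive more weight
```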
14
Jiménez AC, Anzola J, Jimenez-Triana A. Pedestrian counting estimation based on fractal dimension. Heliyon 2019; 5:e01449. [PMID: 31008391] [PMCID: PMC6458473] [DOI: 10.1016/j.heliyon.2019.e01449]
Abstract
Counting the number of pedestrians in urban environments has become an area of interest over the past few years. Its applications include the control of vehicular traffic lights, urban planning, market studies, and detection of abnormal behaviors. However, these tasks typically require computationally demanding intelligent algorithms that need to be trained on the environment being studied. This article presents a novel method for estimating pedestrian flow in uncontrolled environments using the fractal dimension, measured with the box-counting algorithm, which requires neither image pre-processing nor intelligent algorithms. Four scenarios were used to validate the method, the last of which was a low-light surveillance video; experimental results show a mean relative error of 4.92% when counting pedestrians. Comparison with techniques that depend on intelligent algorithms confirms that this method achieves improved performance in the estimation of pedestrian traffic.
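The box-counting measurement itself is compact enough to sketch. The code below estimates the fractal dimension of a (non-empty) binary image; the subsequent mapping from dimension to pedestrian count is omitted, and the example image is synthetic.
```python
# Minimal box-counting estimate of fractal dimension for a binary image.
import numpy as np

def box_counting_dimension(binary_img):
    """Slope of log(box count) vs log(1/box size) for a square 0/1 image."""
    n = binary_img.shape[0]
    sizes = [2 ** i for i in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        # Partition into s x s boxes and count boxes containing any pixel.
        trimmed = binary_img[: n - n % s, : n - n % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is 2-D, so the estimate should be near 2.
img = np.ones((256, 256), dtype=bool)
print(round(box_counting_dimension(img), 2))
```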
Affiliation(s)
- Andrés C Jiménez
- Department of Electronic Engineering, Fundación Universitaria Los Libertadores, Carrera 16 No. 63 A - 68, Bogotá, Colombia
- John Anzola
- Department of Electronic Engineering, Fundación Universitaria Los Libertadores, Carrera 16 No. 63 A - 68, Bogotá, Colombia
- Alexander Jimenez-Triana
- Department of Electronic Engineering, Fundación Universitaria Los Libertadores, Carrera 16 No. 63 A - 68, Bogotá, Colombia
- Department of Control Engineering, Universidad Distrital Francisco José de Caldas, Cll 74 Sur No. 68A - 20, Bogotá, Colombia
15
Tamaki T, Ogawa D, Raytchev B, Kaneda K. Semantic segmentation of trajectories with improved agent models for pedestrian behavior analysis. Adv Robot 2018. [DOI: 10.1080/01691864.2018.1554508]
Affiliation(s)
- Toru Tamaki
- Faculty of Engineering, Department of Information Engineering, Hiroshima University, Hiroshima, Japan
- Daisuke Ogawa
- Faculty of Engineering, Department of Information Engineering, Hiroshima University, Hiroshima, Japan
- Bisser Raytchev
- Faculty of Engineering, Department of Information Engineering, Hiroshima University, Hiroshima, Japan
- Kazufumi Kaneda
- Faculty of Engineering, Department of Information Engineering, Hiroshima University, Hiroshima, Japan
16
Lambert J, Liang L, Morales LY, Akai N, Carballo A, Takeuchi E, Narksri P, Seiya S, Takeda K. Tsukuba Challenge 2017 Dynamic Object Tracks Dataset for Pedestrian Behavior Analysis. Journal of Robotics and Mechatronics 2018. [DOI: 10.20965/jrm.2018.p0598]
Abstract
Navigation in social environments, in the absence of traffic rules, is the difficult task at the core of the annual Tsukuba Challenge. In this context, a better understanding of the soft rules that influence social dynamics is key to improving robot navigation. Prior research attempts to model social behavior through microscopic interactions, but the resulting emergent behavior depends heavily on the initial conditions, in particular the macroscopic setting. As such, data-driven studies of pedestrian behavior in a fixed environment may provide key insight into this macroscopic aspect, but appropriate data is scarce. To support this stream of research, we release an open-source dataset of dynamic object trajectories localized in a map of the 2017 Tsukuba Challenge environment. A data collection platform equipped with lidar, camera, IMU, and odometry repeatedly navigated the challenge's course, recording observations of passersby. Using a background map, we localized the platform in the environment, removed the static background from the point cloud data, clustered the remaining points into dynamic objects, and tracked their movements over time. The resulting Tsukuba Challenge Dynamic Object Tracks dataset features nearly 10,000 trajectories of pedestrians, cyclists, and other dynamic agents, in particular autonomous robots. We provide a 3D map of the environment as the global frame for all trajectories, and for each trajectory we provide, at regular time intervals, an estimated position, velocity, heading, and rotational velocity, as well as bounding boxes and segmented lidar point clouds for the objects. As an additional contribution, we discuss some discernible macroscopic patterns in the data.
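One step of the described pipeline, grouping the residual dynamic points into objects, can be sketched with DBSCAN. The parameters and synthetic 2-D points below are assumptions for illustration, not values from the dataset.
```python
# Illustrative sketch: after static background points are removed,
# remaining points are clustered into dynamic objects with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two pedestrians ~3 m apart plus scattered residual noise points.
ped_a = rng.normal([0.0, 0.0], 0.15, size=(40, 2))
ped_b = rng.normal([3.0, 1.0], 0.15, size=(40, 2))
noise = rng.uniform(-5, 8, size=(10, 2))
points = np.vstack([ped_a, ped_b, noise])

# eps ~ 0.5 m: points closer than that are considered the same object.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
for cluster in set(labels) - {-1}:          # -1 marks noise
    centroid = points[labels == cluster].mean(axis=0)
    print(f"object {cluster}: centroid at {centroid.round(2)}")
```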
17
Ullah H, Altamimi AB, Uzair M, Ullah M. Anomalous entities detection and localization in pedestrian flows. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.02.045]
19
Jacobs HO, Hughes OK, Johnson-Roberson M, Vasudevan R. Real-Time Certified Probabilistic Pedestrian Forecasting. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2719762]