1
Guo L, Ge P, Shi Z. Multi-Object Trajectory Prediction Based on Lane Information and Generative Adversarial Network. Sensors (Basel, Switzerland) 2024; 24:1280. [PMID: 38400437] [PMCID: PMC10893212] [DOI: 10.3390/s24041280] [Received: 01/21/2024] [Revised: 02/09/2024] [Accepted: 02/12/2024]
Abstract
Most current trajectory prediction algorithms have difficulty simulating actual traffic behavior and still suffer from large prediction errors. This paper therefore proposes a multi-object trajectory prediction algorithm based on lane information and foresight information. A Hybrid Dilated Convolution module based on the Channel Attention mechanism (CA-HDC) is developed to extract features; it improves lane feature extraction in complicated environments and addresses the poor robustness of the traditional PINet. A lane information fusion module and a trajectory adjustment module based on foresight information are developed. A socially acceptable trajectory prediction scheme with Generative Adversarial Networks (S-GAN) is developed to reduce the error of the trajectory prediction algorithm. Lane detection accuracy in special scenarios such as crowded, shadow, arrow, crossroad, and night is improved on the CULane dataset: the average F1-measure of the proposed lane detection is increased by 4.1% compared to the original PINet. Trajectory prediction tests on D2-City indicate that the average displacement error of the proposed algorithm is reduced by 4.27% and the final displacement error by 7.53%. The proposed algorithm achieves good results in both lane detection and multi-object trajectory prediction tasks.
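The average displacement error (ADE) and final displacement error (FDE) cited above are the standard trajectory prediction metrics: the mean Euclidean error over all predicted timesteps and the error at the last timestep, respectively. A minimal sketch of both (the function name and array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def ade_fde(pred, gt):
    """ADE and FDE for a single trajectory.

    pred, gt: (T, 2) arrays of predicted and ground-truth (x, y)
    positions over T future timesteps.
    """
    errors = np.linalg.norm(pred - gt, axis=-1)  # per-step Euclidean error
    return errors.mean(), errors[-1]             # (ADE, FDE)
```

In multi-agent benchmarks these values are typically averaged over all agents and, for multimodal predictors such as S-GAN, taken as the minimum over sampled trajectories.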
Affiliation(s)
- Lie Guo
- School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China
- Ningbo Institute, Dalian University of Technology, Ningbo 315016, China
- Pingshu Ge
- College of Mechanical & Electronic Engineering, Dalian Minzu University, Dalian 116600, China
- Zhenzhou Shi
- School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China
2
Piadyk Y, Rulff J, Brewer E, Hosseini M, Ozbay K, Sankaradas M, Chakradhar S, Silva C. StreetAware: A High-Resolution Synchronized Multimodal Urban Scene Dataset. Sensors (Basel, Switzerland) 2023; 23:3710. [PMID: 37050773] [PMCID: PMC10099242] [DOI: 10.3390/s23073710] [Received: 02/24/2023] [Revised: 03/20/2023] [Accepted: 03/20/2023]
Abstract
Limited access to high-quality data is an important barrier in the digital analysis of urban settings, including applications within computer vision and urban design. Diverse forms of data collected from sensors in areas of high activity in the urban environment, particularly at street intersections, are valuable resources for researchers interpreting the dynamics between vehicles, pedestrians, and the built environment. In this paper, we present a high-resolution audio, video, and LiDAR dataset of three urban intersections in Brooklyn, New York, totaling almost 8 unique hours. The data were collected with custom Reconfigurable Environmental Intelligence Platform (REIP) sensors that were designed with the ability to accurately synchronize multiple video and audio inputs. The resulting data are novel in that they are inclusively multimodal, multi-angular, high-resolution, and synchronized. We demonstrate four ways the data could be utilized: (1) to discover and locate occluded objects using multiple sensors and modalities, (2) to associate audio events with their respective visual representations using both video and audio modes, (3) to track the amount of each type of object in a scene over time, and (4) to measure pedestrian speed using multiple synchronized camera views. In addition to these use cases, our data are available for other researchers to carry out analyses related to applying machine learning to understanding the urban environment (in which existing datasets may be inadequate), such as pedestrian-vehicle interaction modeling and pedestrian attribute recognition. Such analyses can help inform decisions made in the context of urban sensing and smart cities, including accessibility-aware urban design and Vision Zero initiatives.
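Use case (4), measuring pedestrian speed from synchronized views, ultimately reduces to dividing ground-plane displacement by elapsed time once detections from the calibrated cameras are projected into world coordinates. A minimal sketch under assumed inputs (metric ground-plane positions with synchronized timestamps; this is not the authors' pipeline):

```python
import numpy as np

def mean_speed(positions, timestamps):
    """Average walking speed of one tracked pedestrian.

    positions:  (N, 2) ground-plane coordinates in metres.
    timestamps: (N,) synchronized capture times in seconds.
    """
    # Total path length as the sum of step-to-step displacements.
    path_length = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
    return path_length / (timestamps[-1] - timestamps[0])  # metres per second
```

The accuracy of such an estimate depends directly on how tightly the camera streams are synchronized, which is the dataset's stated design goal.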
Affiliation(s)
- Yurii Piadyk
- Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Joao Rulff
- Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Ethan Brewer
- Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Maryam Hosseini
- Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Kaan Ozbay
- Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Claudio Silva
- Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
3
Reyes-Muñoz A, Guerrero-Ibáñez J. Vulnerable Road Users and Connected Autonomous Vehicles Interaction: A Survey. Sensors (Basel, Switzerland) 2022; 22:4614. [PMID: 35746397] [PMCID: PMC9229412] [DOI: 10.3390/s22124614] [Received: 05/16/2022] [Revised: 06/14/2022] [Accepted: 06/15/2022]
Abstract
Within the vehicular traffic ecosystem there is a group of users known as Vulnerable Road Users (VRUs), which includes pedestrians, cyclists, and motorcyclists, among others. Connected autonomous vehicles (CAVs), in turn, combine two sets of technologies: communication technologies that keep the vehicle ubiquitously connected, and automation technologies that assist or replace the human driver during the driving process. Autonomous vehicles are envisioned as a viable way to reduce road accidents, providing a safe environment for all road users, and especially for the most vulnerable. One of the problems facing autonomous vehicles is how to create mechanisms that facilitate their integration, not only into the mobility environment but also into road society, in a safe and efficient way. In this paper, we analyze and discuss how this integration can take place, reviewing the work developed in recent years at each stage of vehicle-human interaction, analyzing the challenges faced by vulnerable users, and proposing solutions that contribute to addressing these challenges.
Affiliation(s)
- Angélica Reyes-Muñoz
- Computer Architecture Department, Polytechnic University of Catalonia, 08860 Barcelona, Spain
4
Xiao Z, Wang J, Han L, Guo S, Cui Q. Application of Machine Vision System in Food Detection. Front Nutr 2022; 9:888245. [PMID: 35634395] [PMCID: PMC9131190] [DOI: 10.3389/fnut.2022.888245] [Received: 03/02/2022] [Accepted: 04/22/2022] Open
Abstract
Food processing technology is an important part of modern life globally and will undoubtedly play an increasingly significant role in the future development of industry. Food quality and safety are societal concerns, and food health is one of the most important aspects of food processing. However, ensuring food quality and safety is a complex process that necessitates huge investments in labor. Currently, machine-vision-based image analysis is widely used in the food industry to monitor food quality, greatly assisting researchers and industry in improving food inspection efficiency. Meanwhile, the use of deep learning in machine vision has significantly improved the intelligence of food identification. This paper reviews the application of machine vision in food detection from the perspective of both the hardware and software of machine vision systems, introduces the current state of research on various forms of machine vision, and provides an outlook on the challenges that machine vision systems face.
Affiliation(s)
- Zhifei Xiao
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, School of Mechanical Engineering, National Demonstration Center for Experimental Mechanical Engineering Education, Shandong University, Jinan, China
- Jilai Wang
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, School of Mechanical Engineering, National Demonstration Center for Experimental Mechanical Engineering Education, Shandong University, Jinan, China
- Lu Han
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, School of Mechanical Engineering, National Demonstration Center for Experimental Mechanical Engineering Education, Shandong University, Jinan, China
- Shubiao Guo
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, School of Mechanical Engineering, National Demonstration Center for Experimental Mechanical Engineering Education, Shandong University, Jinan, China
- Qinghao Cui
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, School of Mechanical Engineering, National Demonstration Center for Experimental Mechanical Engineering Education, Shandong University, Jinan, China
5
Zhou X, Ren H, Zhang T, Mou X, He Y, Chan CY. Prediction of Pedestrian Crossing Behavior Based on Surveillance Video. Sensors (Basel, Switzerland) 2022; 22:1467. [PMID: 35214369] [PMCID: PMC8876527] [DOI: 10.3390/s22041467] [Received: 01/07/2022] [Revised: 02/09/2022] [Accepted: 02/10/2022]
Abstract
Prediction of pedestrian crossing behavior is an important problem for the realization of autonomous driving. Current research on pedestrian crossing behavior prediction is mainly based on vehicle-mounted cameras. However, a vehicle camera's line of sight may be blocked by other vehicles or the road environment, making it difficult to obtain key information about the scene. Pedestrian crossing behavior prediction based on surveillance video can be deployed at key road sections or accident-prone areas to provide supplementary information for vehicle decision-making, thereby reducing the risk of accidents. To this end, we propose a pedestrian crossing behavior prediction network for surveillance video. The network integrates pedestrian posture, local context, and global context features through a new cross-stacked gated recurrent unit (GRU) structure to achieve accurate prediction of pedestrian crossing behavior. Applied to a surveillance video dataset from the University of California, Berkeley, our model achieves the best results in terms of accuracy, F1 score, and other metrics. In addition, we conducted experiments to study the effects of time to prediction and pedestrian speed on prediction accuracy. This paper demonstrates the feasibility of pedestrian crossing behavior prediction based on surveillance video and provides a reference for the application of edge computing to safety assurance in automated driving.
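Crossing prediction here is a binary classification task (will the pedestrian cross or not), so the accuracy and F1 results follow the usual confusion-matrix definitions. A minimal sketch of the F1 computation (the variable names are illustrative, not from the paper):

```python
def f1_score(y_true, y_pred):
    """F1 for binary labels, where 1 = pedestrian will cross."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```

F1 is preferred over raw accuracy when crossing and non-crossing events are imbalanced, which is common in surveillance footage.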
Affiliation(s)
- Xiao Zhou
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Hongyu Ren
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Tingting Zhang
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Xingang Mou
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Yi He
- Intelligent Transport Systems Research Center, Wuhan University of Technology, Wuhan 430063, China
- Ching-Yao Chan
- California Partners for Advanced Transportation Technology (PATH), University of California, Berkeley, Richmond, CA 94720-5800, USA