1
Ušinskis V, Makulavičius M, Petkevičius S, Dzedzickis A, Bučinskas V. Towards Autonomous Driving: Technologies and Data for Vehicles-to-Everything Communication. Sensors (Basel). 2024;24:3411. [PMID: 38894203] [PMCID: PMC11174970] [DOI: 10.3390/s24113411]
Abstract
Autonomous systems are becoming increasingly relevant in our everyday life. The transportation field is no exception, and the smart-city concept raises new tasks and challenges for the development of autonomous systems, a topic that has been progressively researched in the literature. One of the main challenges is communication between different traffic objects. For instance, a mobile robot can work as a standalone autonomous system, reacting to a static environment and avoiding obstacles to reach a target. Nevertheless, more intensive communication and decision making are needed when additional dynamic objects and other autonomous systems are present in the same working environment. Traffic is a complicated environment consisting of vehicles, pedestrians, and various infrastructure elements. To apply autonomous systems in this kind of environment, it is important to integrate object localization and to guarantee functional and trustworthy communication between the elements. To achieve this, various sensors, communication standards, and devices are integrated via sensor fusion and machine learning methods. This work presents a review of vehicular communication systems, focusing on the sensors, communication standards, devices, machine learning methods, and vehicle-related data investigated in the literature, in order to identify gaps for future vehicular communication system development. Discussion and conclusions are presented at the end.
Affiliation(s)
- Vytautas Bučinskas
- Department of Mechatronics, Robotics and Digital Manufacturing, Vilnius Gediminas Technical University, LT-10105 Vilnius, Lithuania; (V.U.); (M.M.); (S.P.); (A.D.)
2
Liang L, Ma H, Zhao L, Xie X, Hua C, Zhang M, Zhang Y. Vehicle Detection Algorithms for Autonomous Driving: A Review. Sensors (Basel). 2024;24:3088. [PMID: 38793942] [PMCID: PMC11125132] [DOI: 10.3390/s24103088]
Abstract
Autonomous driving, as a pivotal technology in modern transportation, is progressively transforming the modalities of human mobility. In this domain, vehicle detection is a significant research direction that involves the intersection of multiple disciplines, including sensor technology and computer vision. In recent years, many excellent vehicle detection methods have been reported, but few studies have focused on summarizing and analyzing these algorithms. This work provides a comprehensive review of existing vehicle detection algorithms and discusses their practical applications in the field of autonomous driving. First, we provide a brief description of the tasks, evaluation metrics, and datasets for vehicle detection. Second, more than 200 classical and latest vehicle detection algorithms are summarized in detail, including those based on machine vision, LiDAR, millimeter-wave radar, and sensor fusion. Finally, this article discusses the strengths and limitations of different algorithms and sensors, and proposes future trends.
Affiliation(s)
- Liang Liang
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; (L.L.); (L.Z.); (X.X.)
- Haihua Ma
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; (L.L.); (L.Z.); (X.X.)
- Key Laboratory of Grain Information Processing and Control of Ministry of Education, Henan University of Technology, Zhengzhou 450001, China
- Le Zhao
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; (L.L.); (L.Z.); (X.X.)
- Key Laboratory of Grain Information Processing and Control of Ministry of Education, Henan University of Technology, Zhengzhou 450001, China
- Xiaopeng Xie
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; (L.L.); (L.Z.); (X.X.)
- Chengxin Hua
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; (L.L.); (L.Z.); (X.X.)
- Miao Zhang
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; (L.L.); (L.Z.); (X.X.)
- Key Laboratory of Grain Information Processing and Control of Ministry of Education, Henan University of Technology, Zhengzhou 450001, China
- Yonghui Zhang
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China; (L.L.); (L.Z.); (X.X.)
3
Zhang Z, Huang J, Hei G, Wang W. YOLO-IR-Free: An Improved Algorithm for Real-Time Detection of Vehicles in Infrared Images. Sensors (Basel). 2023;23:8723. [PMID: 37960423] [PMCID: PMC10648278] [DOI: 10.3390/s23218723]
Abstract
In the field of object detection algorithms, infrared vehicle detection holds significant importance. By sensing the thermal radiation emitted by vehicles, infrared sensors enable robust vehicle detection even at night or in adverse weather, enhancing traffic safety and the efficiency of intelligent driving systems. Current techniques for infrared vehicle detection struggle with low contrast, small objects, and real-time performance, and existing lightweight object detection algorithms have difficulty balancing detection speed and accuracy on this task. To address these issues, this paper presents YOLO-IR-Free, an anchor-free algorithm based on YOLOv7 with an improved attention mechanism for real-time detection of vehicles in infrared images. We introduce a new attention mechanism and network module to effectively capture the subtle textures and low-contrast features of infrared images, and we replace the anchor-based detection head with an anchor-free head to increase detection speed. Experimental results demonstrate that YOLO-IR-Free outperforms other methods in accuracy, recall, and average precision while maintaining good real-time performance.
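The anchor-free head mentioned in this abstract predicts objectness and box geometry directly per grid cell instead of regressing against anchor boxes. The following is a minimal illustrative sketch of such decoding; the function name, tensor shapes, and threshold are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def decode_anchor_free(center_logits, offsets, sizes, stride=8, score_thresh=0.5):
    """Decode an anchor-free head output into boxes (illustrative sketch).

    center_logits: (H, W) objectness scores (pre-sigmoid)
    offsets:       (H, W, 2) predicted (dx, dy) of the object centre within each cell
    sizes:         (H, W, 2) predicted box (w, h) in pixels
    Returns a list of (x1, y1, x2, y2, score) tuples.
    """
    scores = 1.0 / (1.0 + np.exp(-center_logits))  # sigmoid
    ys, xs = np.where(scores > score_thresh)
    boxes = []
    for y, x in zip(ys, xs):
        cx = (x + offsets[y, x, 0]) * stride  # cell index + offset -> image coords
        cy = (y + offsets[y, x, 1]) * stride
        w, h = sizes[y, x]
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                      float(scores[y, x])))
    return boxes
```

Because no anchor matching or per-anchor regression is needed, this decoding step is cheaper than its anchor-based counterpart, which is one reason such heads improve detection speed.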
Affiliation(s)
- Zixuan Zhang
- College of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Jiong Huang
- Business School, The Chinese University of Hong Kong, Hong Kong 999077, China
- Gawen Hei
- School of Physics, Mathematics and Computing, The University of Western Australia, Crawley, WA 6009, Australia
- Wei Wang
- Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science & Technology, Nanjing 210044, China
4
Zou J, Zheng H, Wang F. Real-Time Target Detection System for Intelligent Vehicles Based on Multi-Source Data Fusion. Sensors (Basel). 2023;23:1823. [PMID: 36850421] [PMCID: PMC9962490] [DOI: 10.3390/s23041823]
Abstract
To improve the identification accuracy of target detection for intelligent vehicles, a real-time target detection system based on multi-source fusion is proposed. Built on the ROS Melodic software environment and the NVIDIA Xavier hardware platform, the system integrates sensing devices such as millimeter-wave radar and a camera, and it realizes functions such as real-time target detection and tracking. First, the image data are processed by the You Only Look Once v5 (YOLOv5) network, which increases the speed and accuracy of identification; second, the millimeter-wave radar data are processed to provide more accurate target distance and velocity. To further improve accuracy, a sensor fusion method is used: the radar point cloud is projected onto the image, and target-tracking information is obtained through space-time synchronization, region-of-interest (ROI) identification, and data association. Finally, field tests indicate that the system achieves more accurate recognition and better scene adaptation in complex scenes.
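The projection step in this kind of radar-camera fusion maps each radar return into pixel coordinates through the extrinsic radar-to-camera transform and the camera intrinsics. A minimal sketch of the standard pinhole projection, assuming a calibrated setup (the function name and argument layout are illustrative, not the paper's code):

```python
import numpy as np

def project_radar_to_image(points_xyz, K, R, t):
    """Project radar points (N, 3) in the radar frame onto the image plane.

    K:    (3, 3) camera intrinsic matrix
    R, t: rotation (3, 3) and translation (3,) from radar frame to camera frame
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    cam = points_xyz @ R.T + t      # radar frame -> camera frame
    mask = cam[:, 2] > 0            # keep only points in front of the camera
    uvw = cam @ K.T                 # pinhole projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]   # perspective divide
    return uv, mask
```

Projected points falling inside a detector ROI can then be associated with that detection, which is how the radar's range and velocity measurements get attached to camera targets.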
5
Ding Y, Zhou K, He L, Zhang J, Yang H. Dynamic Simulation and Experiment of Marching Small Unmanned Ground Vehicles with Small Arms. Arabian Journal for Science and Engineering. 2022. [DOI: 10.1007/s13369-022-07443-8]
6
Generalized Single-Vehicle-Based Graph Reinforcement Learning for Decision-Making in Autonomous Driving. Sensors (Basel). 2022;22:4935. [PMID: 35808428] [PMCID: PMC9269790] [DOI: 10.3390/s22134935]
Abstract
In the autonomous driving process, the decision-making system mainly provides macro-control instructions based on the information captured by the sensing system. Learning-based algorithms have clear advantages in processing and understanding information from an increasingly complex driving environment. To incorporate the interactions between agents in the environment into the decision-making process, this paper proposes a generalized single-vehicle-based graph reinforcement learning algorithm (SGRL). The SGRL algorithm introduces graph convolution into the traditional deep Q-network (DQN) algorithm, adopts a single-agent training method, designs a more explicit incentive reward function, and significantly enlarges the action space. The SGRL algorithm is compared with a traditional DQN algorithm (NGRL) and a multi-agent training algorithm (MGRL) in a highway-ramp scenario. Results show that SGRL has outstanding advantages in network convergence, decision-making effect, and training efficiency.
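At the core of any DQN-style method like the one this abstract describes is the bootstrapped target used to regress the Q-network. A minimal sketch of the standard one-step target computation, under the usual DQN formulation (not the paper's specific reward design):

```python
import numpy as np

def dqn_targets(rewards, next_q, dones, gamma=0.99):
    """One-step DQN targets: y = r + gamma * (1 - done) * max_a Q_target(s', a).

    rewards: (B,) immediate rewards
    next_q:  (B, A) target-network Q-values for the next states
    dones:   (B,) 1.0 where the episode ended, else 0.0
    """
    return rewards + gamma * (1.0 - dones) * next_q.max(axis=1)
```

In a graph-based variant such as SGRL, the per-vehicle state features fed into the Q-network are first aggregated over the interaction graph; the target computation itself is unchanged.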
7
Gao X, Li X, Liu Q, Li Z, Yang F, Luan T. Multi-Agent Decision-Making Modes in Uncertain Interactive Traffic Scenarios via Graph Convolution-Based Deep Reinforcement Learning. Sensors (Basel). 2022;22:4586. [PMID: 35746364] [PMCID: PMC9230819] [DOI: 10.3390/s22124586]
Abstract
As one of the main elements of reinforcement learning, the design of the reward function often does not receive enough attention when reinforcement learning is applied in concrete settings, which leads to unsatisfactory performance. In this study, a reward function matrix is proposed for training various decision-making modes, with emphasis on decision-making styles and, further, on incentives and punishments. Additionally, we model the traffic scene as a graph to better represent the interactions between vehicles, and we adopt a graph convolutional network (GCN) to extract features of the graph structure that help connected autonomous vehicles make decisions directly. Furthermore, we combine the GCN with deep Q-learning and with multi-step double deep Q-learning to train four decision-making modes, named the graph convolutional deep Q-network (GQN) and the multi-step double graph convolutional deep Q-network (MDGQN). In simulation, the superiority of the reward function matrix is demonstrated by comparison with a baseline, and evaluation metrics are proposed to verify the performance differences among decision-making modes. Results show that, by adjusting the weight values in the reward function matrix, the trained decision-making modes can satisfy various driving requirements, including task completion rate, safety, comfort, and efficiency. Finally, the decision-making modes trained by MDGQN performed better in an uncertain highway-exit scene than those trained by GQN.
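The GCN feature extraction mentioned here propagates each vehicle's features to its neighbours over the interaction graph. A minimal sketch of one standard graph-convolution layer with self-loops and symmetric normalisation (the common GCN formulation, shown for illustration rather than as the paper's exact network):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: (N, N) adjacency matrix of the vehicle interaction graph
    H: (N, F) per-vehicle feature matrix
    W: (F, F') learnable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalisation
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)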
Affiliation(s)
- Xin Gao
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100080, China; (X.G.); (Z.L.); (F.Y.); (T.L.)
- Xueyuan Li
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100080, China; (X.G.); (Z.L.); (F.Y.); (T.L.)
- Correspondence: (X.L.); (Q.L.)
- Qi Liu
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100080, China; (X.G.); (Z.L.); (F.Y.); (T.L.)
- Correspondence: (X.L.); (Q.L.)
- Zirui Li
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100080, China; (X.G.); (Z.L.); (F.Y.); (T.L.)
- Department of Transport and Planning, Faculty of Civil Engineering and Geosciences, Delft University of Technology, Stevinweg 1, 2628 CN Delft, The Netherlands
- Fan Yang
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100080, China; (X.G.); (Z.L.); (F.Y.); (T.L.)
- Tian Luan
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100080, China; (X.G.); (Z.L.); (F.Y.); (T.L.)
8
Abstract
The overall safety of a building can be effectively evaluated through regular inspection of its indoor walls by unmanned ground vehicles (UGVs). However, when a UGV performs patrol inspections along a specified path, it is easily affected by obstacles. This paper presents a monocular-vision-based obstacle avoidance strategy for unmanned ground vehicles in indoor environments. From the environmental information captured in front of the unmanned vehicle, the obstacle orientation is determined, and the moving direction and speed of the mobile robot are set based on the neural network output and its confidence. The paper also innovatively collects indoor environment images with a camera array, arranging cameras with different orientations and focal lengths to classify the data sets automatically. When training the transfer neural network, the improved bat algorithm is used to search a small sample data set for the optimal learning-rate factor of the new layer, which is otherwise difficult to set. Simulation results show that the accuracy can reach 94.84%. Single-frame evaluation and continuous obstacle avoidance evaluation verify the effectiveness of the obstacle avoidance algorithm. Experimental results show that an unmanned wheeled robot using a bionic transfer convolutional neural network to output control commands can avoid obstacles autonomously in complex indoor scenes.
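The control step described above maps the network's classification output and its confidence to a motion command. A minimal sketch of that mapping; the class set (forward/left/right), threshold, and speeds are hypothetical values chosen for illustration, since the abstract does not specify them:

```python
def steer_command(probs, conf_thresh=0.6, speeds=(0.5, 0.3, 0.3)):
    """Map a 3-way network output (forward, left, right) plus its confidence
    to a (direction, speed) command; stop when the network is unsure."""
    directions = ("forward", "left", "right")
    i = max(range(3), key=lambda k: probs[k])
    if probs[i] < conf_thresh:      # low confidence: stop rather than guess
        return ("stop", 0.0)
    return (directions[i], speeds[i])
```

Gating on confidence is what lets the robot fall back to a safe stop when the classifier output is ambiguous, rather than committing to an uncertain direction.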