1
Mo Y, Vijay R, Rufus R, de Boer N, Kim J, Yu M. Enhanced Perception for Autonomous Vehicles at Obstructed Intersections: An Implementation of Vehicle to Infrastructure (V2I) Collaboration. Sensors (Basel). 2024;24(3):936. [PMID: 38339653; PMCID: PMC10856888; DOI: 10.3390/s24030936]
Abstract
At urban intersections, the sensory capabilities of autonomous vehicles (AVs) are often hindered by visual obstructions, posing significant challenges to robust and safe operation. This paper presents an implementation study focused on enhancing the safety and robustness of Connected Automated Vehicles (CAVs) in scenarios with occluded visibility at urban intersections. A novel LiDAR infrastructure system for roadside sensing is combined with Baidu Apollo's Automated Driving System (ADS) and Cohda Wireless V2X communication hardware, forming an integrated platform for roadside perception enhancement in autonomous driving. Field tests were conducted at the Singapore CETRAN (Centre of Excellence for Testing & Research of Autonomous Vehicles-NTU) autonomous vehicle test track, with the communication protocol adhering to the SAE J2735 V2X communication standard. Communication latency and packet delivery ratio were analyzed as the evaluation metrics. The test results showed that the system can help a CAV detect obstacles in advance in occluded urban scenarios.
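The two evaluation metrics named in this abstract, communication latency and packet delivery ratio (PDR), are both derived from paired send/receive records. A minimal sketch of that computation, using hypothetical message logs (the function and data below are illustrative, not taken from the paper):

```python
def v2x_metrics(sent, received):
    """Compute packet delivery ratio and mean one-way latency.

    sent:     {msg_id: send_time_s} for every transmitted message
    received: {msg_id: receive_time_s} for messages that arrived
    """
    delivered = [m for m in sent if m in received]
    pdr = len(delivered) / len(sent)  # fraction of messages that arrived
    latencies = [received[m] - sent[m] for m in delivered]
    avg_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return pdr, avg_latency

# Hypothetical log: four messages sent, message 3 lost in transit.
sent = {1: 0.000, 2: 0.100, 3: 0.200, 4: 0.300}
received = {1: 0.012, 2: 0.115, 4: 0.309}
pdr, lat = v2x_metrics(sent, received)
# pdr = 0.75, mean latency = 0.012 s
```

In a real V2X test the send timestamp travels inside the message payload, so both clocks must be synchronized (e.g. via GNSS time) for the latency figure to be meaningful.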
Affiliation(s)
- Yanghui Mo
- Energy Research Institute, Nanyang Technological University, Singapore 637141, Singapore; (Y.M.); (R.V.); (R.R.)
- Roshan Vijay
- Energy Research Institute, Nanyang Technological University, Singapore 637141, Singapore; (Y.M.); (R.V.); (R.R.)
- Raphael Rufus
- Energy Research Institute, Nanyang Technological University, Singapore 637141, Singapore; (Y.M.); (R.V.); (R.R.)
- Niels de Boer
- Energy Research Institute, Nanyang Technological University, Singapore 637141, Singapore; (Y.M.); (R.V.); (R.R.)
- Jungdae Kim
- Autonomous a2z, Anyang-si 14067, Republic of Korea; (J.K.); (M.Y.)
- Minsang Yu
- Autonomous a2z, Anyang-si 14067, Republic of Korea; (J.K.); (M.Y.)
2
Malik S, Khan MJ, Khan MA, El-Sayed H. Collaborative Perception-The Missing Piece in Realizing Fully Autonomous Driving. Sensors (Basel). 2023;23(18):7854. [PMID: 37765911; PMCID: PMC10535382; DOI: 10.3390/s23187854]
Abstract
Environment perception plays a crucial role in enabling collaborative driving automation, widely considered a ground-breaking solution to the safety, mobility, and sustainability challenges of contemporary transportation systems. Although computer vision for object perception is undergoing an extraordinary evolution, the constrained receptive fields and inherent physical occlusion of single-vehicle systems make it difficult for state-of-the-art perception techniques to cope with complex real-world traffic settings. Collaborative perception (CP) based on geographically separated perception nodes was developed to break this perception bottleneck for driving automation. CP leverages vehicle-to-vehicle and vehicle-to-infrastructure communication to let vehicles and infrastructure combine and share information, comprehending the surrounding environment beyond the line of sight and field of view to enhance perception accuracy, lower latency, and remove perception blind spots. In this article, we highlight the need for an evolved version of collaborative perception that addresses the challenges hindering the realization of level 5 AD use cases, by comprehensively studying the transition from classical perception to collaborative perception. In particular, we discuss and review perception creation at two different levels: vehicle and infrastructure. Furthermore, we study the communication technologies and three collaborative perception message-sharing models, comparing them with respect to the trade-off between the accuracy of the transmitted data and the communication bandwidth used for transmission, and the challenges therein. Finally, we discuss a range of crucial challenges and future directions of collaborative perception that need to be addressed before a higher level of autonomy hits the roads.
Affiliation(s)
- Sumbal Malik
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Muhammad Jalal Khan
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Manzoor Ahmed Khan
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Hesham El-Sayed
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
3
Huang L, Huang W. RD-YOLO: An Effective and Efficient Object Detector for Roadside Perception System. Sensors (Basel). 2022;22(21):8097. [PMID: 36365793; PMCID: PMC9658208; DOI: 10.3390/s22218097]
Abstract
In recent years, intelligent driving technology based on vehicle-road cooperation has gradually become a research hotspot in the field of intelligent transportation. There are many studies on vehicle perception, but far fewer on roadside perception. Because sensors are installed at different heights, the scale of roadside objects varies drastically, which burdens network optimization. Moreover, complex road environments contain a large amount of overlap and occlusion, making object distinction a great challenge. To solve these two problems, we propose RD-YOLO. Based on YOLOv5s, we reconstructed the feature fusion layer to increase effective feature extraction and improve the detection of small targets. We then replaced the original pyramid network with a generalized feature pyramid network (GFPN) to improve the network's adaptability to features at different scales. We also integrated a coordinate attention (CA) mechanism to find attention regions in scenarios with dense objects. Finally, we replaced the original loss with Focal-EIOU loss to improve the speed of bounding box regression and the positioning accuracy of the anchor box. Compared to YOLOv5s, RD-YOLO improves the mean average precision (mAP) by 5.5% on the Rope3D dataset and 2.9% on the UA-DETRAC dataset. Meanwhile, by modifying the feature fusion layer, the weight of RD-YOLO is decreased by 55.9% while the detection speed is almost unchanged. Moreover, the proposed algorithm is capable of real-time detection at faster than 71.9 frames/s (FPS) and achieves higher accuracy than previous approaches with a similar FPS.
4
Towards Cooperative Perception Services for ITS: Digital Twin in the Automotive Edge Cloud. Energies. 2021;14(18):5930. [DOI: 10.3390/en14185930]
Abstract
We demonstrate a working functional prototype of a cooperative perception system that maintains a real-time digital twin of the traffic environment, providing a more accurate and more reliable model than any of the participant subsystems (in this case, smart vehicles and infrastructure stations) would manage individually. The importance of such technology is that it can facilitate a spectrum of new derivative services, including cloud-assisted and cloud-controlled ADAS functions, dynamic map generation with analytics for traffic control and road infrastructure monitoring, a digital framework for operating vehicle testing grounds, logistics facilities, etc. In this paper, we constrain our discussion to the viability of the core concept and implement a system that provides a single service: the live visualization of our digital twin in a 3D simulation, which instantly and reliably matches the state of the real-world environment and showcases the advantages of real-time fusion of sensory data from various traffic participants. We envision this prototype system as part of a larger network of local information processing and integration nodes, i.e., the logically centralized digital twin is maintained in a physically distributed edge cloud.
5
Automated Driving with Cooperative Perception Based on CVFH and Millimeter-Wave V2I Communications for Safe and Efficient Passing through Intersections. Sensors (Basel). 2021;21(17):5854. [PMID: 34502745; PMCID: PMC8433959; DOI: 10.3390/s21175854]
Abstract
The development of automated driving is actively progressing, and connected cars are also under development. Connected-car technology links vehicles to networks so that connected vehicles can enhance their services. Safety services are among the main services expected in a connected car society. Cooperative perception belongs to the safety services and improves safety by visualizing blind spots, which is achieved by sharing sensor data via wireless communications. The number of visualized blind spots therefore depends strongly on the performance of the wireless communications. In this paper, we analyze the sensor data rate that must be shared for cooperative perception in order to realize safe and reliable automated driving in an intersection scenario. To adopt realistic assumptions, the required sensor data rate was calculated from the combination of recognition and crossing decisions of an automated driving vehicle. In this calculation, CVFH was used to derive tight requirements, and the minimum required braking aims to alleviate traffic congestion around the intersection. Finally, we compare the required sensor data rate with the outage data rate realized by conventional and millimeter-wave communications, and show that millimeter-wave communications can support safe crossing at a realistic velocity.
6
Gu B, Liu J, Xiong H, Li T, Pan Y. ECPC-ICP: A 6D Vehicle Pose Estimation Method by Fusing the Roadside Lidar Point Cloud and Road Feature. Sensors (Basel). 2021;21(10):3489. [PMID: 34067737; PMCID: PMC8156169; DOI: 10.3390/s21103489]
Abstract
In vehicle pose estimation based on roadside Lidar for cooperative perception, the measurement distance, angle, and laser resolution directly affect the quality of the target point cloud. For incomplete and sparse point clouds, current methods are either less accurate, because correspondences are solved with local descriptors, or not robust enough, owing to the reduction of effective boundary points. In response to these weaknesses, this paper proposes a registration algorithm, Environment Constraint Principal Component-Iterative Closest Point (ECPC-ICP), which integrates road information constraints. The road normal feature is extracted, and the principal component of the vehicle point cloud matrix under the road normal constraint is calculated as the initial pose result. An accurate 6D pose is then obtained through point-to-point ICP registration. Based on the measurement characteristics of roadside Lidars, this paper defines a point cloud sparseness description, and the existing algorithms were tested on point cloud data with different sparseness. The simulated experimental results showed that the positioning MAE of ECPC-ICP was about 0.5% of the vehicle scale, the orientation MAE was about 0.26°, and the average registration success rate was 95.5%, demonstrating improved accuracy and robustness compared with current methods. In the real test environment, the positioning MAE was about 2.6% of the vehicle scale, and the average time cost was 53.19 ms, proving the accuracy and effectiveness of ECPC-ICP in practical applications.
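The initialization step this abstract describes (the principal component of the vehicle cloud under a road-normal constraint) can be illustrated in 2D: once the road normal is aligned with the z axis, the points are flattened onto the road plane and the initial heading is the orientation of the dominant eigenvector of the planar covariance. A minimal pure-Python sketch of that idea, not the paper's implementation, assuming the road-normal alignment has already been applied:

```python
import math

def heading_from_points(points):
    """Initial yaw and centroid of a road-plane-projected vehicle cloud.

    points: list of (x, y) coordinates already projected onto the road
    plane (i.e. the extracted road normal serves as the z axis).
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Planar covariance of the centered points.
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form orientation of the principal axis of a 2x2 covariance.
    yaw = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
    return yaw, (mx, my)

# Synthetic elongated cluster rotated by 30 degrees around the origin.
ang = math.radians(30.0)
pts = [(t * math.cos(ang) - s * math.sin(ang),
        t * math.sin(ang) + s * math.cos(ang))
       for t in (-2.0, -1.0, 0.0, 1.0, 2.0) for s in (-0.3, 0.3)]
yaw, centroid = heading_from_points(pts)
# yaw ≈ 0.5236 rad (30°), centroid ≈ (0, 0)
```

This yields only the coarse pose (the long axis of the car, up to a 180° ambiguity); in the method described above it is then refined by point-to-point ICP against a model cloud.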
Affiliation(s)
- Bo Gu
- School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510006, China; (B.G.); (J.L.)
- Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Jianxun Liu
- School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510006, China; (B.G.); (J.L.)
- Huiyuan Xiong
- School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510006, China; (B.G.); (J.L.)
- Correspondence:
- Tongtong Li
- China Nuclear Power Engineering Co., Ltd., Shenzhen 518124, China; (T.L.); (Y.P.)
- Yuelong Pan
- China Nuclear Power Engineering Co., Ltd., Shenzhen 518124, China; (T.L.); (Y.P.)
7
Kasture P, Nishimura H. Analysis of Cooperative Perception in Ant Traffic and Its Effects on Transportation System by Using a Congestion-Free Ant-Trail Model. Sensors (Basel). 2021;21(7):2393. [PMID: 33808325; PMCID: PMC8038084; DOI: 10.3390/s21072393]
Abstract
We investigated agent-based model simulations that mimic an ant transportation system in order to analyze cooperative perception and communication in that system. On a trail, ants use cooperative perception through chemotaxis to maintain a constant average velocity irrespective of their density, thereby avoiding traffic jams. Using model simulations and approximate mathematical representations, we analyzed various aspects of the communication system and their effects on cooperative perception in ant traffic. Based on this analysis, insights into the cooperative perception of ants, which facilitates decentralized self-organization, are presented. We also present values of the communication parameters in ant traffic at which the system conveys traffic conditions to individual ants, which the ants use to self-organize and avoid traffic jams. The mathematical analysis verifies our findings and provides a better understanding of the model parameters, leading to model improvements.
8
Shan M, Narula K, Wong YF, Worrall S, Khan M, Alexander P, Nebot E. Demonstrations of Cooperative Perception: Safety and Robustness in Connected and Automated Vehicle Operations. Sensors (Basel). 2020;21(1):200. [PMID: 33396804; PMCID: PMC7794841; DOI: 10.3390/s21010200]
Abstract
Cooperative perception, or collective perception (CP), is an emerging and promising technology for intelligent transportation systems (ITS). It enables an ITS station (ITS-S) to share its local perception information with others by means of vehicle-to-X (V2X) communication, thereby achieving improved efficiency and safety in road transportation. In this paper, we present our recent progress on the development of a connected and automated vehicle (CAV) and an intelligent roadside unit (IRSU). The main contribution of the work lies in investigating and demonstrating the use of the CP service within intelligent infrastructure to improve awareness of vulnerable road users (VRU) and thus safety for CAVs in various traffic scenarios. We demonstrate in experiments that a connected vehicle (CV) can “see” a pedestrian around corners. More importantly, we demonstrate how CAVs can autonomously and safely interact with walking and running pedestrians, relying only on the CP information from the IRSU received through vehicle-to-infrastructure (V2I) communication. This is one of the first demonstrations of urban vehicle automation using only CP information. We also address the handling of collective perception messages (CPMs) received from the IRSU, passing them through a pipeline of CP information coordinate transformation with uncertainty, multiple road user tracking, and eventually path planning/decision-making within the CAV. The experimental results were obtained with a manually driven CV, a fully autonomous CAV, and an IRSU retrofitted with vision and laser sensors and a road user tracking system.
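The coordinate-transformation step in the pipeline described above (re-expressing a road user detected by the IRSU in the CAV's own frame) reduces, in the planar case, to composing the sender's pose with the inverse of the receiver's pose. A minimal 2D sketch with hypothetical poses, not the authors' implementation, and omitting the uncertainty propagation the paper also handles:

```python
import math

def to_vehicle_frame(obj_xy, irsu_pose, vehicle_pose):
    """Re-express an IRSU detection in the vehicle frame (2D).

    obj_xy:       (x, y) of the detected road user in the IRSU frame.
    irsu_pose:    (x, y, heading) of the IRSU in a shared global frame.
    vehicle_pose: (x, y, heading) of the vehicle in the same frame.
    """
    # IRSU frame -> global frame (rotate by IRSU heading, then translate).
    ix, iy, ih = irsu_pose
    gx = ix + obj_xy[0] * math.cos(ih) - obj_xy[1] * math.sin(ih)
    gy = iy + obj_xy[0] * math.sin(ih) + obj_xy[1] * math.cos(ih)
    # Global frame -> vehicle frame (inverse of the vehicle pose).
    vx, vy, vh = vehicle_pose
    dx, dy = gx - vx, gy - vy
    return (dx * math.cos(vh) + dy * math.sin(vh),
            -dx * math.sin(vh) + dy * math.cos(vh))

# Pedestrian 5 m ahead of an IRSU facing east; vehicle 10 m west, also facing east.
ped = to_vehicle_frame((5.0, 0.0), (0.0, 0.0, 0.0), (-10.0, 0.0, 0.0))
# ped ≈ (15.0, 0.0): the pedestrian appears 15 m ahead of the vehicle
```

In the full pipeline this transform is applied together with its uncertainty (both poses carry covariance) before the result is fed to the road user tracker.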
Affiliation(s)
- Mao Shan
- Australian Centre for Field Robotics, The University of Sydney, Sydney, NSW 2006, Australia; (K.N.); (S.W.); (E.N.)
- Correspondence: ; Tel.: +61-2-8627-4496
- Karan Narula
- Australian Centre for Field Robotics, The University of Sydney, Sydney, NSW 2006, Australia; (K.N.); (S.W.); (E.N.)
- Yung Fei Wong
- Cohda Wireless, 27 Greenhill Road, Wayville, SA 5034, Australia; (Y.F.W.); (M.K.); (P.A.)
- Stewart Worrall
- Australian Centre for Field Robotics, The University of Sydney, Sydney, NSW 2006, Australia; (K.N.); (S.W.); (E.N.)
- Malik Khan
- Cohda Wireless, 27 Greenhill Road, Wayville, SA 5034, Australia; (Y.F.W.); (M.K.); (P.A.)
- Paul Alexander
- Cohda Wireless, 27 Greenhill Road, Wayville, SA 5034, Australia; (Y.F.W.); (M.K.); (P.A.)
- Eduardo Nebot
- Australian Centre for Field Robotics, The University of Sydney, Sydney, NSW 2006, Australia; (K.N.); (S.W.); (E.N.)