1
Prochowski L, Szwajkowski P, Ziubiński M. Research Scenarios of Autonomous Vehicles, the Sensors and Measurement Systems Used in Experiments. Sensors (Basel, Switzerland) 2022; 22:6586. [PMID: 36081043] [PMCID: PMC9460663] [DOI: 10.3390/s22176586] [Received: 08/08/2022] [Revised: 08/26/2022] [Accepted: 08/29/2022]
Abstract
Automated and autonomous vehicles are in an intensive development phase, one that requires extensive modelling and experimental research. Experimental research into these vehicles is still at an early stage: there are no standardized findings or recommendations for organizing and creating research scenarios, and scenario creation itself is difficult, mainly because of the large number of systems that must be checked simultaneously and the highly complex structure of the vehicles. A review of current publications allowed the research scenarios for vehicles and their components, as well as the measurement systems used, to be systematized. These include perception systems, automated responses to threats, and critical situations in the area of road safety. The scenarios analyzed ensure that the planned research tasks can be carried out, including the investigation of systems that enable autonomous driving. The studies use passenger cars equipped with highly sophisticated sensor systems and localization devices. Perception systems are essential equipment in these studies: they provide recognition of the environment, mainly through vision sensors (cameras) and lidars. The research tasks include autonomous driving along a detected road lane on a curvilinear track, with the effectiveness of keeping the vehicle in that lane assessed. The studies are conducted on specialized research tracks on which stationary or moving obstacles are often placed.
Affiliation(s)
- Leon Prochowski
- Institute of Vehicles and Transportation, Military University of Technology (WAT), ul. gen. Sylwestra Kaliskiego 2, 00-908 Warsaw, Poland
- Łukasiewicz Research Network—Automotive Industry Institute (Łukasiewicz-PIMOT), ul. Jagiellońska 55, 03-301 Warsaw, Poland
- Patryk Szwajkowski
- Łukasiewicz Research Network—Automotive Industry Institute (Łukasiewicz-PIMOT), ul. Jagiellońska 55, 03-301 Warsaw, Poland
- Doctoral School, Military University of Technology (WAT), ul. gen. Sylwestra Kaliskiego 2, 00-908 Warsaw, Poland
- Mateusz Ziubiński
- Institute of Vehicles and Transportation, Military University of Technology (WAT), ul. gen. Sylwestra Kaliskiego 2, 00-908 Warsaw, Poland
2
Determination of Point-to-Point 3D Routing Algorithm Using LiDAR Data for Noise Prediction. Applied System Innovation 2022. [DOI: 10.3390/asi5030058]
Abstract
Urban planning, noise propagation modelling, viewshed analysis, etc., require the determination of routes or supply lines for propagation. A point-to-point routing algorithm is required to determine the best routes for the propagation of noise from source to destination. Various optimization algorithms exist in the literature to determine the shortest route, e.g., Dijkstra and ant-colony algorithms. However, these algorithms primarily operate on 2D maps. Determining the shortest route in 3D from unlabeled data (e.g., a precise LiDAR terrain point cloud) is very challenging. Predicting noise for a place requires extracting all possible principal routes between every noise source and destination: the direct route, the route over the top of a building (or obstruction), routes around the sides of the building, and reflected routes. An algorithm is therefore needed that determines all possible propagation routes from LiDAR data. The algorithm uses a novel cutting-plane technique, customized to work with LiDAR data, to extract all principal routes between every pair of noise source and destination. Terrain parameters are determined from these routes; when integrated with a sophisticated noise model, the terrain parameters and noise data give an accurate prediction of noise for a place. The point-to-point routing algorithm was developed using LiDAR data of the RGIPT campus. All the shortest routes were tested for their spatial accuracy and efficacy in predicting noise levels. The routes are found to be accurate to within ±9 cm, and the predicted noise levels to within ±6 dBA at an instantaneous scale. The novel, accurate 3D routing algorithm can benefit other urban applications as well.
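The principal-route idea above can be illustrated with a minimal sketch: for a single thin vertical barrier between source and receiver, compute the direct (line-of-sight) path length and the over-the-top path length. This is a hedged simplification of the paper's cutting-plane extraction over full LiDAR point clouds; the 2D (x, z) geometry and single-barrier setup are illustrative assumptions, not the authors' implementation.

```python
import math

def direct_path(src, dst):
    """Straight-line (line-of-sight) propagation distance between
    source and receiver, each given as an (x, z) pair."""
    return math.dist(src, dst)

def over_top_path(src, dst, barrier_x, barrier_top):
    """Path length over a thin vertical barrier at x = barrier_x with
    top edge at height barrier_top: src -> top edge -> dst.
    A toy stand-in for the over-the-building principal route."""
    edge = (barrier_x, barrier_top)
    return math.dist(src, edge) + math.dist(edge, dst)
```

The path-length difference between the diffracted and direct routes is the quantity typically fed into barrier-attenuation noise models.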
3
Zhang Q. Target-based calibration of 3D LiDAR and binocular camera on unmanned vehicles. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-211827]
Abstract
Unmanned vehicles need a comprehensive perception of the surrounding environment while driving, and perceiving other vehicles is one of its most important elements. In the field of automotive perception, stereo vision plays a vital role in car detection: it can estimate the length, width, and height of a car, making the detection more specific. However, with existing technology it is impossible to obtain accurate detection in a complex environment by relying on a single sensor, so sensing based on multi-sensor fusion is particularly important. This paper proposes a calibration method based on matching pairs of feature points. Two rectangular planks are used: the 3D point clouds of the board edges are extracted in the stereo-vision and LiDAR coordinate systems and used to obtain the corner coordinates. The Kabsch algorithm then solves for the coordinate transformation between the paired feature points, and a clustering method removes outliers from the multiple measurements before averaging. In experiments, the method was implemented on an Nvidia Jetson TX2 embedded development board and accurate registration parameters were obtained, verifying the feasibility of the theoretical method. The method completes the calibration of the LiDAR and binocular camera, reduces the effects of noise, and accurately recovers the registration parameters between LiDAR and cameras. Compared with established methods of the same type, the proposed method has lower error and good practical value.
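The Kabsch step named in the abstract has a compact closed form: given paired 3D points (e.g. board corners seen by LiDAR and by the stereo camera), the rotation and translation minimizing the squared alignment error follow from an SVD of the cross-covariance matrix. A minimal sketch (not the paper's code; the reflection guard via the determinant sign is the standard textbook correction):

```python
import numpy as np

def kabsch(P, Q):
    """Find rotation R and translation t minimizing sum ||R @ P_i + t - Q_i||^2.
    P, Q: (N, 3) arrays of paired points in the two sensor frames."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

In the paper's pipeline this solve would be run on each set of corner correspondences, with outlier measurements removed by clustering before the results are averaged.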
Affiliation(s)
- Qiang Zhang
- School of Transportation, Southeast University, Nanjing, China
4
Pole-Like Object Extraction and Pole-Aided GNSS/IMU/LiDAR-SLAM System in Urban Area. Sensors (Basel, Switzerland) 2020; 20:7145. [PMID: 33322184] [PMCID: PMC7763439] [DOI: 10.3390/s20247145] [Received: 10/28/2020] [Revised: 12/01/2020] [Accepted: 12/09/2020]
Abstract
Vision-based sensors such as LiDAR (Light Detection and Ranging) are widely adopted in SLAM (Simultaneous Localization and Mapping) systems. In a SLAM system aided by a 16-beam LiDAR, object detection from sparse laser data is difficult, so neither grid-based nor feature-point-based solutions can avoid interference from moving objects. In urban environments, pole-like objects are common and invariant and have distinguishing characteristics, which makes them suitable auxiliary information for more robust and reliable positioning during vehicle navigation. In this work, we propose a SLAM scheme that uses a GNSS (Global Navigation Satellite System), an IMU (Inertial Measurement Unit) and a LiDAR sensor, with the positions of pole-like objects as SLAM features. The scheme combines a traditional preprocessing method with a small-scale artificial neural network to extract pole-like objects from the environment: first, a threshold-based method extracts pole-like object candidates from the point cloud; then, the neural network is trained and used for inference to confirm the pole-like objects. The resulting accuracy and recall are sufficient to provide stable observations for the subsequent SLAM process. After the poles are extracted from the LiDAR point cloud, their coordinates are added to the feature map, and the front-end nonlinear optimization uses the distance constraints corresponding to the pole coordinates to estimate the heading angle and horizontal translation. Ground feature points are used to improve the accuracy of elevation, pitch and roll. The performance of the proposed navigation system is evaluated in field experiments by checking the position drift and attitude errors during multiple two-minute simulated GNSS outages, without additional IMU motion constraints such as the NHC (nonholonomic constraint).
The experimental results show that the proposed scheme outperforms conventional feature-point, grid-based SLAM with the same back end, especially at congested crossroads where the vehicle is surrounded by slow-moving traffic and pole-like objects are plentiful. The mean horizontal position error during the two-minute GNSS outages was reduced by 38.5%, and the root mean square error by 35.3%. The proposed pole-feature-based GNSS/IMU/LiDAR SLAM system can therefore fuse condensed information from these sensors effectively to mitigate positioning and orientation errors, even in a short-term GNSS-denied environment.
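The threshold-based candidate step described above can be sketched as a simple geometric test on each segmented cluster: a pole-like object has a small horizontal footprint and a large vertical extent. This is an illustrative stand-in for the paper's preprocessing, and the threshold values are assumptions, not the authors' tuned parameters.

```python
import numpy as np

def is_pole_candidate(cluster, max_radius=0.3, min_height=1.5):
    """Threshold-based pole-candidate test over one segmented cluster
    of LiDAR points, shape (N, 3) with columns x, y, z.
    max_radius / min_height (metres) are illustrative assumptions."""
    xy = cluster[:, :2]
    center = xy.mean(axis=0)
    radius = np.linalg.norm(xy - center, axis=1).max()  # horizontal spread
    height = cluster[:, 2].max() - cluster[:, 2].min()  # vertical extent
    return bool(radius <= max_radius and height >= min_height)
```

In the full scheme, clusters passing this cheap filter would then be handed to the small neural network for confirmation.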
5
3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications. PLoS One 2020; 15:e0236947. [PMID: 32881926] [PMCID: PMC7470372] [DOI: 10.1371/journal.pone.0236947] [Received: 04/22/2020] [Accepted: 07/16/2020]
Abstract
Unmanned vehicles need a comprehensive perception of the surrounding environment while driving, and perceiving other vehicles is especially significant. In the field of automotive perception, stereo vision plays a vital role in car detection: it can estimate the length, width, and height of a car, making the detection more specific. However, with existing technology it is impossible to obtain accurate detection in a complex environment by relying on a single sensor, so sensing based on multi-sensor fusion is particularly important. With the recent development of deep learning in vision, this paper proposes and applies a mobile sensor-fusion method based on deep learning: the Mobile Deep Sensor Fusion Model (MDSFM). The model first applies a data-processing step that projects 3D data into 2D, producing a dataset suited to the model and making training more efficient. In the LiDAR branch, a revised SqueezeNet structure lightens the model and reduces the number of parameters. In the camera branch, the detection module of R-CNN is improved with a Mobile Spatial Attention Module (MSAM). The fusion stage uses a dual-view deep fusing structure. Images from the KITTI dataset were selected for validation. Compared with other recognized methods, the model shows fairly good performance. Finally, a ROS program was implemented on an experimental car, where the model performed well. The results show that MDSFM significantly improves the detection of easy cars; it increases the quality of the detected data, improves the generalization ability of the car-detection model, improves contextual relevance while preserving background information, and remains stable in driverless environments. Applied in a realistic scenario, the model proves to have good practical value.
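The 3D-to-2D projection step mentioned in the abstract is, in the usual formulation, a rigid transform into the camera frame followed by a perspective projection through the camera intrinsics. A minimal sketch (the calibration values in the test are illustrative, not KITTI's, and this is not the paper's code):

```python
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """Project 3D LiDAR points into the camera image plane.
    points: (N, 3) in the LiDAR frame; K: (3, 3) camera intrinsics;
    R, t: LiDAR-to-camera rotation (3, 3) and translation (3,).
    Returns (M, 2) pixel coordinates for points in front of the camera."""
    cam = points @ R.T + t            # transform into the camera frame
    cam = cam[cam[:, 2] > 0]          # keep only points in front of the camera
    uvw = cam @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> pixels
```

Projected points can then be rasterized or gathered per image cell to form the 2D LiDAR representation the fusion network consumes.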
6
Om K, Boukoros S, Nugaliyadde A, McGill T, Dixon M, Koutsakis P, Wong KW. Modelling email traffic workloads with RNN and LSTM models. Human-centric Computing and Information Sciences 2020. [DOI: 10.1186/s13673-020-00242-w]
Abstract
Analysis of time series data has been a challenging research subject for decades. Email traffic has recently been modelled as a time series function using a Recurrent Neural Network (RNN), and RNNs were shown to provide higher prediction accuracy than previous probabilistic models from the literature. Given the exponential rise of email workloads which need to be handled by email servers, in this paper we first present and discuss the literature on modelling email traffic. We then explain the advantages and limitations of different approaches as well as their points of agreement and disagreement. Finally, we present a comprehensive comparison between the performance of RNN and Long Short-Term Memory (LSTM) models. Our experimental results demonstrate that both approaches can achieve high accuracy over four large datasets acquired from different universities' servers, outperforming existing work, and show that the use of LSTM and RNN is very promising for modelling email traffic.
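Fitting an RNN or LSTM to a traffic series presupposes the standard supervised framing: slide a fixed-length window over the series and pair each window with the value that follows it. A generic sketch of that preprocessing (not the paper's code; the lookback length is a modelling choice):

```python
def make_windows(series, lookback):
    """Turn a time series (e.g. hourly email volumes) into
    (input window, next value) training pairs for a recurrent model.
    series: list of numbers; lookback: window length in steps."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])  # the model's input sequence
        y.append(series[i + lookback])    # the value it must predict
    return X, y
```

The resulting pairs feed directly into an RNN or LSTM regressor, one window per training example.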
7
Kim T, Jung IY, Hu YC. Automatic, location-privacy preserving dashcam video sharing using blockchain and deep learning. Human-centric Computing and Information Sciences 2020. [DOI: 10.1186/s13673-020-00244-8]
Abstract
Today, many people use dashcams, and videos recorded on dashcams are often used as evidence of accident fault. People can upload dashcam recordings of specific accident clips and share them with others who request them by providing the time or location of an accident. However, dashcam videos are erased when the dashcam memory is full, so periodic backup is necessary for video sharing, and it is inconvenient for dashcam owners to search for and transmit a requested video clip from backup videos. In addition, anonymity is not ensured, which may reduce location privacy by exposing the video owner's location. To solve these problems, we propose a video sharing scheme with accident detection using deep learning coupled with automatic transfer to the cloud; we also ensure data and operational integrity, along with location privacy, by using blockchain smart contracts. Furthermore, our proposed system uses proxy re-encryption to enhance the confidentiality of a shared video. Our experiments show that our proposed automatic video sharing system is cost-effective enough to be acceptable for deployment.
8
Zhang M, Fu R, Guo Y, Wang L, Wang P, Deng H. Cyclist detection and tracking based on multi-layer laser scanner. Human-centric Computing and Information Sciences 2020. [DOI: 10.1186/s13673-020-00225-x]
Abstract
Artificial Intelligence (AI) technology brings tremendous possibilities for autonomous vehicle applications. One of the essential tasks of an autonomous vehicle is environment perception using machine learning algorithms. Since cyclists are vulnerable road users, cyclist detection and tracking are important perception sub-tasks for autonomous vehicles seeking to avoid vehicle-cyclist collisions. In this paper, a robust method for cyclist detection and tracking is presented based on a multi-layer laser scanner, the IBEO LUX 4L, which obtains a four-layer point cloud of the local environment. First, the laser points are partitioned into individual clusters using a subarea-based DBSCAN (Density-Based Spatial Clustering of Applications with Noise) method. Then, a 37-dimensional feature set is optimized by the Relief algorithm and Principal Component Analysis (PCA) to produce two new feature sets, and Support Vector Machine (SVM) and Decision Tree (DT) classifiers are combined with the three feature sets, respectively. Finally, a Multiple Hypothesis Tracking (MHT) algorithm and a Kalman filter based on the Current Statistical (CS) model track the moving cyclists and estimate their motion state. The performance of the proposed cyclist detection and tracking method is validated in a real road environment.
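The clustering step named above can be illustrated with a minimal DBSCAN over 2D laser points. This is a generic sketch of the algorithm, not the paper's implementation: the eps/min_pts values are assumptions, and the paper additionally partitions the scan into subareas before clustering.

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=4):
    """Minimal DBSCAN over (N, 2) points.
    Returns one label per point: a cluster id >= 0, or -1 for noise."""
    n = len(points)
    labels = np.full(n, -1)
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = list(np.flatnonzero(dists[i] <= eps))
        if len(neighbors) < min_pts:
            continue                       # noise (may be claimed by a cluster later)
        labels[i] = cluster
        queue = neighbors                  # expand the cluster from this core point
        while queue:
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                nbrs_j = list(np.flatnonzero(dists[j] <= eps))
                if len(nbrs_j) >= min_pts: # j is itself a core point
                    queue.extend(nbrs_j)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels
```

Each resulting cluster would then be described by the 37-dimensional feature set and passed to the SVM/DT classifiers.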
9
Cao D, Chen Z, Gao L. An improved object detection algorithm based on multi-scaled and deformable convolutional neural networks. Human-centric Computing and Information Sciences 2020. [DOI: 10.1186/s13673-020-00219-9]
Abstract
Object detection methods aim to identify all target objects in an image and determine their categories and positions in order to achieve machine vision understanding. Numerous approaches have been proposed to solve this problem, mainly inspired by computer vision and deep learning. However, existing approaches often perform poorly on small, dense objects, and can even fail to detect objects under random geometric transformations. In this study, we compare and analyse mainstream object detection algorithms and propose a multi-scaled deformable convolutional object detection network to deal with these challenges. Our analysis demonstrates performance on par with, or even better than, state-of-the-art methods. We use deep convolutional networks to obtain multi-scaled features and add deformable convolutional structures to handle geometric transformations. We then fuse the multi-scaled features by upsampling to implement the final object recognition and region regression. Experiments show that the suggested framework improves the accuracy of detecting small target objects with geometric deformation, with significant improvements in the trade-off between accuracy and speed.
10
Park J, Wen M, Sung Y, Cho K. Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices. Sensors (Basel, Switzerland) 2019; 19:4456. [PMID: 31615164] [PMCID: PMC6833086] [DOI: 10.3390/s19204456] [Received: 09/09/2019] [Revised: 10/06/2019] [Accepted: 10/06/2019]
Abstract
Deep learning methods based on virtual environments are now widely applied in research and technology development for the smart sensors and devices of autonomous vehicles. Learning various driving environments in advance is important for handling unexpected situations that can occur in the real world and continuing to drive without accident. To train the smart sensors and devices of an autonomous vehicle well, a virtual simulator should create scenarios covering the variety of situations possible in the real world. To create reality-based scenarios, data on the real environment must be collected from a real driving vehicle, or a scenario analysis must be conducted by experts; both approaches increase the time and cost of scenario generation as more scenarios are created. This paper proposes a deep-learning-based scenario generation method that creates scenarios automatically for training autonomous vehicle smart sensors and devices. To generate various scenarios, the proposed method uses deep learning to extract multiple events from a video taken on a real road and reproduces those events in a virtual simulator. First, a Faster Region-based Convolutional Neural Network (Faster R-CNN) extracts bounding boxes for each object in a driving video. Second, the high-level event bounding boxes are calculated. Third, long-term recurrent convolutional networks (LRCN) classify the type of each extracted event. Finally, all event classification results are combined into one scenario. The generated scenarios can be used in an autonomous driving simulator to teach the multiple events that occur during real-world driving. To verify the performance of the proposed scenario generation method, experiments were conducted using real driving video data and a virtual simulator.
The deep learning model achieved an accuracy of 95.6%; furthermore, multiple high-level events were extracted, and various scenarios were generated in the virtual simulator for the smart sensors and devices of an autonomous vehicle.
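The final combination step, merging per-clip event classifications into one scenario, can be sketched with a simple record type. The `Event` fields and kind strings here are hypothetical illustrations, not the paper's data format.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One high-level event classified from a driving video
    (field names are illustrative assumptions)."""
    kind: str        # e.g. "cut-in", "pedestrian-crossing"
    start_s: float   # seconds into the clip when the event begins
    end_s: float     # seconds into the clip when the event ends

def build_scenario(events):
    """Combine classified events into a single time-ordered scenario,
    mirroring the combination step in spirit."""
    return sorted(events, key=lambda e: e.start_s)
```

A simulator can then replay the ordered events to reproduce the real-world drive.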
Affiliation(s)
- Jisun Park
- Department of Multimedia Engineering, Dongguk University-Seoul, Seoul 04620, Korea.
- Mingyun Wen
- Department of Multimedia Engineering, Dongguk University-Seoul, Seoul 04620, Korea.
- Yunsick Sung
- Department of Multimedia Engineering, Dongguk University-Seoul, Seoul 04620, Korea.
- Kyungeun Cho
- Department of Multimedia Engineering, Dongguk University-Seoul, Seoul 04620, Korea.
11
Abstract
Future Sustainability Computing (FSC) is an emerging concept that encompasses various paradigms, rules, procedures, and policies supporting the breadth and depth of Information Technology (IT) deployment for abundant life. However, advanced IT-based FSC faces several sustainability problems across different information processing and computing environments. Solutions to these problems can call upon various computational and algorithmic frameworks that employ optimization, integration, generation, and utilization techniques within cloud, mobile, and cluster computing, such as meta-heuristics, decision support systems, prediction and control, dynamical systems, and machine learning. This special issue therefore covers software and hardware design, novel architectures and frameworks, specific mathematical models, and efficient modelling and simulation for advanced IT-based FSC. We accepted eighteen articles across six IT dimensions: machine learning, blockchain, optimized resource provisioning, communication networks, IT governance, and information security. All accepted articles contribute to applications and research in FSC, such as software and information processing, cloud storage organization, smart devices, and efficient algorithmic information processing and distribution.