1
Matos F, Bernardino J, Durães J, Cunha J. A Survey on Sensor Failures in Autonomous Vehicles: Challenges and Solutions. Sensors (Basel, Switzerland) 2024; 24:5108. [PMID: 39204805; PMCID: PMC11360603; DOI: 10.3390/s24165108]
Abstract
Autonomous vehicles (AVs) rely heavily on sensors to perceive their surrounding environment and then make decisions and act on them. However, these sensors have weaknesses and are prone to failure, resulting in decision errors by vehicle controllers that pose significant challenges to safe operation. To mitigate sensor failures, it is necessary to understand how they occur and how they affect the vehicle's behavior so that fault-tolerant and fault-masking strategies can be applied. This survey covers 108 publications and presents an overview of the sensors used in AVs today, categorizes the sensor failures that can occur, such as radar interference, detection ambiguities, or camera image failures, and provides an overview of mitigation strategies such as sensor fusion, redundancy, and sensor calibration. It also provides insights into research areas critical to improving safety in the autonomous vehicle industry, so that new or more in-depth research may emerge.
Affiliation(s)
- João Durães
- Polytechnic University of Coimbra, Rua da Misericórdia, Lagar dos Cortiços, S. Martinho do Bispo, 3045-093 Coimbra, Portugal; (F.M.); (J.B.); (J.C.)
2
Fayyad J, Alijani S, Najjaran H. Empirical validation of Conformal Prediction for trustworthy skin lesions classification. Computer Methods and Programs in Biomedicine 2024; 253:108231. [PMID: 38820714; DOI: 10.1016/j.cmpb.2024.108231]
Abstract
BACKGROUND AND OBJECTIVE Uncertainty quantification is a pivotal field that contributes to realizing reliable and robust systems. It becomes instrumental in fortifying safe decisions by providing complementary information, particularly within high-risk applications. Existing studies have explored various methods that often operate under specific assumptions or necessitate substantial modifications to the network architecture to effectively account for uncertainties. The objective of this paper is to study Conformal Prediction, an emerging distribution-free uncertainty quantification technique, and provide a comprehensive understanding of the advantages and limitations inherent in various methods within the medical imaging field. METHODS In this study, we developed Conformal Prediction, Monte Carlo Dropout, and Evidential Deep Learning approaches to assess uncertainty quantification in deep neural networks. The effectiveness of these methods is evaluated using three public medical imaging datasets focused on detecting pigmented skin lesions and blood cell types. RESULTS The experimental results demonstrate a significant enhancement in uncertainty quantification with the utilization of the Conformal Prediction method, surpassing the performance of the other two methods. Furthermore, the results present insights into the effectiveness of each uncertainty method in handling Out-of-Distribution samples from domain-shifted datasets. Our code is available at: github.com/jfayyad/ConformalDx. CONCLUSIONS Our conclusion highlights a robust and consistent performance of conformal prediction across diverse testing conditions. This positions it as the preferred choice for decision-making in safety-critical applications.
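Split conformal prediction, the approach this entry finds most robust, needs only a held-out calibration set and no changes to the network. The sketch below uses the standard 1 - p(true class) nonconformity score; the toy calibration data and the test probability vector are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def conformal_classifier(cal_probs, cal_labels, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n, k) softmax outputs on a held-out calibration set
    cal_labels: (n,) integer labels
    Returns a function mapping a new softmax vector to a prediction set
    that contains the true label with probability >= 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

    def predict_set(probs):
        return np.where(1.0 - probs <= q)[0]  # classes kept in the set

    return predict_set

# Toy usage: a confident calibration set yields small prediction sets.
cal_probs = np.array([[0.9, 0.05, 0.05]] * 50 + [[0.05, 0.9, 0.05]] * 50)
cal_labels = np.array([0] * 50 + [1] * 50)
predict_set = conformal_classifier(cal_probs, cal_labels, alpha=0.1)
print(predict_set(np.array([0.95, 0.03, 0.02])))  # → [0]
```

Uncertain inputs simply produce larger sets (or empty ones), which is the signal a safety-critical pipeline can act on.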
Affiliation(s)
- Jamil Fayyad
- University of Victoria, 800 Finnerty Road, Victoria, V8P 5C2, BC, Canada; Cognia AI, 2031 Store street, Victoria, V8T 5L9, BC, Canada.
- Shadi Alijani
- University of Victoria, 800 Finnerty Road, Victoria, V8P 5C2, BC, Canada; Cognia AI, 2031 Store street, Victoria, V8T 5L9, BC, Canada.
- Homayoun Najjaran
- University of Victoria, 800 Finnerty Road, Victoria, V8P 5C2, BC, Canada; Cognia AI, 2031 Store street, Victoria, V8T 5L9, BC, Canada.
3
Ogunrinde I, Bernadin S. Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Sematic Labels and Fused Appearance Feature Network. Sensors (Basel, Switzerland) 2024; 24:4692. [PMID: 39066088; PMCID: PMC11280926; DOI: 10.3390/s24144692]
Abstract
The presence of fog in the background can prevent small and distant objects from being detected, let alone tracked. Under safety-critical conditions, multi-object tracking models require faster tracking speed while maintaining high object-tracking accuracy. The original DeepSORT algorithm used YOLOv4 for the detection phase and a simple neural network for the deep appearance descriptor. Consequently, the feature map generated loses relevant details about the track being matched with a given detection in fog. Targets with a high degree of appearance similarity on the detection frame are more likely to be mismatched, resulting in identity switches or track failures in heavy fog. We propose an improved multi-object tracking model based on the DeepSORT algorithm to improve tracking accuracy and speed under foggy weather conditions. First, we employed our camera-radar fusion network (CR-YOLOnet) in the detection phase for faster and more accurate object detection. We proposed an appearance feature network to replace the basic convolutional neural network. We incorporated GhostNet to take the place of the traditional convolutional layers to generate more features and reduce computational complexities and costs. We adopted a segmentation module and fed the semantic labels of the corresponding input frame to add rich semantic information to the low-level appearance feature maps. Our proposed method outperformed YOLOv5 + DeepSORT with a 35.15% increase in multi-object tracking accuracy, a 32.65% increase in multi-object tracking precision, a 37.56% increase in speed, and a 46.81% decrease in identity switches.
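The appearance-matching step that the improved descriptor feeds can be sketched as follows. The embeddings, the greedy matcher, and the 0.4 gate are illustrative stand-ins; DeepSORT itself uses a learned re-identification network and the Hungarian algorithm for assignment.

```python
import numpy as np

def cosine_cost_matrix(track_feats, det_feats):
    """Cosine-distance cost matrix between track and detection embeddings."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - t @ d.T  # 0 = identical appearance, 2 = opposite

def greedy_match(cost, max_dist=0.4):
    """Greedy gated association (DeepSORT uses the Hungarian algorithm)."""
    matches, used_t, used_d = [], set(), set()
    for t, d in sorted(np.ndindex(cost.shape), key=lambda td: cost[td]):
        if t not in used_t and d not in used_d and cost[t, d] < max_dist:
            matches.append((t, d))
            used_t.add(t); used_d.add(d)
    return matches

tracks = np.array([[1.0, 0.0], [0.0, 1.0]])   # stored track embeddings
dets = np.array([[0.9, 0.1], [0.1, 0.9]])     # new detection embeddings
print(greedy_match(cosine_cost_matrix(tracks, dets)))  # → [(0, 0), (1, 1)]
```

A richer descriptor (as the paper proposes) lowers the cost between true pairs, so fewer fog-degraded detections fall outside the gate and cause identity switches.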
Affiliation(s)
- Isaac Ogunrinde
- Department of Electrical and Computer Engineering, FAMU-FSU College of Engineering, Tallahassee, FL 32310, USA
4
Chiominto L, Natale E, D’Emilia G, Grieco SA, Prato A, Facello A, Schiavi A. Responsiveness and Precision of Digital IMUs under Linear and Curvilinear Motion Conditions for Local Navigation and Positioning in Advanced Smart Mobility. Micromachines 2024; 15:727. [PMID: 38930697; PMCID: PMC11205907; DOI: 10.3390/mi15060727]
Abstract
Sensors based on MEMS technology, in particular Inertial Measurement Units (IMUs), when installed on vehicles, provide a real-time full estimation of vehicles' state vector (e.g., position, velocity, yaw angle, angular rate, acceleration), which is required for the planning and control of cars' trajectories, as well as managing the in-car local navigation and positioning tasks. Moreover, data provided by the IMUs, integrated with the data of multiple inputs from other sensing systems (such as Lidar, cameras, and GPS) within the vehicle, and with the surrounding information exchanged in real time (vehicle to vehicle, vehicle to infrastructure, or vehicle to other entities), can be exploited to actualize the full implementation of "smart mobility" on a large scale. On the other hand, "smart mobility" (which is expected to improve road safety, reduce traffic congestion and environmental burden, and enhance the sustainability of mobility as a whole), to be safe and functional on a large scale, should be supported by highly accurate and trustworthy technologies based on precise and reliable sensors and systems. It is known that the accuracy and precision of data supplied by appropriately in-lab-calibrated IMUs (with respect to the primary or secondary standard in order to provide traceability to the International System of Units) allow guaranteeing high quality, reliable information managed by processing systems, since they are reproducible, repeatable, and traceable. In this work, the effective responsiveness and the related precision of digital IMUs, under sinusoidal linear and curvilinear motion conditions at 5 Hz, 10 Hz, and 20 Hz, are investigated on the basis of metrological approaches in laboratory standard conditions only. As a first step, in-lab calibrations allow one to reduce the variables of uncontrolled boundary conditions (e.g., occurring in vehicles in on-site tests) in order to identify the IMUs' sensitivity in a stable and reproducible environment. 
For this purpose, a new calibration system based on an oscillating rotating table was developed to reproduce the dynamic conditions of use in the field, and the results are compared with calibration data obtained on linear calibration benches.
Affiliation(s)
- Luciano Chiominto
- Department of Industrial and Information Engineering and Economics, University of L’Aquila, 67100 L’Aquila, Italy; (L.C.); (E.N.); (G.D.)
- Emanuela Natale
- Department of Industrial and Information Engineering and Economics, University of L’Aquila, 67100 L’Aquila, Italy; (L.C.); (E.N.); (G.D.)
- Giulio D’Emilia
- Department of Industrial and Information Engineering and Economics, University of L’Aquila, 67100 L’Aquila, Italy; (L.C.); (E.N.); (G.D.)
- Sante Alessandro Grieco
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, 41125 Modena, Italy
- Division of Applied Metrology and Engineering INRiM—National Institute of Metrological Research, 10135 Turin, Italy; (A.F.); (A.S.)
- Andrea Prato
- Division of Applied Metrology and Engineering INRiM—National Institute of Metrological Research, 10135 Turin, Italy; (A.F.); (A.S.)
- Alessio Facello
- Division of Applied Metrology and Engineering INRiM—National Institute of Metrological Research, 10135 Turin, Italy; (A.F.); (A.S.)
- Alessandro Schiavi
- Division of Applied Metrology and Engineering INRiM—National Institute of Metrological Research, 10135 Turin, Italy; (A.F.); (A.S.)
5
Rosero LA, Gomes IP, da Silva JAR, Przewodowski CA, Wolf DF, Osório FS. Integrating Modular Pipelines with End-to-End Learning: A Hybrid Approach for Robust and Reliable Autonomous Driving Systems. Sensors (Basel, Switzerland) 2024; 24:2097. [PMID: 38610309; PMCID: PMC11014112; DOI: 10.3390/s24072097]
Abstract
Autonomous driving navigation relies on diverse approaches, each with advantages and limitations depending on various factors. With HD maps, modular systems excel, while end-to-end methods dominate mapless scenarios. However, few leverage the strengths of both. This paper proposes a hybrid architecture that seamlessly integrates modular perception and control modules with data-driven path planning. This design leverages the strengths of both approaches, enabling a clear understanding and debugging of individual components while simultaneously harnessing the learning power of end-to-end methods. Our proposed architecture achieved first and second place in the 2023 CARLA Autonomous Driving Challenge's SENSORS and MAP tracks, respectively. These results demonstrate the architecture's effectiveness in both map-based and mapless navigation. We achieved a driving score of 41.56 and the highest route completion of 86.03 in the MAP track on the CARLA Challenge leaderboard 1, and driving scores of 35.36 and 1.23 in the CARLA Challenge SENSORS track, with route completions of 85.01 and 9.55 on leaderboards 1 and 2, respectively. The leaderboard 2 results raised the hybrid architecture to first position, winning the 2023 edition of the CARLA Autonomous Driving Competition.
Affiliation(s)
- Luis Alberto Rosero
- Institute of Mathematics and Computer Science, University of São Paulo, Ave. Trabalhador São-Carlense, 400, São Carlos 13564-002, SP, Brazil; (I.P.G.); (J.A.R.d.S.); (C.A.P.); (D.F.W.)
- Fernando Santos Osório
- Institute of Mathematics and Computer Science, University of São Paulo, Ave. Trabalhador São-Carlense, 400, São Carlos 13564-002, SP, Brazil; (I.P.G.); (J.A.R.d.S.); (C.A.P.); (D.F.W.)
6
Dong W, Lu C, Bao L, Li W, Shin K, Han C. A Planar Multi-Inertial Navigation Strategy for Autonomous Systems for Signal-Variable Environments. Sensors (Basel, Switzerland) 2024; 24:1064. [PMID: 38400221; PMCID: PMC10893360; DOI: 10.3390/s24041064]
Abstract
The challenge of precise dynamic positioning for mobile robots is addressed through the development of a multi-inertial navigation system (M-INS). The cumulative sensor errors inherent in traditional single inertial navigation systems (INSs) under dynamic conditions are mitigated by a novel algorithm that integrates multiple INS units in a predefined planar configuration, utilizing the fixed distances between the units as invariant constraints. An extended Kalman filter (EKF) is employed to significantly enhance the positioning accuracy. Dynamic experimental validation of the proposed 3INS EKF algorithm reveals a marked improvement over individual INS units, with positioning errors reduced and stability increased, resulting in an average accuracy enhancement rate exceeding 60%. This advancement is particularly critical for mobile robot applications that demand high precision, such as autonomous driving and disaster search and rescue. The findings not only demonstrate the potential of the M-INS to improve dynamic positioning accuracy but also provide a new research direction for future advancements in robotic navigation systems.
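The role of the fixed inter-unit distances can be illustrated geometrically: if each unit's mounting offset in the body frame is known, each unit's position fix can be back-projected to the body origin, and averaging the back-projected origins cancels independent per-unit errors. The offsets and the plain average below are illustrative assumptions, a much-simplified stand-in for the paper's 3INS EKF.

```python
import numpy as np

# Assumed mounting offsets of three INS units in the body frame (meters).
OFFSETS = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])

def fuse_positions(ins_positions, heading):
    """Least-squares fusion of per-unit planar position fixes.

    Each unit i should report origin + R(heading) @ OFFSETS[i]; averaging
    the back-projected origins exploits the rigid geometry the same way
    the fixed-distance constraints do in the paper's filter.
    """
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])          # planar rotation matrix
    origins = ins_positions - OFFSETS @ R.T  # back-project each fix
    return origins.mean(axis=0)

# Noise-free check: all three fixes back-project to the same origin.
th = 0.3
c, s = np.cos(th), np.sin(th)
meas = np.array([10.0, 20.0]) + OFFSETS @ np.array([[c, -s], [s, c]]).T
print(fuse_positions(meas, th))  # → [10. 20.]
```

With independent zero-mean noise on each unit, the averaged origin has roughly 1/3 the error variance of a single unit, which is the intuition behind the reported accuracy gain.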
Affiliation(s)
- Wenbin Dong
- Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea; (W.D.); (L.B.); (W.L.); (C.H.)
- School of Mechanical Engineering, Anhui Science and Technology University, Chuzhou 233100, China
- Cheng Lu
- School of Mechanical Engineering, Anhui Science and Technology University, Chuzhou 233100, China
- Le Bao
- Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea; (W.D.); (L.B.); (W.L.); (C.H.)
- Wenqi Li
- Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea; (W.D.); (L.B.); (W.L.); (C.H.)
- Kyoosik Shin
- Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea; (W.D.); (L.B.); (W.L.); (C.H.)
- Changsoo Han
- Department of Mechatronics Engineering, Hanyang University, Ansan 15588, Republic of Korea; (W.D.); (L.B.); (W.L.); (C.H.)
7
Skrickij V, Kojis P, Šabanovič E, Shyrokau B, Ivanov V. Review of Integrated Chassis Control Techniques for Automated Ground Vehicles. Sensors (Basel, Switzerland) 2024; 24:600. [PMID: 38257691; PMCID: PMC10819876; DOI: 10.3390/s24020600]
Abstract
Integrated chassis control systems represent a significant advancement in the dynamics of ground vehicles, aimed at enhancing overall performance, comfort, handling, and stability. As vehicles transition from internal combustion to electric platforms, integrated chassis control systems have evolved to meet the demands of electrification and automation. This paper analyses the overall control structure of automated vehicles with integrated chassis control systems. Integration of longitudinal, lateral, and vertical systems presents complexities due to the overlapping control regions of various subsystems. The presented methodology includes a comprehensive examination of state-of-the-art technologies, focusing on algorithms to manage control actions and prevent interference between subsystems. The results underscore the importance of control allocation to exploit the additional degrees of freedom offered by over-actuated systems. This paper systematically overviews the various control methods applied in integrated chassis control and path tracking. This includes a detailed examination of perception and decision-making, parameter estimation techniques, reference generation strategies, and the hierarchy of controllers, encompassing high-level, middle-level, and low-level control components. By offering this systematic overview, this paper aims to facilitate a deeper understanding of the diverse control methods employed in automated driving with integrated chassis control, providing insights into their applications, strengths, and limitations.
Affiliation(s)
- Viktor Skrickij
- Transport and Logistics Competence Centre, Transport Engineering Faculty, Vilnius Gediminas Technical University, 10223 Vilnius, Lithuania
- Paulius Kojis
- Department of Mobile Machinery and Railway Transport, Transport Engineering Faculty, Vilnius Gediminas Technical University, 10105 Vilnius, Lithuania
- Eldar Šabanovič
- Transport and Logistics Competence Centre, Transport Engineering Faculty, Vilnius Gediminas Technical University, 10223 Vilnius, Lithuania
- Barys Shyrokau
- Department of Cognitive Robotics, Delft University of Technology, 2628 CD Delft, The Netherlands
- Valentin Ivanov
- Smart Vehicle Systems—Working Group, Technische Universität Ilmenau, Ehrenbergstr. 15, 98693 Ilmenau, Germany
8
Kang H, Lee C, Kang SJ. A smart device for non-invasive ADL estimation through multi-environmental sensor fusion. Sci Rep 2023; 13:17246. [PMID: 37821665; PMCID: PMC10567750; DOI: 10.1038/s41598-023-44436-5]
Abstract
This research paper introduces the Smart Plug Hub (SPH), a non-invasive system designed to accurately estimate a patient's Activities of Daily Living (ADL). Traditional methods for measuring ADL include interviews, remote video systems, and wearable devices that track behavior. However, these approaches have limitations, such as patient memory dependency, privacy violations, and careless device management. To address these limitations, the SPH utilizes sensor fusion to analyze time-series environmental signals and accurately estimate a patient's ADL. We have effectively optimized the utilization of computing resources through the implementation of "device collaboration" in the SPH, which receives event data and segments portions of the time-series environmental signal. By segmenting the data into smaller segments, we extracted an analyzable dataset, which was processed by an edge device, the SPH. We have conducted several experiments with the SPH, and our research has resulted in a significant 75% accuracy in the classification of patients' kitchen ADLs and an 85% accuracy in the classification of toilet ADLs. These activities include actions such as eating activities in the kitchen and typical activities performed in the toilet. These findings have substantial implications for the progress of healthcare and patient care, highlighting the potential uses of the SPH technology in the monitoring and improvement of daily living activities.
Affiliation(s)
- Homin Kang
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea
- Cheolhwan Lee
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea
- Soon Ju Kang
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea
9
Lazar RG, Pauca O, Maxim A, Caruntu CF. Control Architecture for Connected Vehicle Platoons: From Sensor Data to Controller Design Using Vehicle-to-Everything Communication. Sensors (Basel, Switzerland) 2023; 23:7576. [PMID: 37688028; PMCID: PMC10490767; DOI: 10.3390/s23177576]
Abstract
A suitable control architecture for connected vehicle platoons may be seen as a promising solution for today's traffic problems by improving road safety and traffic flow, reducing emissions and fuel consumption, and increasing driver comfort. This paper provides a comprehensive overview of the defining levels of a general control architecture for connected vehicle platoons, illustrating the options available in terms of sensor technologies, in-vehicle networks, vehicular communication, and control solutions. Moreover, starting from the proposed control architecture, a solution that implements a Cooperative Adaptive Cruise Control (CACC) functionality for a vehicle platoon is designed. Two control algorithms, based on the distributed model-based predictive control (DMPC) strategy and the feedback gain matrix method, are also proposed for the control level of the CACC functionality. The designed architecture was tested in a simulation scenario, and the obtained results show the control performance achieved using the proposed solutions for the longitudinal dynamics of vehicle platoons.
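The control level of a CACC functionality can be illustrated with a classic constant-time-gap law: feedback on spacing and relative speed, plus a feedforward term from the V2V-communicated lead acceleration. The gains, headway, and standstill distance below are illustrative assumptions, not the paper's DMPC or feedback-gain-matrix designs.

```python
def cacc_accel(gap, v_ego, v_lead, a_lead,
               headway=0.6, standstill=2.0, kp=0.45, kd=0.25, ka=1.0):
    """Commanded acceleration for one follower in the platoon.

    gap           : measured distance to the preceding vehicle (m)
    v_ego, v_lead : own and preceding-vehicle speeds (m/s)
    a_lead        : preceding-vehicle acceleration received over V2V (m/s^2)
    """
    gap_error = gap - (standstill + headway * v_ego)  # spacing-policy error
    return kp * gap_error + kd * (v_lead - v_ego) + ka * a_lead

# At the desired gap (2.0 + 0.6 * 20 = 14 m) with matched speeds and a
# coasting leader, the commanded acceleration is zero.
print(cacc_accel(gap=14.0, v_ego=20.0, v_lead=20.0, a_lead=0.0))  # → 0.0
```

The feedforward term is what distinguishes CACC from sensor-only ACC: the communicated lead acceleration lets followers react before the gap or relative speed has measurably changed, enabling shorter headways.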
Affiliation(s)
- Constantin-Florin Caruntu
- Department of Automatic Control and Applied Informatics, “Gheorghe Asachi” Technical University of Iasi, 700050 Iasi, Romania; (R.-G.L.); (O.P.); (A.M.)
10
Aminosharieh Najafi T, Affanni A, Rinaldo R, Zontone P. Drivers' Mental Engagement Analysis Using Multi-Sensor Fusion Approaches Based on Deep Convolutional Neural Networks. Sensors (Basel, Switzerland) 2023; 23:7346. [PMID: 37687801; PMCID: PMC10490517; DOI: 10.3390/s23177346]
Abstract
In this paper, we present a comprehensive assessment of individuals' mental engagement states during manual and autonomous driving scenarios using a driving simulator. Our study employed two sensor fusion approaches, combining the data and features of multimodal signals. Participants in our experiment were equipped with Electroencephalogram (EEG), Skin Potential Response (SPR), and Electrocardiogram (ECG) sensors, allowing us to collect their corresponding physiological signals. To facilitate the real-time recording and synchronization of these signals, we developed a custom-designed Graphical User Interface (GUI). The recorded signals were pre-processed to eliminate noise and artifacts. Subsequently, the cleaned data were segmented into 3 s windows and labeled according to the drivers' high or low mental engagement states during manual and autonomous driving. To implement sensor fusion approaches, we utilized two different architectures based on deep Convolutional Neural Networks (ConvNets), specifically utilizing the Braindecode Deep4 ConvNet model. The first architecture consisted of four convolutional layers followed by a dense layer. This model processed the synchronized experimental data as a 2D array input. We also proposed a novel second architecture comprising three branches of the same ConvNet model, each with four convolutional layers, followed by a concatenation layer for integrating the ConvNet branches, and finally, two dense layers. This model received the experimental data from each sensor as a separate 2D array input for each ConvNet branch. Both architectures were evaluated using a Leave-One-Subject-Out (LOSO) cross-validation approach. For both cases, we compared the results obtained when using only EEG signals with the results obtained by adding SPR and ECG signals. In particular, the second fusion approach, using all sensor signals, achieved the highest accuracy score, reaching 82.0%. 
This outcome demonstrates that our proposed architecture, particularly when integrating EEG, SPR, and ECG signals at the feature level, can effectively discern the mental engagement of drivers.
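The 3 s windowing step that precedes labeling can be sketched as follows; the hop length and sampling rate are illustrative assumptions, since the paper does not state its window overlap here.

```python
import numpy as np

def segment_windows(signal, fs, win_s=3.0, hop_s=1.5):
    """Cut a 1-D physiological signal into fixed-length windows.

    fs is the sampling rate in Hz; hop_s < win_s gives overlapping windows.
    Each window would then be labeled (e.g., high/low engagement) for training.
    """
    win, hop = int(win_s * fs), int(hop_s * fs)
    starts = range(0, len(signal) - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

# 10 s of a 10 Hz signal -> five overlapping 3 s windows of 30 samples each.
print(segment_windows(np.arange(100.0), fs=10).shape)  # → (5, 30)
```

In a multi-sensor setup like EEG + SPR + ECG, the same window boundaries are applied to all synchronized channels so that each training example stacks into a 2D array, as the paper's ConvNet inputs require.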
Affiliation(s)
- Taraneh Aminosharieh Najafi
- Polytechnic Department of Engineering and Architecture, University of Udine, Via Delle Scienze 206, 33100 Udine, Italy; (A.A.); (R.R.); (P.Z.)
11
Gu J, Lind A, Chhetri TR, Bellone M, Sell R. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles. Sensors (Basel, Switzerland) 2023; 23:6783. [PMID: 37571566; PMCID: PMC10422220; DOI: 10.3390/s23156783]
Abstract
Autonomous driving vehicles rely on sensors for the robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor setups, such as camera, LiDAR, and radar systems, raise requirements related to sensor calibration and synchronization, which are the fundamental building blocks of any autonomous system. On the other hand, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, showing high robustness in real-world applications but requiring large quantities of data to be collected, synchronized, and properly categorized. However, there are two major gaps in existing works: (i) they lack fusion (and synchronization) of multiple sensors, namely camera, LiDAR, and radar; and (ii) they lack a generic, scalable, and user-friendly end-to-end implementation. To generalize the implementation of the multi-sensor perceptive system, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deployment solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors, such as camera, LiDAR, and radar. Furthermore, we present a universal toolbox to calibrate and synchronize the three types of sensors based on their characteristics. The framework also includes the fusion algorithms, which utilize the merits of the three sensors, namely, camera, LiDAR, and radar, and fuse their sensory information in a manner that is helpful for object detection and tracking research.
The generality of this framework makes it applicable in any robotic or autonomous applications and suitable for quick and large-scale practical deployment.
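One common software step behind synchronizing sensors that run at different rates is nearest-timestamp pairing; the tolerance below is an illustrative assumption, not a value from the framework.

```python
import bisect

def sync_nearest(ref_stamps, other_stamps, tol=0.05):
    """Pair each reference timestamp with the nearest other-sensor stamp.

    ref_stamps, other_stamps: sorted lists of timestamps in seconds
    (e.g., camera frames vs. LiDAR sweeps); tol rejects stale pairings.
    Returns a list of (ref_index, other_index) pairs.
    """
    pairs = []
    for i, t in enumerate(ref_stamps):
        j = bisect.bisect_left(other_stamps, t)
        cands = [k for k in (j - 1, j) if 0 <= k < len(other_stamps)]
        k = min(cands, key=lambda k: abs(other_stamps[k] - t))
        if abs(other_stamps[k] - t) <= tol:
            pairs.append((i, k))
    return pairs

# The third camera frame finds no LiDAR sweep within 50 ms and is dropped.
print(sync_nearest([0.0, 0.1, 0.2], [0.01, 0.12, 0.31]))  # → [(0, 0), (1, 1)]
```

Hardware triggering (as a deployment framework would prefer) avoids this approximation entirely, but nearest-stamp pairing remains the usual fallback when sensors free-run.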
Affiliation(s)
- Junyi Gu
- Department of Mechanical and Industrial Engineering, Tallinn University of Technology, 12616 Tallinn, Estonia
- Artjom Lind
- Intelligent Transportation Systems Lab, Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia
- Tek Raj Chhetri
- Semantic Technology Institute (STI) Innsbruck, Department of Computer Science, Universität Innsbruck, 6020 Innsbruck, Austria
- Center for Artificial Intelligence (AI) Research Nepal, Sundarharaincha 56604, Nepal
- Mauro Bellone
- FinEst Centre for Smart Cities, Tallinn University of Technology, 19086 Tallinn, Estonia
- Raivo Sell
- Department of Mechanical and Industrial Engineering, Tallinn University of Technology, 12616 Tallinn, Estonia
12
Abdelaziz N, El-Rabbany A. Deep Learning-Aided Inertial/Visual/LiDAR Integration for GNSS-Challenging Environments. Sensors (Basel, Switzerland) 2023; 23:6019. [PMID: 37447870; DOI: 10.3390/s23136019]
Abstract
This research develops an integrated navigation system, which fuses the measurements of the inertial measurement unit (IMU), LiDAR, and monocular camera using an extended Kalman filter (EKF) to provide accurate positioning during prolonged GNSS signal outages. The system features the use of an integrated INS/monocular visual simultaneous localization and mapping (SLAM) navigation system that takes advantage of LiDAR depth measurements to correct the scale ambiguity that results from monocular visual odometry. The proposed system was tested using two datasets, namely, the KITTI and the Leddar PixSet, which cover a wide range of driving environments. The system yielded an average reduction in the root-mean-square error (RMSE) of about 80% and 92% in the horizontal and upward directions, respectively. The proposed system was compared with an INS/monocular visual SLAM/LiDAR SLAM integration and to some state-of-the-art SLAM algorithms.
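The EKF at the core of such an integration reduces to a predict/update skeleton. The system matrices below are placeholders, since the paper's INS/visual/LiDAR models are not reproduced here; the update is written in the linear form the EKF applies after linearization.

```python
import numpy as np

class EKF:
    """Minimal (extended) Kalman filter loop; F, H, Q, R are the
    linearized transition/measurement models and their noise covariances."""

    def __init__(self, x, P):
        self.x, self.P = x, P

    def predict(self, F, Q):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z, H, R):
        y = z - H @ self.x                   # innovation
        S = H @ self.P @ H.T + R             # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Toy 1-D constant-velocity example: an aiding position fix (e.g., from
# visual SLAM) pulls the dead-reckoned state toward the measurement.
kf = EKF(np.array([0.0, 1.0]), np.eye(2))
kf.predict(np.array([[1.0, 1.0], [0.0, 1.0]]), 0.01 * np.eye(2))
kf.update(np.array([1.2]), np.array([[1.0, 0.0]]), np.array([[0.1]]))
```

During a GNSS outage, the predict step runs at the IMU rate while updates arrive only when a visual or LiDAR-derived measurement is available, which is what bounds the drift.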
Affiliation(s)
- Nader Abdelaziz
- Department of Civil Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
- Department of Civil Engineering, Tanta University, Tanta 31527, Egypt
- Ahmed El-Rabbany
- Department of Civil Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
13
Goldenholz DM. Can machine learning solve this one? Clinical pitfalls in surgical outcome prediction. Epilepsia 2023; 64:1190-1194. [PMID: 36825988; PMCID: PMC10175174; DOI: 10.1111/epi.17559]
14
Plascencia AC, García-Gómez P, Perez EB, DeMas-Giménez G, Casas JR, Royo S. A Preliminary Study of Deep Learning Sensor Fusion for Pedestrian Detection. Sensors (Basel, Switzerland) 2023; 23:4167. [PMID: 37112506; PMCID: PMC10144184; DOI: 10.3390/s23084167]
Abstract
Most pedestrian detection methods focus on bounding boxes obtained by fusing RGB with lidar, an approach that does not relate to how the human eye perceives objects in the real world. Furthermore, lidar and vision can have difficulty detecting pedestrians in scattered environments, a problem that radar can overcome. The motivation of this work is therefore to explore, as a preliminary step, the feasibility of fusing lidar, radar, and RGB for pedestrian detection, potentially for use in autonomous driving, with a fully connected convolutional neural network architecture for multimodal sensors. The core of the network is based on SegNet, a pixel-wise semantic segmentation network. Lidar and radar were incorporated by transforming their 3D point clouds into 2D gray images with 16-bit depth, and RGB images were incorporated with three channels. The proposed architecture uses a single SegNet for each sensor reading; the outputs are then fed to a fully connected neural network that fuses the three sensor modalities, after which an up-sampling network recovers the fused data. Additionally, a custom dataset of 60 images was proposed for training the architecture, with an additional 10 for evaluation and 10 for testing, giving a total of 80 images. The experimental results show a training mean pixel accuracy of 99.7% and a training mean intersection over union (IoU) of 99.5%; the testing mean IoU was 94.4%, and the testing pixel accuracy was 96.2%. These metrics demonstrate the effectiveness of using semantic segmentation for pedestrian detection with the three sensor modalities. Despite some overfitting during experimentation, the model performed well in detecting people in test mode. It is therefore worth emphasizing that the focus of this work is to show that the method is feasible regardless of dataset size, although a bigger dataset would be necessary to achieve more appropriate training. This method offers the advantage of detecting pedestrians as the human eye does, thereby resulting in less ambiguity. This work also proposes an extrinsic calibration matrix method, based on singular value decomposition, for sensor alignment between radar and lidar.
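As a rough illustration of the point-cloud-to-image step described above, a 3D point cloud can be projected into a 2D gray image with 16-bit depth. The paper's exact projection model, resolution, and field of view are not given here, so every parameter below is an assumption:

```python
import numpy as np

def pointcloud_to_depth_image(points, width=64, height=64, max_range=100.0):
    """Project 3D points (N, 3) onto a 2D grid, encoding range as 16-bit gray.

    A simplified orthographic projection; resolution and max_range are
    illustrative assumptions, not values from the paper.
    """
    img = np.zeros((height, width), dtype=np.uint16)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Map x/y coordinates to pixel indices (assumed sensor field of view).
    u = np.clip(((x / max_range) * (width - 1)).astype(int), 0, width - 1)
    v = np.clip(((y / max_range) * (height - 1)).astype(int), 0, height - 1)
    # Encode Euclidean range into the full 16-bit gray scale.
    rng = np.sqrt(x**2 + y**2 + z**2)
    vals = (np.clip(rng / max_range, 0.0, 1.0) * 65535).astype(np.uint16)
    img[v, u] = vals
    return img
```

Each sensor's image produced this way can then be fed to its own SegNet branch before fusion.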
Collapse
Affiliation(s)
- Alfredo Chávez Plascencia
- Centre for Sensors, Instrumentation and Systems Development (CD6), Polytechnic University of Catalonia (UPC), Rambla de Sant Nebridi 10, 08222 Terrassa, Spain
- Correspondence:
| | | | - Eduardo Bernal Perez
- Centre for Sensors, Instrumentation and Systems Development (CD6), Polytechnic University of Catalonia (UPC), Rambla de Sant Nebridi 10, 08222 Terrassa, Spain
| | - Gerard DeMas-Giménez
- Centre for Sensors, Instrumentation and Systems Development (CD6), Polytechnic University of Catalonia (UPC), Rambla de Sant Nebridi 10, 08222 Terrassa, Spain
| | - Josep R. Casas
- Image Processing Group, TSC Department, Polytechnic University of Catalonia (UPC), Carrer de Jordi Girona 1-3, 08034 Barcelona, Spain
| | - Santiago Royo
- Centre for Sensors, Instrumentation and Systems Development (CD6), Polytechnic University of Catalonia (UPC), Rambla de Sant Nebridi 10, 08222 Terrassa, Spain
- Beamagine S.L. Carrer de Bellesguard 16, 08755 Castellbisbal, Spain
| |
Collapse
|
15
|
Hasanujjaman M, Chowdhury MZ, Jang YM. Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking. SENSORS (BASEL, SWITZERLAND) 2023; 23:3335. [PMID: 36992043 PMCID: PMC10055109 DOI: 10.3390/s23063335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 03/17/2023] [Accepted: 03/20/2023] [Indexed: 06/19/2023]
Abstract
Fully autonomous systems such as self-driving cars require the most efficient combination of four-dimensional (4D) detection, exact localization, and artificial intelligence (AI) networking to ensure high reliability and human safety and to establish a fully automated smart transportation system. At present, multiple integrated sensors such as light detection and ranging (LiDAR), radio detection and ranging (RADAR), and car cameras are frequently used for object detection and localization in conventional autonomous transportation systems, and the global positioning system (GPS) is used for positioning autonomous vehicles (AVs). The detection, localization, and positioning efficiency of these individual systems is insufficient for AV systems, and they lack a reliable networking system for self-driving cars carrying people and goods on the road. Although sensor fusion of car sensors achieves good detection and localization efficiency, the proposed convolutional neural network approach will help achieve higher accuracy in 4D detection, precise localization, and real-time positioning. Moreover, this work establishes a strong AI network for AV remote monitoring and data transmission systems, whose efficiency remains the same on open-sky highways as well as in tunnel roads where GPS does not work properly. For the first time, modified traffic surveillance cameras are exploited in this conceptual paper as an external image source for AVs and as anchor sensing nodes to complete the AI networking transportation system. This work proposes a model that addresses AVs' fundamental detection, localization, positioning, and networking challenges with advanced image processing, sensor fusion, feature matching, and AI networking technology. The paper also provides an experienced AI driver concept for a smart transportation system based on deep learning technology.
Collapse
Affiliation(s)
- Muhammad Hasanujjaman
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
| | - Mostafa Zaman Chowdhury
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
| | - Yeong Min Jang
- Department of Electronics Engineering, Kookmin University, Seoul 02707, Republic of Korea
| |
Collapse
|
16
|
LiDAR-Based Sensor Fusion SLAM and Localization for Autonomous Driving Vehicles in Complex Scenarios. J Imaging 2023; 9:jimaging9020052. [PMID: 36826971 PMCID: PMC9961341 DOI: 10.3390/jimaging9020052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 01/21/2023] [Accepted: 02/13/2023] [Indexed: 02/22/2023] Open
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) and online localization methods are widely used in autonomous driving and are key parts of intelligent vehicles. However, current SLAM algorithms suffer from map drift, and localization algorithms based on a single sensor adapt poorly to complex scenarios. This paper proposes a SLAM and online localization method based on multi-sensor fusion, integrated into a general framework. In the mapping process, the front-end applies constraints consisting of normal distributions transform (NDT) registration, loop closure detection, and real-time kinematic (RTK) global navigation satellite system (GNSS) positions, while the back-end applies a pose graph optimization algorithm, yielding an optimized map without drift. In the localization process, an error state Kalman filter (ESKF) fuses the LiDAR-based localization position with the vehicle states to realize more robust and precise localization. The proposed method is tested on the open-source KITTI dataset and in field tests; the results show 5-10 cm mapping accuracy and 20-30 cm localization accuracy, and the method realizes online autonomous driving in complex scenarios.
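For intuition, the correction step of a Kalman-style fusion like the one the abstract describes can be reduced to a scalar update that blends a predicted vehicle position with a LiDAR position fix according to their variances. This toy version omits the error-state formulation and the multidimensional state of the actual ESKF:

```python
def kalman_fuse(x_pred, p_pred, z_lidar, r_lidar):
    """One scalar Kalman update: fuse a predicted position x_pred (variance
    p_pred) with a LiDAR-based position fix z_lidar (variance r_lidar).

    An illustrative stand-in for the paper's full ESKF, not its implementation.
    """
    k = p_pred / (p_pred + r_lidar)        # Kalman gain: trust ratio
    x = x_pred + k * (z_lidar - x_pred)    # corrected position estimate
    p = (1.0 - k) * p_pred                 # uncertainty shrinks after fusion
    return x, p
```

With equal variances the result is the midpoint: `kalman_fuse(0.0, 1.0, 1.0, 1.0)` returns `(0.5, 0.5)`.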
Collapse
|
17
|
Multiple vehicle cooperation and collision avoidance in automated vehicles: survey and an AI-enabled conceptual framework. Sci Rep 2023; 13:603. [PMID: 36635336 PMCID: PMC9837199 DOI: 10.1038/s41598-022-27026-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Accepted: 12/23/2022] [Indexed: 01/14/2023] Open
Abstract
Prospective customers are becoming more concerned about safety and comfort as the automobile industry swings toward automated vehicles (AVs). A comprehensive evaluation of recent AV collision data indicates that modern automated driving systems are prone to rear-end collisions, which usually lead to multiple-vehicle collisions; moreover, most investigations into severe traffic conditions are confined to single-vehicle collisions. This work reviews diverse techniques from the existing literature to provide planning procedures for multiple vehicle cooperation and collision avoidance (MVCCA) strategies in AVs, while also considering their performance and social impact. Firstly, we investigate and tabulate the existing MVCCA techniques associated with single-vehicle collision avoidance perspectives. Then, current achievements are extensively evaluated, challenges and flaws are identified, and remedies are formed into a taxonomy. This paper also aims to give readers an AI-enabled conceptual framework and a decision-making model, with a concrete structure of the training network settings, to bridge the gaps between current investigations. These findings are intended to shed insight into the benefits of greater AV efficiency for academics and policymakers. Lastly, the open research issues discussed in this survey will pave the way for the actual implementation of driverless automated traffic systems.
Collapse
|
18
|
Dauptain X, Koné A, Grolleau D, Cerezo V, Gennesseaux M, Do MT. Conception of a High-Level Perception and Localization System for Autonomous Driving. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22249661. [PMID: 36560030 PMCID: PMC9783250 DOI: 10.3390/s22249661] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 12/02/2022] [Accepted: 12/06/2022] [Indexed: 05/27/2023]
Abstract
This paper describes the conception of a high-level, compact, scalable, long-autonomy perception and localization system for autonomous driving applications. Our benchmark is composed of a high-resolution lidar (128 channels), a stereo global shutter camera, an inertial navigation system, a time server, and an embedded computer. In addition, in order to acquire data and build multi-modal datasets, the system embeds two perception algorithms (RBNN detection, DCNN detection) and one localization algorithm (lidar-based localization) to provide real-time advanced information, such as object detection and localization, in challenging environments (lack of GPS). To train and evaluate the perception algorithms, a dataset was built from 10,000 annotated lidar frames from various drives carried out under different weather conditions and different traffic and population densities. The performance of the three algorithms is competitive with the state of the art, and their processing times are compatible with real-time autonomous driving applications. By directly providing accurate advanced outputs, this system might significantly facilitate the work of researchers and engineers on planning and control modules. This study thus intends to contribute to democratizing access to autonomous vehicle research platforms.
Collapse
Affiliation(s)
- Xavier Dauptain
- AME-EASE, Université Gustave Eiffel, IFSTTAR, F-44344 Bouguenais, France
- Sherpa Engineering, Site Nantes, 2 Rue Alfred Kastler, F-44307 Nantes, France
| | - Aboubakar Koné
- AME-EASE, Université Gustave Eiffel, IFSTTAR, F-44344 Bouguenais, France
- Sherpa Engineering, Site Nantes, 2 Rue Alfred Kastler, F-44307 Nantes, France
| | - Damien Grolleau
- Sherpa Engineering, Site Nantes, 2 Rue Alfred Kastler, F-44307 Nantes, France
| | - Veronique Cerezo
- AME-EASE, Université Gustave Eiffel, IFSTTAR, F-44344 Bouguenais, France
| | | | - Minh-Tan Do
- AME-EASE, Université Gustave Eiffel, IFSTTAR, F-44344 Bouguenais, France
| |
Collapse
|
19
|
El-Yabroudi MZ, Abdel-Qader I, Bazuin BJ, Abudayyeh O, Chabaan RC. Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications. SENSORS (BASEL, SWITZERLAND) 2022; 22:9578. [PMID: 36559946 PMCID: PMC9781309 DOI: 10.3390/s22249578] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 12/02/2022] [Accepted: 12/05/2022] [Indexed: 06/17/2023]
Abstract
Pixel-level depth information is crucial to many applications, such as autonomous driving, robotics navigation, 3D scene reconstruction, and augmented reality. However, depth information, which is usually acquired by sensors such as LiDAR, is sparse. Depth completion is a process that predicts missing pixels' depth information from a set of sparse depth measurements. Most of the ongoing research applies deep neural networks to the entire sparse depth map and camera scene without utilizing any information about the available objects, which results in more complex and resource-demanding networks. In this work, we propose to use image instance segmentation to detect objects of interest with pixel-level locations and, along with sparse depth data, to support depth completion. The framework utilizes a two-branch encoder-decoder deep neural network that fuses information about the objects available in the scene, such as object type and pixel-level location, with LiDAR and RGB camera data to predict dense, accurate depth maps. Experimental results on the KITTI dataset showed faster training and improved prediction accuracy: the proposed method reaches convergence faster and surpasses the baseline model in all evaluation metrics.
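The guided-completion idea can be caricatured without any learning: inside an instance mask, fill the missing depth pixels from the object's known sparse depths. The function below is a deliberately simple stand-in for the paper's two-branch network, not its method:

```python
import numpy as np

def complete_depth_in_mask(sparse_depth, mask):
    """Fill missing depth values (zeros) inside an object's instance mask with
    the mean of the object's known sparse depths.

    A toy illustration of mask-guided completion; the paper instead learns
    dense depth with an encoder-decoder network.
    """
    out = sparse_depth.astype(float).copy()
    known = (sparse_depth > 0) & mask          # pixels with LiDAR returns
    if known.any():
        # Missing pixels of the same object inherit the object's mean depth.
        out[mask & (sparse_depth == 0)] = sparse_depth[known].mean()
    return out
```

Real objects have depth gradients, which is exactly why a learned network outperforms such a constant fill.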
Collapse
Affiliation(s)
- Mohammad Z. El-Yabroudi
- Electrical and Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Ikhlas Abdel-Qader
- Electrical and Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Bradley J. Bazuin
- Electrical and Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Osama Abudayyeh
- Civil and Construction Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Rakan C. Chabaan
- Hyundai America Technical Center, Inc., Superior Charter Township, MI 48198, USA
| |
Collapse
|
20
|
Jadaun P, Cui C, Liu S, Incorvia JAC. Adaptive cognition implemented with a context-aware and flexible neuron for next-generation artificial intelligence. PNAS NEXUS 2022; 1:pgac206. [PMID: 36712357 PMCID: PMC9802372 DOI: 10.1093/pnasnexus/pgac206] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 09/27/2022] [Indexed: 06/18/2023]
Abstract
Neuromorphic computing mimics the organizational principles of the brain in its quest to replicate the brain's intellectual abilities. An impressive ability of the brain is its adaptive intelligence, which allows the brain to regulate its functions "on the fly" to cope with myriad and ever-changing situations. In particular, the brain displays three adaptive and advanced intelligence abilities of context-awareness, cross frequency coupling, and feature binding. To mimic these adaptive cognitive abilities, we design and simulate a novel, hardware-based adaptive oscillatory neuron using a lattice of magnetic skyrmions. Charge current fed to the neuron reconfigures the skyrmion lattice, thereby modulating the neuron's state, its dynamics and its transfer function "on the fly." This adaptive neuron is used to demonstrate the three cognitive abilities, of which context-awareness and cross-frequency coupling have not been previously realized in hardware neurons. Additionally, the neuron is used to construct an adaptive artificial neural network (ANN) and perform context-aware diagnosis of breast cancer. Simulations show that the adaptive ANN diagnoses cancer with higher accuracy while learning faster and using a more compact and energy-efficient network than a nonadaptive ANN. The work further describes how hardware-based adaptive neurons can mitigate several critical challenges facing contemporary ANNs. Modern ANNs require large amounts of training data, energy, and chip area, and are highly task-specific; conversely, hardware-based ANNs built with adaptive neurons show faster learning, compact architectures, energy-efficiency, fault-tolerance, and can lead to the realization of broader artificial intelligence.
Collapse
Affiliation(s)
| | | | - Sam Liu
- Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
| | | |
Collapse
|
21
|
Jekal S, Kim J, Kim DH, Noh J, Kim MJ, Kim HY, Kim MS, Oh WC, Yoon CM. Synthesis of LiDAR-Detectable True Black Core/Shell Nanomaterial and Its Practical Use in LiDAR Applications. NANOMATERIALS (BASEL, SWITZERLAND) 2022; 12:3689. [PMID: 36296878 PMCID: PMC9610704 DOI: 10.3390/nano12203689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 10/12/2022] [Accepted: 10/19/2022] [Indexed: 06/16/2023]
Abstract
Light detection and ranging (LiDAR) sensors utilize a near-infrared (NIR) laser with a wavelength of 905 nm. However, LiDAR sensors have difficulty detecting black or dark-tone materials with light-absorbing properties. In this study, SiO2/black TiO2 core/shell nanoparticles (SBT CSNs) were designed as LiDAR-detectable black materials. The SBT CSNs, with sizes of 140, 170, and 200 nm, were fabricated by a series of Stöber, TTIP sol-gel, and modified NaBH4 reduction methods. These SBT CSNs are detectable by a LiDAR sensor and, owing to their core/shell structure with intrapores on the shell (ca. 2−6 nm), they can effectively function as both color and NIR-reflective materials. Moreover, the LiDAR-detectable SBT CSNs exhibited high NIR reflectance (28.2 R%) in a monolayer system and true blackness (L* < 20), along with ecofriendliness and hydrophilicity, making them highly suitable for use in autonomous vehicles.
Collapse
Affiliation(s)
- Suk Jekal
- Department of Chemical and Biological Engineering, Hanbat National University, Yuseong-gu, Daejeon 34158, Korea
| | - Jiwon Kim
- Department of Chemical and Biological Engineering, Hanbat National University, Yuseong-gu, Daejeon 34158, Korea
| | - Dong-Hyun Kim
- Department of Chemical and Biological Engineering, Hanbat National University, Yuseong-gu, Daejeon 34158, Korea
| | - Jungchul Noh
- McKetta Department of Chemical Engineering and Texas Material Institute, The University of Texas at Austin, Austin, TX 78712, USA
| | - Min-Jeong Kim
- Department of Chemical and Biological Engineering, Hanbat National University, Yuseong-gu, Daejeon 34158, Korea
| | - Ha-Yeong Kim
- Department of Chemical and Biological Engineering, Hanbat National University, Yuseong-gu, Daejeon 34158, Korea
| | - Min-Sang Kim
- Department of Chemical and Biological Engineering, Hanbat National University, Yuseong-gu, Daejeon 34158, Korea
| | - Won-Chun Oh
- Department of Advanced Materials Science and Engineering, Hanseo University, Seosan-si 31962, Korea
| | - Chang-Min Yoon
- Department of Chemical and Biological Engineering, Hanbat National University, Yuseong-gu, Daejeon 34158, Korea
| |
Collapse
|
22
|
Khan MA, El Sayed H, Malik S, Zia MT, Alkaabi N, Khan J. A journey towards fully autonomous driving - fueled by a smart communication system. VEHICULAR COMMUNICATIONS 2022; 36:100476. [DOI: 10.1016/j.vehcom.2022.100476] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
|
23
|
Shi Y, Ying X, Yang J. Deep Unsupervised Domain Adaptation with Time Series Sensor Data: A Survey. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22155507. [PMID: 35898010 PMCID: PMC9371201 DOI: 10.3390/s22155507] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Revised: 07/20/2022] [Accepted: 07/21/2022] [Indexed: 05/03/2023]
Abstract
Sensors are devices that output signals for sensing physical phenomena and are widely used in all aspects of our social production activities. The continuous recording of physical parameters allows effective analysis of the operational status of the monitored system and prediction of unknown risks. Thanks to the development of deep learning, the ability to analyze temporal signals collected by sensors has been greatly improved. However, models trained in the source domain do not perform well in the target domain due to the presence of domain gaps. In recent years, many researchers have used deep unsupervised domain adaptation techniques to address the domain gap between signals collected by sensors in different scenarios, i.e., using labeled data in the source domain and unlabeled data in the target domain to improve the performance of models in the target domain. This survey first summarizes the background of recent research on unsupervised domain adaptation with time series sensor data, the types of sensors used, the domain gap between the source and target domains, and commonly used datasets. Then, the paper classifies and compares different unsupervised domain adaptation methods according to the way of adaptation and summarizes different adaptation settings based on the number of source and target domains. Finally, this survey discusses the challenges of the current research and provides an outlook on future work. This survey systematically reviews and summarizes recent research on unsupervised domain adaptation for time series sensor data to provide the reader with a systematic understanding of the field.
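Many of the adaptation methods such a survey covers minimize a discrepancy between source and target feature distributions alongside the task loss. The simplest such measure is a linear maximum mean discrepancy (MMD): the squared distance between feature means. The sketch below is illustrative and not taken from the survey:

```python
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Squared Euclidean distance between the mean source and target feature
    vectors -- the simplest (linear-kernel) MMD estimate used as a domain
    alignment penalty in unsupervised domain adaptation.

    Inputs are (n_samples, n_features) arrays from each domain.
    """
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)
```

A value near zero means the two domains' features are aligned (in the mean); training typically adds this term, weighted, to the source-domain classification loss.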
Collapse
|
24
|
Vision-Based Autonomous Vehicle Systems Based on Deep Learning: A Systematic Literature Review. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12146831] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In the past decade, autonomous vehicle systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society, road safety, and the future of transportation systems. However, AVS are still far from mass production because of the high cost of sensor fusion and the lack of combined top-tier solutions to tackle uncertainty on roads. To reduce sensor dependency, increase manufacturing, and enhance research, deep learning-based approaches could be the best alternative for developing practical AVS. With this vision, this systematic review broadly discusses the deep learning literature for AVS from the past decade for real-life implementation in core fields. The review is categorized into several modules covering perception analysis (vehicle detection, traffic sign and light identification, pedestrian detection, lane and curve detection, road object localization, traffic scene analysis), decision making, end-to-end control and prediction, path and motion planning, and augmented reality-based HUDs, analyzing research works from 2011 to 2021 that focus on RGB camera vision. The literature is also analyzed for final representative outcomes visualized in augmented reality-based head-up displays (AR-HUD), with categories such as early warning, road markings for improved navigation, and enhanced safety through overlays on vehicles and pedestrians in extreme visual conditions to reduce collisions. The contribution of this review is a detailed analysis of current state-of-the-art deep learning methods that rely only on RGB camera vision rather than complex sensor fusion. It is expected to offer a pathway for the rapid development of cost-efficient and more secure practical autonomous vehicle systems.
Collapse
|
25
|
Real-Time 3D Mapping in Isolated Industrial Terrain with Use of Mobile Robotic Vehicle. ELECTRONICS 2022. [DOI: 10.3390/electronics11132086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Simultaneous localization and mapping (SLAM) is a dual process responsible for the ability of a robotic vehicle to build a map of its surroundings and estimate its position on that map. This paper presents a novel concept for creating a 3D map based on adaptive Monte Carlo localization (AMCL) and the extended Kalman filter (EKF). The approach is intended for inspection or rescue operations in closed or isolated areas where there is a risk to humans. The proposed solution uses particle filters together with data from on-board sensors to estimate the local position of the robot; its global position is determined through the Rao–Blackwellized technique. The developed system was implemented on a wheeled mobile robot equipped with a sensing system consisting of a laser scanner (LIDAR) and an inertial measurement unit (IMU), and was tested in the real conditions of an underground mine. One contribution of this work is a low-complexity, low-cost solution for real-time 3D-map creation. The conducted experimental trials confirmed that the three-dimensional mapping achieved high accuracy and proved useful for recognition and inspection tasks in an unknown industrial environment.
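The particle-filter step underlying Monte Carlo localization can be sketched in a few lines: reweight pose hypotheses by the likelihood of a measurement, then resample. The 1D toy below omits the adaptive particle-count mechanism of AMCL and the EKF fusion used in the paper; the sensor noise model is an assumption:

```python
import numpy as np

def particle_update(particles, weights, measurement, sensor_std=0.5):
    """One measurement update of a 1D particle filter.

    Each particle is a scalar pose hypothesis; the measurement is assumed to
    observe the pose directly with Gaussian noise (std = sensor_std).
    """
    # Gaussian likelihood of the measurement under each hypothesis.
    lik = np.exp(-0.5 * ((particles - measurement) / sensor_std) ** 2)
    weights = weights * lik
    weights /= weights.sum()                   # normalize to a distribution
    # Resample: particles survive in proportion to their weight.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

After resampling, hypotheses far from the measurement die out and the cloud concentrates around the true pose.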
Collapse
|
26
|
Bai A, Carty C, Dai S. Performance of deep-learning artificial intelligence algorithms in detecting retinopathy of prematurity: A systematic review. SAUDI JOURNAL OF OPHTHALMOLOGY : OFFICIAL JOURNAL OF THE SAUDI OPHTHALMOLOGICAL SOCIETY 2022; 36:296-307. [PMID: 36276252 DOI: 10.4103/sjopt.sjopt_219_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 11/09/2021] [Accepted: 11/12/2021] [Indexed: 11/04/2022]
Abstract
PURPOSE Artificial intelligence (AI) offers considerable promise for retinopathy of prematurity (ROP) screening and diagnosis. The development of deep-learning algorithms to detect the presence of disease may contribute to sufficient screening, early detection, and timely treatment for this preventable blinding disease. This review aimed to systematically examine the literature in AI algorithms in detecting ROP. Specifically, we focused on the performance of deep-learning algorithms through sensitivity, specificity, and area under the receiver operating curve (AUROC) for both the detection and grade of ROP. METHODS We searched Medline OVID, PubMed, Web of Science, and Embase for studies published from January 1, 2012, to September 20, 2021. Studies evaluating the diagnostic performance of deep-learning models based on retinal fundus images with expert ophthalmologists' judgment as reference standard were included. Studies which did not investigate the presence or absence of disease were excluded. Risk of bias was assessed using the QUADAS-2 tool. RESULTS Twelve studies out of the 175 studies identified were included. Five studies measured the performance of detecting the presence of ROP and seven studies determined the presence of plus disease. The average AUROC out of 11 studies was 0.98. The average sensitivity and specificity for detecting ROP was 95.72% and 98.15%, respectively, and for detecting plus disease was 91.13% and 95.92%, respectively. CONCLUSION The diagnostic performance of deep-learning algorithms in published studies was high. Few studies presented externally validated results or compared performance to expert human graders. Large scale prospective validation alongside robust study design could improve future studies.
Collapse
Affiliation(s)
- Amelia Bai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia.,Centre for Children's Health Research, Brisbane, Australia.,School of Medical Science, Griffith University, Gold Coast, Australia
| | - Christopher Carty
- Griffith Centre of Biomedical and Rehabilitation Engineering (GCORE), Menzies Health Institute Queensland, Griffith University Gold Coast, Australia.,Department of Orthopaedics, Children's Health Queensland Hospital and Health Service, Queensland Children's Hospital, Brisbane, Australia
| | - Shuan Dai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia.,School of Medical Science, Griffith University, Gold Coast, Australia.,University of Queensland, Australia
| |
Collapse
|
27
|
Rahimi M, Liu H, Cardenas ID, Starr A, Hall A, Anderson R. A Review on Technologies for Localisation and Navigation in Autonomous Railway Maintenance Systems. SENSORS 2022; 22:s22114185. [PMID: 35684804 PMCID: PMC9185565 DOI: 10.3390/s22114185] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 05/24/2022] [Accepted: 05/28/2022] [Indexed: 11/16/2022]
Abstract
Smart maintenance is essential to achieving a safe and reliable railway, but traditional maintenance deployment is costly and heavily human-involved, and ineffective job execution or failure in preventive maintenance can lead to railway service disruption and unsafe operations. The deployment of robotic and autonomous systems has been proposed to conduct these maintenance tasks with higher accuracy and reliability. For these systems to detect rail flaws along millions of miles of track, they must register their location with high accuracy; a high degree of positional awareness is a prerequisite for any autonomous vehicle. This paper first reviews the importance and demands of preventive maintenance in railway networks and the related techniques. It then investigates the strategies, techniques, architectures, and references used by different systems to resolve their location along the railway network, and discusses the advantages and applicability of on-board-based and infrastructure-based sensing, respectively. Finally, it analyses the uncertainties that contribute to a vehicle's position error and their influence on positioning accuracy and reliability, with corresponding technical solutions. This study therefore provides an overall direction for the development of further autonomous track-based system designs and methods to deal with the challenges faced in the railway network.
Collapse
Affiliation(s)
- Masoumeh Rahimi
- School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK; (M.R.); (I.D.C.); (A.S.)
| | - Haochen Liu
- School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK; (M.R.); (I.D.C.); (A.S.)
- Correspondence:
| | - Isidro Durazo Cardenas
- School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK; (M.R.); (I.D.C.); (A.S.)
| | - Andrew Starr
- School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK; (M.R.); (I.D.C.); (A.S.)
| | - Amanda Hall
- Network Rail, Milton Keynes MK9 1EN, UK; (A.H.); (R.A.)
| | | |
Collapse
|
28
|
Yang M, Sun X, Jia F, Rushworth A, Dong X, Zhang S, Fang Z, Yang G, Liu B. Sensors and Sensor Fusion Methodologies for Indoor Odometry: A Review. Polymers (Basel) 2022; 14:polym14102019. [PMID: 35631899 PMCID: PMC9143447 DOI: 10.3390/polym14102019] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 05/05/2022] [Accepted: 05/11/2022] [Indexed: 02/04/2023] Open
Abstract
Although Global Navigation Satellite Systems (GNSSs) generally provide adequate accuracy for outdoor localization, this is not the case for indoor environments, due to signal obstruction. Therefore, a self-contained localization scheme is beneficial under such circumstances. Modern sensors and algorithms endow moving robots with the capability to perceive their environment, and enable the deployment of novel localization schemes, such as odometry and Simultaneous Localization and Mapping (SLAM). The former focuses on incremental localization, while the latter concurrently stores an interpretable map of the environment. In this context, this paper conducts a comprehensive review of sensor modalities for indoor odometry, including Inertial Measurement Units (IMUs), Light Detection and Ranging (LiDAR), radio detection and ranging (radar), and cameras, as well as applications of polymers in these sensors. Furthermore, the algorithms and fusion frameworks for pose estimation and odometry with these sensors are analyzed and discussed. This paper thus traces the pathway of indoor odometry from principle to application. Finally, some future prospects are discussed.
Collapse
Affiliation(s)
- Mengshen Yang
- Department of Mechanical, Materials and Manufacturing Engineering, The Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China; (M.Y.); (F.J.); (B.L.)
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China;
- Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
| | - Xu Sun
- Department of Mechanical, Materials and Manufacturing Engineering, The Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China; (M.Y.); (F.J.); (B.L.)
- Nottingham Ningbo China Beacons of Excellence Research and Innovation Institute, University of Nottingham Ningbo China, Ningbo 315100, China
- Correspondence: (X.S.); (A.R.); (G.Y.)
| | - Fuhua Jia
- Department of Mechanical, Materials and Manufacturing Engineering, The Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China; (M.Y.); (F.J.); (B.L.)
| | - Adam Rushworth
- Department of Mechanical, Materials and Manufacturing Engineering, The Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China; (M.Y.); (F.J.); (B.L.)
- Correspondence: (X.S.); (A.R.); (G.Y.)
| | - Xin Dong
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD, UK;
| | - Sheng Zhang
- Ningbo Research Institute, Zhejiang University, Ningbo 315100, China;
| | - Zaojun Fang
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China;
- Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
| | - Guilin Yang
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China;
- Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
- Correspondence: (X.S.); (A.R.); (G.Y.)
| | - Bingjian Liu
- Department of Mechanical, Materials and Manufacturing Engineering, The Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China; (M.Y.); (F.J.); (B.L.)
| |
Collapse
|
29
|
Modeling and Fault Detection of Brushless Direct Current Motor by Deep Learning Sensor Data Fusion. SENSORS 2022; 22:s22093516. [PMID: 35591209 PMCID: PMC9099980 DOI: 10.3390/s22093516] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 04/20/2022] [Accepted: 04/21/2022] [Indexed: 12/02/2022]
Abstract
Only with new sensor concepts in a network, which go far beyond what the current state of the art can offer, can current and future requirements for flexibility, safety, and security be met. The combination of data from many sensors allows a richer representation of the observed phenomenon, e.g., system degradation, which can facilitate analysis and decision-making processes. This work addresses the topic of predictive maintenance by exploiting sensor data fusion and artificial intelligence-based analysis. Using sensor data such as vibration and sound, we focus on studying paradigms that orchestrate the most effective combination of sensors with deep learning sensor fusion algorithms to enable predictive maintenance. In our experimental setup, we used raw data obtained from two sensors, a microphone and an accelerometer, installed on a brushless direct current (BLDC) motor. The data from each sensor were processed individually and, in a second step, merged to create a solid base for analysis. To diagnose BLDC motor faults, this work proposes to use data-level sensor fusion with deep learning methods such as deep convolutional neural networks (DCNNs), chosen for their ability to automatically extract relevant information from the input data; the long short-term memory method (LSTM); and convolutional long short-term memory (CNN-LSTM), a combination of the two previous methods. The results show that in our setup, sound signals outperform vibrations when used individually for training. However, without any feature selection/extraction step, the accuracy of the models improves with data fusion and reaches 98.8%, 93.5%, and 73.6% for the DCNN, CNN-LSTM, and LSTM methods, respectively; to our knowledge, 98.8% has never been reached in the analysis of BLDC motor faults without first extracting features and fusing them by traditional methods. These results show that it is possible to work with raw data from multiple sensors and achieve good results using deep learning methods, without spending time and resources on selecting appropriate features and methods for feature extraction and data fusion.
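Data-level fusion of the kind described above, merging raw streams from both sensors before any learning, can be sketched in a few lines. This is a minimal illustration, assuming a 1-D stream per sensor and an arbitrary window length; it is not the authors' actual preprocessing pipeline:

```python
import numpy as np

def fuse_raw_windows(sound: np.ndarray, vibration: np.ndarray,
                     window: int = 256) -> np.ndarray:
    """Data-level fusion: segment two raw 1-D sensor streams into
    aligned windows and stack them as channels, giving an array of
    shape (n_windows, window, 2) suitable as 1-D CNN input."""
    n = min(len(sound), len(vibration)) // window * window  # drop the ragged tail
    s = sound[:n].reshape(-1, window)
    v = vibration[:n].reshape(-1, window)
    return np.stack([s, v], axis=-1)

fused = fuse_raw_windows(np.random.randn(1000), np.random.randn(1000))
print(fused.shape)  # (3, 256, 2)
```

The resulting array feeds directly into a two-channel 1-D convolutional network, letting the model learn its own features from the fused raw signals.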
Collapse
|
30
|
Multi-Sensor Data Fusion Approach for Kinematic Quantities. ENERGIES 2022. [DOI: 10.3390/en15082916] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
A theoretical framework to implement multi-sensor data fusion methods for kinematic quantities is proposed. All methods defined through the framework allow the combination of signals obtained from position, velocity and acceleration sensors addressing the same target, and improvement in the observation of the kinematics of the target. Differently from several alternative methods, the considered ones need no dynamic and/or error models to operate and can be implemented with low computational burden. In fact, they gain measurements by summing filtered versions of the heterogeneous kinematic quantities. In particular, in the case of position measurement, the use of filters with finite impulse responses, all characterized by finite gain throughout the bandwidth, in place of straightforward time-integrative operators, prevents the drift that is typically produced by the offset and low-frequency noise affecting velocity and acceleration data. A simulated scenario shows that the adopted method keeps the error in a position measurement, obtained indirectly from an accelerometer affected by an offset equal to 1 ppm on the full scale, within a few ppm of the full-scale position. If the digital output of the accelerometer undergoes a second-order time integration, instead, the measurement error would theoretically rise to ½n(n+1) ppm of the full scale at the n-th discrete time instant. The class of methods offered by the proposed framework is therefore interesting in those applications in which the direct position measurements are characterized by poor accuracy and one has also to look at the velocity and acceleration data to improve the tracking of a target.
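The drift from second-order time integration can be checked numerically: double-summing a constant 1 ppm accelerometer bias (a sketch assuming a unit time step, outside the paper's framework) yields an error of exactly ½n(n+1) ppm at the n-th step:

```python
import numpy as np

def double_integrate_offset(offset_ppm: float, n_steps: int) -> np.ndarray:
    """Naive second-order time integration (unit time step) of a
    constant accelerometer bias; returns the position error in ppm."""
    accel = np.full(n_steps, offset_ppm)  # constant bias
    velocity = np.cumsum(accel)           # first integration
    position = np.cumsum(velocity)        # second integration
    return position

err = double_integrate_offset(1.0, 100)
n = np.arange(1, 101)
assert np.array_equal(err, n * (n + 1) / 2)  # exactly ½n(n+1) ppm at step n
```

This quadratic growth is exactly what the finite-gain FIR filtering in the proposed framework is designed to avoid.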
Collapse
|
31
|
Abstract
Precise localization plays a crucial role in autonomous driving applications. Since Global Positioning System (GPS) signals are often subject to interference, or even unavailable, in urban environments, odometry sensors can be used to calculate positions instead. However, cumulative error then builds up over time. This paper proposes an effective empirical formula to model such unbounded cumulative errors from noisy relative measurements. Furthermore, a recursive cumulative error expression is established by calculating the first and second moments of the Ackermann model. Finally, based on the developed formula, numerical experiments are conducted to verify the validity of the proposed model.
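The unbounded growth of cumulative odometry error can be illustrated with a quick Monte Carlo sketch. This is a deliberate simplification (i.i.d. Gaussian noise on a 1-D random walk, not the paper's Ackermann-model derivation): the variance of the accumulated error grows linearly with the number of steps.

```python
import numpy as np

# Monte Carlo sketch: accumulating i.i.d. relative-measurement noise
# makes the position error unbounded; its variance grows linearly in n.
rng = np.random.default_rng(42)
steps, trials, sigma = 1000, 5000, 0.1
noise = rng.normal(0.0, sigma, (trials, steps))  # per-step odometry noise
cum_err = np.cumsum(noise, axis=1)               # accumulated error paths
var_n = cum_err.var(axis=0)                      # empirical variance at step n

n = np.arange(1, steps + 1)
assert np.allclose(var_n, sigma**2 * n, rtol=0.2)  # Var grows like n * sigma^2
```

A first-moment bias in the relative measurements would add a deterministic drift on top of this diffusion, which is why both moments matter in the paper's recursive expression.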
Collapse
|
32
|
A Collision Relationship-Based Driving Behavior Decision-Making Method for an Intelligent Land Vehicle at a Disorderly Intersection via DRQN. SENSORS 2022; 22:s22020636. [PMID: 35062596 PMCID: PMC8780178 DOI: 10.3390/s22020636] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 12/31/2021] [Accepted: 01/10/2022] [Indexed: 12/02/2022]
Abstract
An intelligent land vehicle utilizes onboard sensors to acquire observed states at a disorderly intersection. However, sensor noise means the environment is only partially observable, which can easily cause decision failures. A collision relationship-based driving behavior decision-making method via deep recurrent Q network (CR-DRQN) is proposed for intelligent land vehicles. First, the collision relationship between the intelligent land vehicle and the surrounding vehicles is designed as the input. The collision relationship is extracted from the observed states despite the sensor noise. This avoids a dimension explosion in CR-DRQN and speeds up network training. Then, DRQN is utilized to attenuate the impact of the input noise and achieve driving behavior decision-making. Finally, comparative experiments are conducted to verify the effectiveness of the proposed method. CR-DRQN maintains a high decision success rate at a disorderly intersection with partially observable states. In addition, the proposed method is outstanding in terms of safety, collision risk prediction, and comfort.
Collapse
|
33
|
Present and future challenges in therapeutic designing using computational approaches. COMPUTATIONAL APPROACHES FOR NOVEL THERAPEUTIC AND DIAGNOSTIC DESIGNING TO MITIGATE SARS-COV-2 INFECTION 2022. [PMCID: PMC9300749 DOI: 10.1016/b978-0-323-91172-6.00020-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Currently, various computational methods are being used for the purpose of therapeutic design. The advent of the Coronavirus disease-2019 (COVID-19) pandemic has created many problems, due to which the development of effective treatment options is urgently needed. Computational intelligence is used in the control, prevention, prediction, diagnosis, and treatment of the disease. Several important drug targets have been identified in severe acute respiratory syndrome-Coronavirus-2 using in silico methods. Computer-aided drug design includes a variety of theoretical and computational approaches that are part of modern drug discovery. Advances in machine learning methods and their applications speed up the drug discovery process. The exploration of nucleic acid-based therapeutics also plays an important role in healthcare. However, many challenges complicate therapeutic design. Therefore, investigation of the challenges associated with therapeutic design is important, and the present chapter aims to cover various therapeutic design approaches and the challenges associated with them. Moreover, the role of computational strategies in the exploration of potential therapeutics against COVID-19 has been investigated.
Collapse
|
34
|
Tran NK, Albahra S, May L, Waldman S, Crabtree S, Bainbridge S, Rashidi H. Evolving Applications of Artificial Intelligence and Machine Learning in Infectious Diseases Testing. Clin Chem 2021; 68:125-133. [PMID: 34969102 PMCID: PMC9383167 DOI: 10.1093/clinchem/hvab239] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 10/15/2021] [Indexed: 12/31/2022]
Abstract
Background Artificial intelligence (AI) and machine learning (ML) are poised to transform infectious disease testing. Infectious disease testing is a uniquely technologically diverse space in laboratory medicine, where multiple platforms and approaches may be required to support clinical decision-making. Despite advances in laboratory informatics, the vast array of infectious disease data is constrained by human analytical limitations. Machine learning can exploit multiple data streams, including but not limited to laboratory information, and overcome human limitations to provide physicians with predictive and actionable results. As this is a quickly evolving area of computer science, laboratory professionals should become aware of AI/ML applications for infectious disease testing as more platforms become commercially available. Content In this review we: (a) define both AI and ML, (b) provide an overview of common ML approaches used in laboratory medicine, (c) describe the current AI/ML landscape as it relates to infectious disease testing, and (d) discuss the future evolution of AI/ML for infectious disease testing in both laboratory and point-of-care applications. Summary The review provides an important educational overview of AI/ML techniques in the context of infectious disease testing. This includes supervised ML approaches, which are frequently used in laboratory medicine applications involving infectious diseases such as COVID-19, sepsis, hepatitis, malaria, meningitis, Lyme disease, and tuberculosis. We also apply the concept of “data fusion”, describing the future of laboratory testing in which multiple data streams are integrated by AI/ML to provide actionable clinical knowledge.
Collapse
Affiliation(s)
- Nam K Tran
- Department of Pathology and Laboratory Medicine, UC Davis School of Medicine, CA
| | - Samer Albahra
- Department of Pathology and Laboratory Medicine, UC Davis School of Medicine, CA
| | - Larissa May
- Department of Emergency Medicine, UC Davis School of Medicine, CA
| | - Sarah Waldman
- Department of Internal Medicine, Division of Infectious Diseases, UC Davis School of Medicine, CA
| | - Scott Crabtree
- Department of Internal Medicine, Division of Infectious Diseases, UC Davis School of Medicine, CA
| | - Scott Bainbridge
- Department of Pathology and Laboratory Medicine, UC Davis School of Medicine, CA
| | - Hooman Rashidi
- Department of Pathology and Laboratory Medicine, UC Davis School of Medicine, CA
| |
Collapse
|
35
|
A Survey of Localization Methods for Autonomous Vehicles in Highway Scenarios. SENSORS 2021; 22:s22010247. [PMID: 35009790 PMCID: PMC8749843 DOI: 10.3390/s22010247] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 12/17/2021] [Accepted: 12/20/2021] [Indexed: 11/16/2022]
Abstract
In the context of autonomous vehicles on highways, one of the first and most important tasks is to localize the vehicle on the road. For this purpose, the vehicle needs to be able to take into account the information from several sensors and fuse them with data coming from road maps. The localization problem on highways can be distilled into three main components. The first one consists of inferring on which road the vehicle is currently traveling. Indeed, Global Navigation Satellite Systems are not precise enough to deduce this information by themselves, and thus a filtering step is needed. The second component consists of estimating the vehicle’s position in its lane. Finally, the third and last one aims at assessing on which lane the vehicle is currently driving. These two last components are mandatory for safe driving as actions such as overtaking a vehicle require precise information about the current localization of the vehicle. In this survey, we introduce a taxonomy of the localization methods for autonomous vehicles in highway scenarios. We present each main component of the localization process, and discuss the advantages and drawbacks of the associated state-of-the-art methods.
Collapse
|
36
|
Brodzicki A, Piekarski M, Jaworek-Korjakowska J. The Whale Optimization Algorithm Approach for Deep Neural Networks. SENSORS 2021; 21:s21238003. [PMID: 34884004 PMCID: PMC8659805 DOI: 10.3390/s21238003] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Revised: 11/26/2021] [Accepted: 11/28/2021] [Indexed: 11/22/2022]
Abstract
One of the biggest challenges in the field of deep learning is the parameter selection and optimization process. In recent years, different algorithms, including bio-inspired solutions, have been proposed to solve this problem; however, many difficulties remain, including local minima, saddle points, and vanishing gradients. In this paper, we introduce the Whale Optimisation Algorithm (WOA), based on the swarm foraging behavior of humpback whales, to optimise neural network hyperparameters. We wish to stress that, to the best of our knowledge, this is the first attempt to use the Whale Optimisation Algorithm for hyperparameter optimisation. After a detailed description of the WOA algorithm, we formulate and explain its application in deep learning, present the implementation, and compare the proposed algorithm with other well-known algorithms, including the widely used Grid and Random Search methods. Additionally, we have added a third-dimension feature analysis to the original WOA algorithm to utilize a 3D search space (3D-WOA). Simulations show that the proposed algorithm can be successfully used for hyperparameter optimization, achieving accuracies of 89.85% and 80.60% for the Fashion MNIST and Reuters datasets, respectively.
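For readers unfamiliar with WOA, its core update rules (encircling the best solution, searching around a random whale, and the spiral bubble-net move) can be sketched compactly. This is a generic continuous-minimization illustration under assumed scalar bounds, not the authors' 3D-WOA variant or their hyperparameter encoding:

```python
import numpy as np

def woa_minimize(f, dim=2, n_whales=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal Whale Optimisation Algorithm: whales encircle the current
    best, search around a random whale, or spiral toward the best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                  # decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):          # exploitation: encircle the best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                              # exploration: follow a random whale
                    j = rng.integers(n_whales)
                    X[i] = X[j] - A * np.abs(C * X[j] - X[i])
            else:                                  # spiral (bubble-net) update
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

def sphere(x):
    return float(np.sum(x ** 2))

best = woa_minimize(sphere)
```

On a smooth test function such as the sphere, the shrinking coefficient a drives the swarm from exploration toward exploitation around the incumbent best.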
Collapse
|
37
|
Knowledge-Based Approach for the Perception Enhancement of a Vehicle. JOURNAL OF SENSOR AND ACTUATOR NETWORKS 2021. [DOI: 10.3390/jsan10040066] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
An autonomous vehicle relies on sensors in order to perceive its surroundings. However, there are multiple causes that would hinder a sensor’s proper functioning, such as bad weather or lighting conditions. Studies have shown that rainfall and fog lead to a reduced visibility, which is one of the main causes of accidents. This work proposes the use of a drone in order to enhance the vehicle’s perception, making use of both embedded sensors and its advantageous 3D positioning. The environment perception and vehicle/Unmanned Aerial Vehicle (UAV) interactions are managed by a knowledge base in the form of an ontology, and logical rules are used in order to detect and infer the environmental context and UAV management. The model was tested and validated in a simulation made on Unity.
Collapse
|
38
|
Oladele DA, Markus ED, Abu-Mahfouz AM. Adaptability of Assistive Mobility Devices and the Role of the Internet of Medical Things: Comprehensive Review. JMIR Rehabil Assist Technol 2021; 8:e29610. [PMID: 34779786 PMCID: PMC8663709 DOI: 10.2196/29610] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 06/29/2021] [Accepted: 09/12/2021] [Indexed: 01/22/2023] Open
Abstract
Background With the projected upsurge in the percentage of people with some form of disability, there has been a significant increase in the need for assistive mobility devices. However, for mobility aids to be effective, such devices should be adapted to the user’s needs. This can be achieved by improving the confidence of the acquired information (interaction between the user, the environment, and the device) following design specifications. Therefore, there is a need for a literature review on the adaptability of assistive mobility devices. Objective In this study, we aim to review the adaptability of assistive mobility devices and the role of the internet of medical things with respect to the information acquired by these devices. We review internet-enabled assistive mobility technologies and non–internet of things (IoT) assistive mobility devices. This review provides awareness of the status of adaptive mobility technology and serves as a reference source for health care professionals and researchers. Methods We performed a literature review search on the following databases of academic references and journals: Google Scholar, ScienceDirect, Institute of Electrical and Electronics Engineers, Springer, and websites of assistive mobility and foundations presenting studies on assistive mobility found through a generic Google search (including the World Health Organization website). The following keywords were used: assistive mobility OR assistive robots, assistive mobility devices, internet-enabled assistive mobility technologies, IoT Framework OR IoT Architecture AND for Healthcare, assisted navigation OR autonomous navigation, mobility AND aids OR devices, adaptability of assistive technology, adaptive mobility devices, pattern recognition, autonomous navigational systems, human-robot interfaces, motor rehabilitation devices, perception, and ambient assisted living.
Results We identified 13,286 results (excluding titles that were not relevant to this study). Then, through a narrative review, we selected 189 potential studies (189/13,286, 1.42%) from the existing literature on the adaptability of assistive mobility devices and IoT frameworks for assistive mobility and conducted a critical analysis. Of the 189 potential studies, 82 (43.4%) were selected for analysis after meeting the inclusion criteria. On the basis of the type of technologies presented in the reviewed articles, we proposed a categorization of the adaptability of smart assistive mobility devices in terms of their interaction with the user (user system interface), perception techniques, and communication and sensing frameworks. Conclusions We discussed notable limitations of the reviewed literature studies. The findings revealed that an improvement in the adaptation of assistive mobility systems would require a reduction in training time and avoidance of cognitive overload. Furthermore, sensor fusion and classification accuracy are critical for achieving real-world testing requirements. Finally, the trade-off between cost and performance should be considered in the commercialization of these devices.
Collapse
Affiliation(s)
- Daniel Ayo Oladele
- Department of Electrical, Electronic and Computer Engineering, Central University of Technology, Bloemfontein, South Africa
| | - Elisha Didam Markus
- Department of Electrical, Electronic and Computer Engineering, Central University of Technology, Bloemfontein, South Africa
| | | |
Collapse
|
39
|
Caltagirone L, Bellone M, Svensson L, Wahde M, Sell R. Lidar-Camera Semi-Supervised Learning for Semantic Segmentation. SENSORS (BASEL, SWITZERLAND) 2021; 21:4813. [PMID: 34300551 PMCID: PMC8309822 DOI: 10.3390/s21144813] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 06/24/2021] [Accepted: 06/26/2021] [Indexed: 11/24/2022]
Abstract
In this work, we investigated two issues: (1) How the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) How fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. A comparative study was carried out by providing an experimental evaluation on networks trained in different setups using various scenarios from sunny days to rainy night scenes. The networks were tested for challenging, and less common, scenarios where cameras or lidars individually would not provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios using less data annotations.
Collapse
Affiliation(s)
- Luca Caltagirone
- Applied Artificial Intelligence Research Group, Department of Mechanics and Maritime Sciences, Chalmers University of Technology, 412 58 Gothenburg, Sweden; (L.C.); (M.W.)
| | - Mauro Bellone
- Smart City Center of Excellence, Tallinn University of Technology, 12616 Tallinn, Estonia
| | - Lennart Svensson
- Department of Electrical Engineering, Chalmers University of Technology, 412 58 Gothenburg, Sweden;
| | - Mattias Wahde
- Applied Artificial Intelligence Research Group, Department of Mechanics and Maritime Sciences, Chalmers University of Technology, 412 58 Gothenburg, Sweden; (L.C.); (M.W.)
| | - Raivo Sell
- Department of Mechanical and Industrial Engineering, Tallinn University of Technology, 12616 Tallinn, Estonia;
| |
Collapse
|
40
|
Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues. ARRAY 2021. [DOI: 10.1016/j.array.2021.100057] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
|
41
|
Ravikumar S, Kavitha D. CNN‐OHGS: CNN‐oppositional‐based Henry gas solubility optimization model for autonomous vehicle control system. J FIELD ROBOT 2021. [DOI: 10.1002/rob.22020] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- S. Ravikumar
- Department of Information Technology SRM Valliammai Engineering College Chengalpattu Tamil Nadu India
| | - D. Kavitha
- Department of Computer Science and Engineering SRM Valliammai Engineering College Chengalpattu Tamil Nadu India
| |
Collapse
|
42
|
Carmona J, Guindel C, Garcia F, de la Escalera A. eHMI: Review and Guidelines for Deployment on Autonomous Vehicles. SENSORS 2021; 21:s21092912. [PMID: 33919209 PMCID: PMC8122490 DOI: 10.3390/s21092912] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 04/15/2021] [Accepted: 04/19/2021] [Indexed: 11/16/2022]
Abstract
Human-machine interaction is an active area of research due to the rapid development of autonomous systems and the need for communication. This review provides further insight into the specific issue of the information flow between pedestrians and automated vehicles by evaluating recent advances in external human-machine interfaces (eHMI), which enable the transmission of state and intent information from the vehicle to the rest of the traffic participants. Recent developments will be explored and studies analyzing their effectiveness based on pedestrian feedback data will be presented and contextualized. As a result, we aim to draw a broad perspective on the current status and recent techniques for eHMI and some guidelines that will encourage future research and development of these systems.
Collapse
|
43
|
A Synergy of Innovative Technologies towards Implementing an Autonomous DIY Electric Vehicle for Harvester-Assisting Purposes. MACHINES 2021. [DOI: 10.3390/machines9040082] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
The boom in the electronics industry has made a variety of credit card-sized computer systems and plenty of accompanying sensing and acting elements widely available, at continuously diminishing cost and size levels. The benefits of this situation for agriculture are not left unexploited, and thus more accurate, efficient and environmentally friendly systems are entering the scene. In this context, there is an increasing interest in affordable, small-scale agricultural robots. A key factor for success is the balanced selection of innovative hardware and software components among the plethora available. This work describes the steps for designing, implementing and testing a small autonomous electric vehicle, able to follow the farmer during harvesting activities and to carry the fruits/vegetables from the plant area to the truck location. Quite inexpensive GPS and IMU units, assisted by hardware-accelerated machine vision, speech recognition and networking techniques, can assure the fluent operation of a prototype vehicle exhibiting elementary automatic control functionality. The whole approach also highlights the challenges of achieving a truly working solution and provides directions for future exploitation and improvements.
Collapse
|
44
|
Ciria A, Schillaci G, Pezzulo G, Hafner VV, Lara B. Predictive Processing in Cognitive Robotics: A Review. Neural Comput 2021; 33:1402-1432. [PMID: 34496394 DOI: 10.1162/neco_a_01383] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Accepted: 12/31/2020] [Indexed: 11/04/2022]
Abstract
Predictive processing has become an influential framework in cognitive sciences. This framework turns the traditional view of perception upside down, claiming that the main flow of information processing is realized in a top-down, hierarchical manner. Furthermore, it aims at unifying perception, cognition, and action as a single inferential process. However, in the related literature, the predictive processing framework and its associated schemes, such as predictive coding, active inference, perceptual inference, and free-energy principle, tend to be used interchangeably. In the field of cognitive robotics, there is no clear-cut distinction on which schemes have been implemented and under which assumptions. In this letter, working definitions are set with the main aim of analyzing the state of the art in cognitive robotics research working under the predictive processing framework as well as some related nonrobotic models. The analysis suggests that, first, research in both cognitive robotics implementations and nonrobotic models needs to be extended to the study of how multiple exteroceptive modalities can be integrated into prediction error minimization schemes. Second, a relevant distinction found here is that cognitive robotics implementations tend to emphasize the learning of a generative model, while in nonrobotics models, it is almost absent. Third, despite the relevance for active inference, few cognitive robotics implementations examine the issues around control and whether it should result from the substitution of inverse models with proprioceptive predictions. Finally, limited attention has been placed on precision weighting and the tracking of prediction error dynamics. These mechanisms should help to explore more complex behaviors and tasks in cognitive robotics research under the predictive processing framework.
Collapse
Affiliation(s)
- Alejandra Ciria
- Facultad de Psicología, Universidad Nacional Autónoma de México, Mexico City, CP 04510, Mexico
| | - Guido Schillaci
- BioRobotics Institute, Scuola Superiore Sant'Anna, 34 56025 Pontedera, Italy
| | - Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, 44 00185 Rome, Italy
| | - Verena V Hafner
- Adaptive Systems Group, Department of Computer Science, Humboldt-Universität zu Berlin, D-12489, Germany
| | - Bruno Lara
- Laboratorio de Robótica Cognitiva, Centro de Investigación en Ciencias, Universidad Autónoma del Estado de Morelos, Cuernavaca CP 62209, Mexico
| |
Collapse
|
45
|
Yang SM, Lin YA. Development of an Improved Rapidly Exploring Random Trees Algorithm for Static Obstacle Avoidance in Autonomous Vehicles. SENSORS (BASEL, SWITZERLAND) 2021; 21:2244. [PMID: 33806992 PMCID: PMC8004750 DOI: 10.3390/s21062244] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 03/15/2021] [Accepted: 03/17/2021] [Indexed: 11/16/2022]
Abstract
A safe path-planning method for obstacle avoidance in autonomous vehicles has been developed. Based on the Rapidly Exploring Random Trees (RRT) algorithm, an improved algorithm integrating path pruning, smoothing, and optimization with geometric collision detection is shown to improve planning efficiency. Path pruning, a prerequisite to path smoothing, is performed to remove the redundant points generated by the random trees for a new path, without colliding with the obstacles. Path smoothing is performed to modify the path so that it becomes continuously differentiable, with curvature implementable by the vehicle. Optimization is performed to select a "near"-optimal path of the shortest distance among the feasible paths for motion efficiency. In the experimental verification, both a pure pursuit steering controller and a proportional-integral speed controller are applied to keep an autonomous vehicle tracking the path planned by the improved RRT algorithm. It is shown that the vehicle can successfully track the path efficiently and reach the destination safely, with an average tracking control deviation of 5.2% of the vehicle width. The path planning is also applied to lane changes, and the average deviation from the lane during and after lane changes remains within 8.3% of the vehicle width.
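The path-pruning step described in this abstract can be sketched as follows. The circular-obstacle model and the helper names (`segment_clears`, `prune_path`) are assumptions made for illustration, not the paper's implementation.

```python
# Illustrative path pruning (assumed circular obstacles): drop redundant
# intermediate waypoints whenever the direct segment between two points
# clears every obstacle.
import math

def segment_clears(p, q, obstacles):
    """True if segment p-q stays outside every (cx, cy, r) circle."""
    (x1, y1), (x2, y2) = p, q
    for cx, cy, r in obstacles:
        dx, dy = x2 - x1, y2 - y1
        L2 = dx * dx + dy * dy
        # Parameter of the closest point on the segment to the circle center.
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((cx - x1) * dx + (cy - y1) * dy) / L2))
        px, py = x1 + t * dx, y1 + t * dy
        if math.hypot(px - cx, py - cy) <= r:
            return False
    return True

def prune_path(path, obstacles):
    """Greedily skip to the farthest waypoint reachable in a straight line."""
    pruned, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_clears(path[i], path[j], obstacles):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```

On an obstacle-free straight run this collapses an RRT path to its two endpoints; with an obstacle in the way, the detour waypoint survives, which is exactly the prerequisite the smoothing step needs.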
Collapse
Affiliation(s)
- S. M. Yang
- Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan City 70101, Taiwan;
| | | |
Collapse
|
46
|
Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. SENSORS 2021; 21:s21062140. [PMID: 33803889 PMCID: PMC8003231 DOI: 10.3390/s21062140] [Citation(s) in RCA: 58] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 03/08/2021] [Accepted: 03/15/2021] [Indexed: 12/26/2022]
Abstract
With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes can be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
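One classical building block underlying the fusion approaches such reviews cover is inverse-variance weighting of independent measurements of the same quantity. This minimal sketch is illustrative only; the sensor values and noise variances are made-up numbers, not from the review.

```python
# Inverse-variance fusion sketch (assumed independent, Gaussian sensor noise):
# each sensor is weighted by the reciprocal of its noise variance, so the
# fused estimate always has lower variance than the best single sensor.
def fuse(measurements):
    """measurements: list of (value, variance) pairs from independent sensors.
    Returns the fused estimate and its reduced variance."""
    weights = [1.0 / var for _, var in measurements]
    fused_var = 1.0 / sum(weights)
    fused_val = fused_var * sum(w * v for w, (v, _) in zip(weights, measurements))
    return fused_val, fused_var

# e.g. a low-noise radar range fused with a noisier camera depth estimate
est, var = fuse([(10.2, 0.04), (9.8, 0.25)])
```

This is the scalar, static special case of the Kalman-filter update that many of the surveyed fusion pipelines generalize to full state vectors over time.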
Collapse
|
47
|
Barzegar V, Laflamme S, Hu C, Dodson J. Multi-Time Resolution Ensemble LSTMs for Enhanced Feature Extraction in High-Rate Time Series. SENSORS (BASEL, SWITZERLAND) 2021; 21:1954. [PMID: 33802233 PMCID: PMC8001144 DOI: 10.3390/s21061954] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 03/03/2021] [Accepted: 03/04/2021] [Indexed: 11/17/2022]
Abstract
Systems experiencing high-rate dynamic events, termed high-rate systems, typically undergo accelerations of amplitudes higher than 100 g-force in less than 10 ms. Examples include adaptive airbag deployment systems, hypersonic vehicles, and active blast mitigation systems. Given their critical functions, accurate and fast modeling tools are necessary for ensuring the target performance. However, the unique characteristics of these systems, which consist of (1) large uncertainties in the external loads, (2) high levels of non-stationarities and heavy disturbances, and (3) unmodeled dynamics generated from changes in system configurations, in combination with the fast-changing environments, limit the applicability of physical modeling tools. In this paper, a deep learning algorithm is used to model high-rate systems and predict their response measurements. It consists of an ensemble of short-sequence long short-term memory (LSTM) cells which are concurrently trained. To empower multi-step-ahead predictions, a multi-rate sampler is designed to individually select the input space of each LSTM cell based on local dynamics extracted using the embedding theorem. The proposed algorithm is validated on experimental data obtained from a high-rate system. Results showed that the use of the multi-rate sampler yields better feature extraction from non-stationary time series compared with a more heuristic method, resulting in significant improvement in step-ahead prediction accuracy and horizon. The lean and efficient architecture of the algorithm results in an average computing time of 25 μs, which is below the maximum prediction horizon, therefore demonstrating the algorithm's promise in real-time high-rate applications.
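The multi-rate input construction described here (delay embedding of the same series at several sampling rates, one per ensemble member) can be sketched as below. The embedding dimension and the set of rates are hypothetical placeholders; the paper selects them from the local dynamics via the embedding theorem.

```python
# Illustrative multi-rate delay embedding (assumed dimension and rates):
# each ensemble member receives the series embedded at a different delay,
# giving it a different temporal resolution of the same signal.
def delay_embed(series, dim, delay):
    """Return delay vectors [x[t], x[t-delay], ..., x[t-(dim-1)*delay]]."""
    start = (dim - 1) * delay
    return [[series[t - k * delay] for k in range(dim)]
            for t in range(start, len(series))]

def multi_rate_inputs(series, dim=3, rates=(1, 2, 4)):
    """One input sequence per ensemble member, each at its own rate."""
    return {r: delay_embed(series, dim, r) for r in rates}

inputs = multi_rate_inputs(list(range(20)))
```

Each entry of `inputs` would feed one short-sequence LSTM cell, so slow members see coarse long-horizon context while fast members see fine local dynamics.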
Collapse
Affiliation(s)
- Vahid Barzegar
- Department of Civil, Construction, and Environmental Engineering, Iowa State University, 813 Bissell Road, Ames, IA 50011, USA;
| | - Simon Laflamme
- Department of Civil, Construction, and Environmental Engineering, Iowa State University, 813 Bissell Road, Ames, IA 50011, USA;
- Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA;
| | - Chao Hu
- Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA;
- Department of Mechanical Engineering, Iowa State University, Ames, IA 50011, USA
| | - Jacob Dodson
- Air Force Research Laboratory, Munitions Directorate, Fuzes Branch, Eglin Air Force Base, FL 32542, USA;
| |
Collapse
|
48
|
Abdu FJ, Zhang Y, Fu M, Li Y, Deng Z. Application of Deep Learning on Millimeter-Wave Radar Signals: A Review. SENSORS 2021; 21:s21061951. [PMID: 33802217 PMCID: PMC7999239 DOI: 10.3390/s21061951] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 02/24/2021] [Accepted: 02/26/2021] [Indexed: 11/16/2022]
Abstract
The progress brought by the deep learning technology over the last decade has inspired many research domains, such as radar signal processing, speech and audio recognition, etc., to apply it to their respective problems. Most of the prominent deep learning models exploit data representations acquired with either Lidar or camera sensors, leaving automotive radars rarely used. This is despite the vital potential of radars in adverse weather conditions, as well as their ability to simultaneously measure an object's range and radial velocity. As radar signals have not been exploited very much so far, there is a lack of available benchmark data. However, recently, there has been a lot of interest in applying radar data as input to various deep learning algorithms, as more datasets are being provided. To this end, this paper presents a survey of various deep learning approaches processing radar signals to accomplish some significant tasks in an autonomous driving application, such as detection and classification. We have itemized the review based on different radar signal representations, as it is one of the critical aspects while using radar data with deep learning models. Furthermore, we give an extensive review of the recent deep learning-based multi-sensor fusion models exploiting radar signals and camera images for object detection tasks. We then provide a summary of the available datasets containing radar data. Finally, we discuss the gaps and important innovations in the reviewed papers and highlight some possible future research prospects.
Collapse
|
49
|
Review on Vehicle Detection Technology for Unmanned Ground Vehicles. SENSORS 2021; 21:s21041354. [PMID: 33672976 PMCID: PMC7918767 DOI: 10.3390/s21041354] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 02/05/2021] [Accepted: 02/10/2021] [Indexed: 11/17/2022]
Abstract
Unmanned ground vehicles (UGVs) have great potential in the application of both civilian and military fields, and have become the focus of research in many countries. Environmental perception technology is the foundation of UGVs, which is of great significance to achieve a safer and more efficient performance. This article firstly introduces commonly used sensors for vehicle detection, lists their application scenarios and compares the strengths and weaknesses of different sensors. Secondly, related works about vehicle detection, one of the most important aspects of environmental perception technology, are reviewed and compared in detail in terms of different sensors. Thirdly, several simulation platforms related to UGVs are presented for facilitating simulation testing of vehicle detection algorithms. In addition, some datasets about UGVs are summarized to achieve the verification of vehicle detection algorithms in practical application. Finally, promising research topics in the future study of vehicle detection technology for UGVs are discussed in detail.
Collapse
|
50
|
Ahangar MN, Ahmed QZ, Khan FA, Hafeez M. A Survey of Autonomous Vehicles: Enabling Communication Technologies and Challenges. SENSORS 2021; 21:s21030706. [PMID: 33494191 PMCID: PMC7864337 DOI: 10.3390/s21030706] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 01/11/2021] [Accepted: 01/18/2021] [Indexed: 11/16/2022]
Abstract
The Department of Transport in the United Kingdom recorded 25,080 motor vehicle fatalities in 2019. This situation stresses the need for an intelligent transport system (ITS) that improves road safety and security by avoiding human errors with the use of autonomous vehicles (AVs). Therefore, this survey discusses the current development of two main components of an ITS: (1) gathering of data about an AV's surroundings using sensors; and (2) enabling vehicular communication technologies. First, the paper discusses various sensors and their role in AVs. Then, various communication technologies for AVs to facilitate vehicle to everything (V2X) communication are discussed. Based on the transmission range, these technologies are grouped into three main categories: long-range, medium-range and short-range. The short-range group presents the development of Bluetooth, ZigBee and ultra-wide band communication for AVs. The medium-range group examines the properties of dedicated short-range communications (DSRC). Finally, the long-range group presents the cellular-vehicle to everything (C-V2X) and 5G-new radio (5G-NR). An important characteristic which differentiates each category and its suitable application is latency. This research presents a comprehensive study of AV technologies and identifies the main advantages, disadvantages, and challenges.
Collapse
Affiliation(s)
- M. Nadeem Ahangar
- School of Computing and Engineering, University of Huddersfield, Huddersfield HD1 3DH, UK; (M.N.A.); (M.H.)
| | - Qasim Z. Ahmed
- School of Computing and Engineering, University of Huddersfield, Huddersfield HD1 3DH, UK; (M.N.A.); (M.H.)
| | - Fahd A. Khan
- School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad 44000, Pakistan;
| | - Maryam Hafeez
- School of Computing and Engineering, University of Huddersfield, Huddersfield HD1 3DH, UK; (M.N.A.); (M.H.)
| |
Collapse
|