1. Li Y, Zhang S, Zhu G, Huang Z, Wang R, Duan X, Wang Z. A CNN-Based Wearable System for Driver Drowsiness Detection. Sensors (Basel). 2023;23(7):3475. [PMID: 37050534] [PMCID: PMC10099375] [DOI: 10.3390/s23073475]
Abstract
Drowsiness poses a serious challenge to road safety, and various in-cabin sensing technologies have been explored to monitor driver alertness. Cameras offer a convenient means of contactless sensing, but they may violate user privacy and require complex algorithms to accommodate user constraints (e.g., sunglasses) and environmental constraints (e.g., lighting conditions). This paper presents a lightweight convolutional neural network that measures eye closure from eye images captured by a wearable glasses prototype, which features a hot-mirror-based design that allows the camera to be installed on the temples. The experimental results showed that the glasses prototype, with the neural network at its core, was highly effective in detecting eye blinks. The blink rate derived from the glasses output was highly consistent with that of an EyeLink eye tracker, an industry gold standard. As eye blink characteristics are sensitive measures of driver drowsiness, the glasses prototype and the lightweight neural network presented in this paper provide a computationally efficient yet viable solution for real-world applications.
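For intuition, a minimal sketch (in PyTorch) of the kind of lightweight eye-closure network and threshold-based blink counting the abstract describes is given below; the layer sizes, the 32×32 input, the 0.5 threshold, and all names are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative only: a small CNN scoring eye closure from grayscale eye crops,
# plus a naive blink counter over the score time series. Hypothetical design.
import torch
import torch.nn as nn

class EyeClosureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),  # closure score in [0, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

def count_blinks(scores, closed_above=0.5):
    """Count open-to-closed transitions in a closure-score sequence."""
    blinks, closed = 0, False
    for s in scores:
        if s > closed_above and not closed:
            blinks, closed = blinks + 1, True
        elif s <= closed_above:
            closed = False
    return blinks

net = EyeClosureNet()
scores = net(torch.rand(10, 1, 32, 32)).squeeze(1).tolist()  # 10 dummy eye crops
print(count_blinks(scores))
```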
2. Pandey NN, Muppalaneni NB. Strabismus free gaze detection system for driver’s using deep learning technique. Progress in Artificial Intelligence. 2023. [DOI: 10.1007/s13748-023-00296-8]
3. A Proactive Recognition System for Detecting Commercial Vehicle Driver’s Distracted Behavior. Sensors (Basel). 2022;22(6):2373. [PMID: 35336546] [PMCID: PMC8955459] [DOI: 10.3390/s22062373]
Abstract
Road traffic accidents involving commercial vehicles have been shown to be a major impediment to steady socioeconomic development and are closely related to driver distraction. However, existing surveillance systems for monitoring and preventing driver distraction still have shortcomings, such as the limited range of behaviors and scenarios they recognize. This study provides a more comprehensive methodological framework that demonstrates the value of enlarging the recognition objects, scenarios, and types handled by existing distracted-behavior recognition systems. Drivers' posture characteristics were first analyzed to provide the basis for the subsequent modeling. Five CNN sub-models were established for the different posture categories to improve recognition efficiency, along with a holistic multi-cascaded CNN framework. To identify the best model, image datasets of commercial vehicle driver postures comprising 117,410 daytime images and 60,480 night images were used for training and testing. The findings demonstrate that both the daytime and night cascaded models outperform their non-cascaded counterparts. Moreover, the night models exhibit lower accuracy but higher speed than their daytime counterparts for both non-cascaded and cascaded models. This study could inform countermeasures to improve driver safety and the design of real-time driver monitoring and warning systems as well as automated driving systems. Future research could combine vehicle state parameters with drivers' microscopic behavior to establish a more comprehensive proactive surveillance system.
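To illustrate the cascaded arrangement, the sketch below routes an image through a first-stage posture-category classifier and then a per-category sub-model; the five-way split mirrors the abstract, but the tiny networks and the routing rule are placeholders, not the paper's framework.

```python
# Hypothetical two-stage cascade: a router CNN picks one of five posture
# categories, and a category-specific sub-model makes the final decision.
import torch
import torch.nn as nn

def tiny_cnn(num_out):
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(8, num_out),
    )

router = tiny_cnn(num_out=5)                                       # stage 1
sub_models = nn.ModuleList(tiny_cnn(num_out=2) for _ in range(5))  # stage 2

def cascade_predict(image):
    category = router(image).argmax(dim=1).item()                # coarse posture
    behavior = sub_models[category](image).argmax(dim=1).item()  # fine decision
    return category, behavior

print(cascade_predict(torch.rand(1, 3, 64, 64)))
```

Routing every frame through one small sub-model rather than one large network is what buys the recognition efficiency the abstract refers to.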
4. Wang Y, Ding X, Yuan G, Fu X. Dual-Cameras-Based Driver's Eye Gaze Tracking System with Non-Linear Gaze Point Refinement. Sensors (Basel). 2022;22(6):2326. [PMID: 35336497] [PMCID: PMC8949346] [DOI: 10.3390/s22062326]
Abstract
The human eye gaze plays a vital role in monitoring people's attention, and various efforts have been made to improve in-vehicle driver gaze tracking systems. Most build a specific gaze estimation model offline from pre-annotated training data. These systems tend to generalize poorly during online gaze prediction because of the estimation bias between the training domain and the deployment domain, which shifts the predicted gaze points away from their correct locations. To solve this problem, a novel driver eye gaze tracking method with non-linear gaze point refinement is proposed for a monitoring system using two cameras; it eliminates the estimation bias and implicitly fine-tunes the gaze points. Supported by a two-stage gaze point clustering algorithm, the non-linear refinement method gradually extracts representative gaze points of the forward and mirror gaze zones and establishes a non-linear gaze point re-mapping relationship. In addition, an Unscented Kalman filter is used to track the driver's continuous state features. Experimental results show that the non-linear gaze point refinement method outperforms several previous gaze calibration and gaze mapping methods and improves gaze estimation accuracy even in cross-subject evaluation. The system can be used to predict the driver's attention.
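The Unscented Kalman filter step can be sketched in a few lines; the constant-velocity state model, the 30 fps rate, and the noise settings below are assumptions for illustration (this shows only the UKF component, not the paper's clustering or re-mapping stages).

```python
# UKF smoothing of a 2-D gaze point under an assumed constant-velocity model.
# Uses the filterpy library (pip install filterpy).
import numpy as np
from filterpy.kalman import MerweScaledSigmaPoints, UnscentedKalmanFilter

dt = 1.0 / 30.0  # assumed camera frame rate

def fx(x, dt):
    """State transition for [px, py, vx, vy] under constant velocity."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    return F @ x

def hx(x):
    """Only the gaze point position is observed."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(4)
ukf.P *= 10.0
ukf.R = np.eye(2) * 5.0   # measurement noise (assumed, pixels^2)
ukf.Q = np.eye(4) * 0.01  # process noise (assumed)

for z in ([100.0, 50.0], [103.0, 51.0], [180.0, 60.0]):
    ukf.predict()
    ukf.update(np.array(z))
    print(ukf.x[:2])  # smoothed gaze point; the outlier jump is damped
```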
5. Yuan G, Wang Y, Yan H, Fu X. Self-calibrated driver gaze estimation via gaze pattern learning. Knowledge-Based Systems. 2022. [DOI: 10.1016/j.knosys.2021.107630]
6. Pose Estimation of Driver’s Head Panning Based on Interpolation and Motion Vectors under a Boosting Framework. Applied Sciences (Basel). 2021;11(24):11600. [DOI: 10.3390/app112411600]
Abstract
Over the last decade, driver distraction has attracted growing attention due to its significance and its high impact on road accidents. Various factors, such as mood disorders, anxiety, nervousness, illness, loud music, and head rotation, contribute significantly to distraction. Many solutions have been proposed to address this problem; however, various aspects of it remain unresolved. The study proposes novel geometric and spatial scale-invariant features under a boosting framework for detecting driver distraction due to head panning. These features are calculated using facial landmark detection algorithms, including the Active Shape Model (ASM) and Boosted Regression with Markov Networks (BoRMaN). The proposed approach is compared with six existing state-of-the-art approaches on four benchmark datasets: the DrivFace, Boston University (BU), FT-UMT, and Pointing'04 datasets. It outperforms the existing approaches, achieving accuracies of 94.43%, 92.08%, 96.63%, and 83.25% on these datasets, respectively.
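As a rough illustration of the feature idea, the sketch below computes scale-invariant ratios from a handful of facial landmarks and feeds them to a boosted classifier; the specific ratios, the synthetic training data, and the use of scikit-learn's AdaBoost are stand-ins, not the paper's exact features or boosting setup.

```python
# Hypothetical geometric features: landmark distances normalised by the
# inter-ocular distance, so the ratios are invariant to face scale.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def geometric_features(landmarks):
    """landmarks: dict of named (x, y) points from any landmark detector."""
    d = lambda a, b: np.linalg.norm(np.array(landmarks[a]) - np.array(landmarks[b]))
    norm = d("left_eye", "right_eye")  # normaliser for scale invariance
    return [d("nose", "left_eye") / norm,
            d("nose", "right_eye") / norm,
            d("nose", "chin") / norm]

# Toy training set: two synthetic clusters standing in for frontal vs panned heads.
rng = np.random.default_rng(0)
X = rng.normal(loc=[[1.0, 1.0, 2.0]] * 50 + [[0.6, 1.5, 2.0]] * 50, scale=0.05)
y = [0] * 50 + [1] * 50  # 0 = frontal, 1 = panned

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
sample = {"left_eye": (100, 100), "right_eye": (160, 100),
          "nose": (115, 140), "chin": (130, 220)}
print(clf.predict([geometric_features(sample)]))  # predicted head-pose class
```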
7. González-Ortega D, Díaz-Pernas FJ, Martínez-Zarzuela M, Antón-Rodríguez M. Comparative Analysis of Kinect-Based and Oculus-Based Gaze Region Estimation Methods in a Driving Simulator. Sensors (Basel). 2020;21(1):26. [PMID: 33374560] [PMCID: PMC7793139] [DOI: 10.3390/s21010026]
Abstract
A driver's gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers' gaze patterns can be related to their characteristics and performance. In this paper, we present two gaze region estimation modules integrated into a driving simulator: one uses the Kinect 3D device and the other the Oculus Rift virtual reality device. The modules detect, in every processed frame of the route, which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two simpler point-based methods that try to capture this relation directly, and two based on classifiers such as an MLP and an SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display, first a big screen and later the Oculus Rift. Overall, the Oculus Rift outperformed the Kinect as hardware for gaze estimation; the best Oculus-based gaze region estimation method achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving performance analysis possible, in addition to the immersion and realism of the virtual reality experience.
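For the classifier-based variants, the mapping from head movement to one of the seven gaze regions reduces to ordinary supervised classification, sketched below with an SVM; the (yaw, pitch) features, region labels, and synthetic data are placeholders, not the paper's setup.

```python
# Hypothetical gaze-region classifier: head pose (yaw, pitch) -> one of 7 regions.
import numpy as np
from sklearn.svm import SVC

REGIONS = ["far left", "left mirror", "ahead", "rear mirror",
           "right mirror", "far right", "dashboard"]

rng = np.random.default_rng(1)
centers = np.array([[-60, 0], [-35, 0], [0, 0], [0, 15],
                    [35, 0], [60, 0], [0, -20]], dtype=float)  # assumed degrees
X = np.vstack([rng.normal(c, 3.0, size=(30, 2)) for c in centers])
y = np.repeat(np.arange(len(REGIONS)), 30)

clf = SVC(kernel="rbf", C=10.0).fit(X, y)
print(REGIONS[int(clf.predict([[-33.0, 1.5]])[0])])  # expected: "left mirror"
```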
Affiliation(s)
- David González-Ortega
- Department of Signal Theory, Communications and Telematics Engineering, Telecommunications Engineering School, University of Valladolid, 47011 Valladolid, Spain; (F.J.D.-P.); (M.M.-Z.); (M.A.-R.)
8. Pupil Localisation and Eye Centre Estimation Using Machine Learning and Computer Vision. Sensors (Basel). 2020;20(13):3785. [PMID: 32640589] [PMCID: PMC7374404] [DOI: 10.3390/s20133785]
Abstract
Various methods have been used to estimate the pupil location within an image or a real-time video frame in many fields. However, these methods perform poorly on low-resolution images and under varying background conditions. We propose a coarse-to-fine pupil localisation method using a composite of machine learning and image processing algorithms. First, a pre-trained facial landmark model extracts the desired eye frames within the input image. Then, we use multi-stage convolution to find the optimal horizontal and vertical coordinates of the pupil within the identified eye frames. For this purpose, we define an adaptive kernel to deal with the varying resolution and size of input images. Furthermore, a dynamic threshold is calculated recursively for reliable identification of the best-matched candidate. We evaluated our method using various statistical and standard metrics, along with a standardised distance metric that we introduce in this study. The proposed method outperforms previous works in accuracy and reliability when benchmarked on multiple standard datasets. The work has diverse artificial intelligence and industrial applications, including human-computer interfaces, emotion recognition, psychological profiling, healthcare, and automated deception detection.
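The coarse stage can be pictured as follows: within a detected eye crop, the pupil is the darkest compact region, so smoothing the row and column intensity sums with a kernel scaled to the crop and taking the minima yields candidate coordinates. The sketch below is a simplified stand-in; the paper's adaptive kernel and recursive dynamic threshold are not reproduced.

```python
# Simplified pupil localisation: 1-D profile smoothing with a size-adapted kernel.
import numpy as np
from scipy.ndimage import uniform_filter1d

def locate_pupil(eye_gray):
    """eye_gray: 2-D grayscale eye crop; returns (row, col) pupil estimate."""
    h, w = eye_gray.shape
    col_profile = eye_gray.sum(axis=0)  # darker columns -> lower sums
    row_profile = eye_gray.sum(axis=1)
    col = int(np.argmin(uniform_filter1d(col_profile, size=max(3, w // 8),
                                         mode="nearest")))
    row = int(np.argmin(uniform_filter1d(row_profile, size=max(3, h // 8),
                                         mode="nearest")))
    return row, col

eye = np.full((40, 64), 200.0)                    # bright background
yy, xx = np.ogrid[:40, :64]
eye[(yy - 20) ** 2 + (xx - 34) ** 2 < 36] = 20.0  # dark pupil at (20, 34)
print(locate_pupil(eye))                          # approximately (20, 34)
```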
9. Wang S, Li J, Yang P, Gao T, Bowers AR, Luo G. Towards Wide Range Tracking of Head Scanning Movement in Driving. International Journal of Pattern Recognition and Artificial Intelligence. 2020;34. [PMID: 34267412] [DOI: 10.1142/s0218001420500330]
Abstract
Gaining environmental awareness through lateral head scanning (yaw rotations) is important for driving safety, especially when approaching intersections. Head scanning movements could therefore be an important behavioral metric for driving safety research and driving risk mitigation systems. Tracking head scanning with a single in-car camera is preferable hardware-wise, but it is very challenging to track the head over a range of almost 180°. In this paper we investigate two state-of-the-art methods: a 50-layer multi-loss deep residual learning method (multi-loss ResNet-50) and an ORB-feature-based simultaneous localization and mapping method (ORB-SLAM). While deep learning methods have been extensively studied for head pose detection, this is the first study in which SLAM has been employed to track head scanning over a very wide range. Our laboratory experiments showed that ORB-SLAM was more accurate than multi-loss ResNet-50, which often failed when many facial features were out of view; ORB-SLAM was able to continue tracking because it does not rely on particular facial features. Testing with real driving videos demonstrated the feasibility of using ORB-SLAM to track large lateral head scans in naturalistic video data.
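A small sketch of the ORB building block only: matching binary ORB features between consecutive frames and reading the median horizontal keypoint shift as a crude proxy for lateral head motion. A full ORB-SLAM pipeline additionally builds and relocalises against a keypoint map, which is not reproduced here; requires opencv-python.

```python
# ORB feature matching between frames; the median horizontal shift of matched
# keypoints serves as a rough lateral-motion signal.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def horizontal_shift(prev_gray, curr_gray):
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None  # not enough texture to track
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    dx = [kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in matches]
    return float(np.median(dx))  # median is robust to outlier matches

# Toy frames: random texture shifted right by 5 pixels.
frame1 = np.random.default_rng(2).integers(0, 255, (240, 320), dtype=np.uint8)
frame2 = np.roll(frame1, 5, axis=1)
print(horizontal_shift(frame1, frame2))  # approximately 5.0
```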
Affiliation(s)
- Shuhang Wang
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Jianfeng Li
- Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 1A1, Canada
- Pengshuai Yang
- Department of Automation, Tsinghua University, Beijing 100084, China
- Tianxiao Gao
- Institute of Digital Media, Peking University, Beijing 100871, China
- Alex R Bowers
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Gang Luo
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
11. Objective assessment of exploratory behaviour in schizophrenia using wireless motion capture. Schizophrenia Research. 2018;195:122-129. [PMID: 28954705] [DOI: 10.1016/j.schres.2017.09.011]
Abstract
Motivation deficits are a prominent feature of schizophrenia and have substantial consequences for functional outcome. The impact of amotivation on exploratory behaviour has not been extensively assessed by entirely objective means. This study evaluated deficits in exploratory behaviour in an open-field setting using wireless motion capture. Twenty-one stable adult outpatients with schizophrenia and twenty matched healthy controls completed the Novelty Exploration Task, in which participants explored a novel environment containing familiar and uncommon objects. Objective motion data were used to index participants' locomotor activity and their tendency for visual and tactile object exploration. Clinical assessments of positive and negative symptoms, apathy, cognition, depression, medication side effects, and community functioning were also administered. Relationships between task performance and clinical measures were evaluated using Spearman correlations, and group differences were evaluated using multivariate analysis of covariance. Although locomotor activity and tactile exploration were similar between the schizophrenia and healthy control groups, schizophrenia participants exhibited reduced visual object exploration (F(2,35) = 3.40, p = 0.045). Further, schizophrenia participants' geometric pattern of locomotion, visual exploration, and tactile exploration correlated with overall negative symptoms (|ρ| = 0.46-0.64, p ≤ 0.039) and apathy (|ρ| = 0.49-0.62, p ≤ 0.028), and both visual and tactile exploration also correlated with community functioning (|ρ| = 0.46-0.48, p ≤ 0.043). The Novelty Exploration Task may be a valuable tool for quantifying exploratory behaviour beyond what is captured by standard clinical instruments and observer ratings. Findings from this initial study suggest that locomotor activity and object interaction tendencies are affected by motivation, and reveal deficits specifically in visual exploration in schizophrenia.
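For readers unfamiliar with the analysis, a minimal example of the Spearman rank correlation used to relate task metrics to clinical ratings is shown below; the numbers are synthetic, not study data.

```python
# Synthetic illustration of the Spearman correlation analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
apathy = rng.uniform(0, 40, size=21)                     # clinical apathy scores
visual_exploration = 50 - apathy + rng.normal(0, 4, 21)  # inversely related metric

rho, p = spearmanr(apathy, visual_exploration)
print(f"rho = {rho:.2f}, p = {p:.3g}")  # strong negative rank correlation
```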
12. Shi C, Li J, Wang Y, Luo G. Exploiting Lightweight Statistical Learning for Event-Based Vision Processing. IEEE Access. 2018;6:19396-19406. [PMID: 29750138] [PMCID: PMC5937990] [DOI: 10.1109/access.2018.2823260]
Abstract
This paper presents a lightweight statistical learning framework potentially suitable for low-cost event-based vision systems, where visual information is captured by a dynamic vision sensor (DVS) and represented as an asynchronous stream of pixel addresses (events) indicating relative intensity changes at those locations. A simple random ferns classifier based on randomly selected patch-based binary features is employed to categorize pixel event flows. Our experimental results demonstrate that, compared to existing event-based processing algorithms such as spiking convolutional neural networks (SCNNs) and state-of-the-art bag-of-events (BoE) statistical algorithms, our framework excels in processing speed (2× faster than the BoE statistical methods and >100× faster in training than previous SCNNs) with an extremely simple online learning process, and achieves state-of-the-art classification accuracy on four popular address-event representation datasets: MNIST-DVS, Poker-DVS, Posture-DVS, and CIFAR10-DVS. Hardware estimation shows that our algorithm is preferable for low-cost embedded system implementations.
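A generic random ferns classifier of the kind described can be written compactly: each fern turns a few random binary pixel-pair tests into an index, and per-class histograms over those indices are combined across ferns. This is textbook ferns code under assumed toy data, not the paper's event-stream pipeline.

```python
# Generic random ferns: semi-naive Bayes over groups of binary features.
import numpy as np

class RandomFerns:
    def __init__(self, n_ferns=20, depth=6, n_classes=2, seed=0):
        self.n_ferns, self.depth, self.n_classes = n_ferns, depth, n_classes
        self.rng = np.random.default_rng(seed)

    def _indices(self, X):
        # Each fern maps `depth` tests (pixel a > pixel b) to one integer.
        bits = X[:, self.pairs[:, :, 0]] > X[:, self.pairs[:, :, 1]]
        return (bits * (1 << np.arange(self.depth))).sum(axis=2)  # (n, ferns)

    def fit(self, X, y):
        self.pairs = self.rng.integers(0, X.shape[1], (self.n_ferns, self.depth, 2))
        self.hist = np.ones((self.n_ferns, 1 << self.depth, self.n_classes))  # +1 smoothing
        idx = self._indices(X)
        for f in range(self.n_ferns):
            np.add.at(self.hist[f], (idx[:, f], y), 1)
        self.hist /= self.hist.sum(axis=1, keepdims=True)  # P(fern index | class)
        return self

    def predict(self, X):
        idx = self._indices(X)
        log_post = np.zeros((X.shape[0], self.n_classes))
        for f in range(self.n_ferns):
            log_post += np.log(self.hist[f, idx[:, f]])
        return log_post.argmax(axis=1)

# Toy patches: two classes distinguished by which half is brighter.
rng = np.random.default_rng(4)
X, y = rng.random((200, 64)), np.tile([0, 1], 100)
X[y == 0, :32] += 1.0
X[y == 1, 32:] += 1.0
model = RandomFerns().fit(X[:150], y[:150])
print((model.predict(X[150:]) == y[150:]).mean())  # held-out accuracy, near 1.0
```

The appeal for embedded hardware is visible in the code itself: training is mere histogram counting, and prediction is table lookups and additions.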
Affiliation(s)
- Cong Shi
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Jiajun Li
- State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100864, China
- Ying Wang
- State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100864, China
- Gang Luo
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
13. Naqvi RA, Arsalan M, Batchuluun G, Yoon HS, Park KR. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor. Sensors (Basel). 2018;18(2):456. [PMID: 29401681] [PMCID: PMC5855991] [DOI: 10.3390/s18020456]
Abstract
A paradigm shift is required to prevent the increasing number of automobile accident deaths that are mostly due to inattentive driver behavior. Knowledge of the gaze region can provide valuable information regarding a driver's point of attention, and accurate, inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents challenges: dizziness due to long drives, extreme lighting variations, reflections from glasses, and occlusions. Past studies on gaze detection in cars have been based chiefly on head movements, but the gaze detection error increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, pupil center corneal reflection (PCCR)-based methods have been considered. However, accurately detecting the pupil center and the corneal reflection center is harder in a car environment due to changing environmental light, reflections on glasses surfaces, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car. To address these issues, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB), and demonstrates greater accuracy than previous gaze classification methods.
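The PCCR principle the paper builds on is easy to state in code: the vector from the corneal glint to the pupil centre rotates with the eye, so it can be mapped to a gaze direction or zone. The toy nearest-zone rule and calibration vectors below are assumptions; the paper replaces this mapping (and the calibration it needs) with a deep network.

```python
# Toy PCCR mapping: glint-to-pupil vector matched to assumed per-zone vectors.
import numpy as np

ZONE_VECTORS = {  # hypothetical calibration: mean PCCR vector per gaze zone
    "left mirror": np.array([-8.0, 0.0]),
    "road ahead":  np.array([0.0, 0.0]),
    "rear mirror": np.array([3.0, -5.0]),
    "speedometer": np.array([0.0, 6.0]),
}

def gaze_zone(pupil_center, glint_center):
    v = np.asarray(pupil_center, float) - np.asarray(glint_center, float)
    return min(ZONE_VECTORS, key=lambda z: np.linalg.norm(v - ZONE_VECTORS[z]))

print(gaze_zone(pupil_center=(120, 80), glint_center=(127, 79)))  # "left mirror"
```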
Affiliation(s)
- Rizwan Ali Naqvi
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Ganbayar Batchuluun
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Hyo Sik Yoon
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
14. Wang Y, Zhao T, Ding X, Peng J, Bian J, Fu X. Learning a gaze estimator with neighbor selection from large-scale synthetic eye images. Knowledge-Based Systems. 2018. [DOI: 10.1016/j.knosys.2017.10.010]
15. Shi Z, Yang Y, Hospedales TM, Xiang T. Weakly-Supervised Image Annotation and Segmentation with Objects and Attributes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017;39:2525-2538. [PMID: 28026753] [DOI: 10.1109/tpami.2016.2645157]
Abstract
We propose to model complex visual scenes using a non-parametric Bayesian model learned from weakly labelled images abundant on media sharing sites such as Flickr. Given weak image-level annotations of objects and attributes without locations or associations between them, our model aims to learn the appearance of object and attribute classes as well as their association on each object instance. Once learned, given an image, our model can be deployed to tackle a number of vision problems in a joint and coherent manner, including recognising objects in the scene (automatic object annotation), describing objects using their attributes (attribute prediction and association), and localising and delineating the objects (object detection and semantic segmentation). This is achieved by developing a novel Weakly Supervised Markov Random Field Stacked Indian Buffet Process (WS-MRF-SIBP) that models objects and attributes as latent factors and explicitly captures their correlations within and across superpixels. Extensive experiments on benchmark datasets demonstrate that our weakly supervised model significantly outperforms weakly supervised alternatives and is often comparable with existing strongly supervised models on a variety of tasks including semantic segmentation, automatic image annotation and retrieval based on object-attribute associations.
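To make the latent-factor prior concrete, below is a minimal sample from the Indian Buffet Process underlying the SIBP component: each image (row) reuses existing latent factors in proportion to their popularity and adds Poisson-many new ones. This illustrates the prior only, not the weakly supervised MRF-coupled model itself.

```python
# Generative sample from the Indian Buffet Process (IBP) prior.
import numpy as np

def sample_ibp(n_images, alpha=2.0, seed=5):
    rng = np.random.default_rng(seed)
    rows, counts = [], []  # counts[k]: how many images use latent factor k
    for n in range(1, n_images + 1):
        row = [int(rng.random() < c / n) for c in counts]  # reuse popular factors
        for k, bit in enumerate(row):
            counts[k] += bit
        for _ in range(rng.poisson(alpha / n)):            # brand-new factors
            row.append(1)
            counts.append(1)
        rows.append(row)
    width = len(counts)
    return np.array([r + [0] * (width - len(r)) for r in rows])

print(sample_ibp(5))  # rows: images; columns: latent objects/attributes
```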
16. Araujo GM, Ribeiro FML, Junior WSS, da Silva EAB, Goldenstein SK. Weak Classifier for Density Estimation in Eye Localization and Tracking. IEEE Transactions on Image Processing. 2017;26:3410-3424. [PMID: 28422660] [DOI: 10.1109/tip.2017.2694226]
Abstract
In this paper, we propose a fast weak classifier that can detect and track eyes in video sequences. The approach relies on a least-squares detector, based on the inner product detector (IPD), that can estimate a probability density distribution for a feature's location, which fits naturally into a Bayesian estimation cycle such as a Kalman or particle filter. As a least-squares sliding-window detector, it tolerates small variations in the desired pattern while maintaining good generalization capability and computational efficiency. We propose two approaches to integrating the IPD with a particle filter tracker. We use the BioID, FERET, LFPW, and COFW public datasets, as well as five manually annotated high-definition video sequences, to quantitatively evaluate the algorithms' performance. The video dataset contains four subjects, different types of backgrounds, blurring due to fast motion, and occlusions. All code and data are available.
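A rough sketch of the detector idea: a linear template learned by least squares whose inner product with every sliding window yields a response map, which can serve as an (unnormalised) likelihood for a particle filter. The patch size, training data, and the plain least-squares fit below are simplified stand-ins for the paper's IPD formulation.

```python
# Least-squares linear template applied by sliding-window inner product
# (i.e., 2-D cross-correlation).
import numpy as np
from scipy.signal import correlate2d

PATCH = 8
rng = np.random.default_rng(6)

def make_patch(positive):
    p = rng.normal(0.5, 0.05, (PATCH, PATCH))
    if positive:
        p[2:6, 2:6] -= 0.4  # "eye" patches carry a dark centre
    return p

X = np.array([make_patch(i < 50).ravel() for i in range(100)])
y = np.array([1.0] * 50 + [0.0] * 50)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # template: inner product -> score
template = w.reshape(PATCH, PATCH)

image = rng.normal(0.5, 0.05, (32, 32))
image[12:16, 20:24] -= 0.4                 # plant an "eye" in the frame
response = correlate2d(image, template, mode="valid")
print(np.unravel_index(response.argmax(), response.shape))  # near (10, 18)
```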
17. Design of a Fatigue Detection System for High-Speed Trains Based on Driver Vigilance Using a Wireless Wearable EEG. Sensors (Basel). 2017;17(3):486. [PMID: 28257073] [PMCID: PMC5375772] [DOI: 10.3390/s17030486]
Abstract
Driver vigilance is important for railway safety, despite not being included in the safety management system (SMS) for high-speed train safety. In this paper, a novel fatigue detection system for high-speed train safety, based on monitoring train driver vigilance with a wireless wearable electroencephalograph (EEG), is presented. The system is designed to detect whether the driver is drowsy. It consists of three main parts: (1) wireless wearable EEG collection; (2) train driver vigilance detection; and (3) an early-warning device for the train driver. In the first part, an 8-channel wireless wearable brain-computer interface (BCI) device comfortably acquires the locomotive driver's EEG signal under high-speed train-driving conditions; the recorded data are transmitted to a personal computer (PC) via Bluetooth. In the second part, a support vector machine (SVM) classification algorithm determines the vigilance level from the EEG power spectral density (PSD) extracted with the Fast Fourier transform (FFT). Finally, an early-warning device is activated if fatigue is detected. The simulation and test results demonstrate the feasibility of the proposed fatigue detection system for high-speed train safety.
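A condensed sketch of the detection chain: band-power features from the EEG power spectral density feed an SVM vigilance classifier. The channel count, sampling rate, bands, and synthetic signals below are placeholders for illustration, not the paper's configuration.

```python
# PSD band-power features (Welch/FFT) -> SVM vigilance classification.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """epoch: (n_channels, n_samples) EEG segment -> flat band-power features."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    return np.concatenate([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                           for lo, hi in BANDS.values()])

rng = np.random.default_rng(7)
def fake_epoch(drowsy):
    t = np.arange(2 * FS) / FS
    hz = 6.0 if drowsy else 20.0  # drowsiness shifts power toward slow bands
    return (np.sin(2 * np.pi * hz * t)[None, :].repeat(8, axis=0)
            + rng.normal(0, 0.3, (8, 2 * FS)))

X = np.array([band_powers(fake_epoch(i % 2 == 1)) for i in range(60)])
y = np.array([i % 2 for i in range(60)])
clf = SVC(kernel="rbf").fit(X[:40], y[:40])
print((clf.predict(X[40:]) == y[40:]).mean())  # held-out accuracy
```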
18. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation. Sensors (Basel). 2016;16(2):242. [PMID: 26907278] [PMCID: PMC4801618] [DOI: 10.3390/s16020242]
Abstract
In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps: wireless wearable EEG collection, driver vigilance detection, and a vehicle speed control strategy. First, a homemade, low-cost, comfortable, wearable brain-computer interface (BCI) system with eight channels is designed to collect the driver's EEG signal. Second, wavelet de-noising and down-sampling algorithms are used to enhance the quality of the EEG data, and the Fast Fourier Transform (FFT) is adopted to extract the EEG power spectral density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is introduced, for the first time, on the PSD features to estimate the driver's vigilance level. Finally, a novel vehicle speed control strategy, which adjusts the electronic throttle opening and applies automatic braking after driver fatigue is detected by the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.
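The sparse-representation step can be sketched as follows: a test PSD feature vector is coded over a dictionary of training vectors with orthogonal matching pursuit, and the class whose atoms yield the smallest reconstruction residual wins. The KSVD dictionary refinement is omitted here (raw training vectors serve as the dictionary), and the toy features are stand-ins for real EEG PSDs.

```python
# Sparse-representation classification via OMP over a class-labelled dictionary.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(8)
DIM, PER_CLASS = 24, 20
means = [np.zeros(DIM), np.linspace(0, 2, DIM)]  # class 0: alert, class 1: fatigued
D = np.hstack([rng.normal(m[:, None], 0.3, (DIM, PER_CLASS)) for m in means])
labels = np.repeat([0, 1], PER_CLASS)            # class of each dictionary atom
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms

def src_classify(x, n_nonzero=5):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    code = omp.fit(D, x).coef_                   # sparse code over all atoms
    residuals = [np.linalg.norm(x - D[:, labels == c] @ code[labels == c])
                 for c in (0, 1)]                # per-class reconstruction error
    return int(np.argmin(residuals))

test = rng.normal(means[1], 0.3)                 # a "fatigued" sample
print(src_classify(test))                        # expected: 1
```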