1. Riexinger LE, Gabler HC. Expansion of NASS/CDS for characterizing run-off-road crashes. Traffic Injury Prevention 2020;21:S118-S122. PMID: 32804541. DOI: 10.1080/15389588.2020.1798942.
Abstract
OBJECTIVE: Run-off-road (ROR) crashes account for one-third of all annual crash fatalities in the US. The National Automotive Sampling System Crashworthiness Data System (NASS/CDS) is a dataset that may be used to understand the nature of ROR crashes. Despite the wealth of coded data in NASS/CDS, the dataset lacks coded information about the roadside environment and the off-road trajectory of the vehicle. This information would be useful for determining lane departure warning (LDW) benefits, residual safety problems, performance of current safety hardware, lane marking inventory, LDW test procedure development, radius-of-curvature characterization, and the effectiveness of electronic stability control (ESC). The purpose of this paper is to demonstrate a methodology for expanding the data available in NASS/CDS to form and validate a specialized road departure database.
METHODS: Observed, measured, and reconstructed data elements were extracted from NASS/CDS and compiled into the National Cooperative Highway Research Program (NCHRP) 17-43 database. Observed variables were primarily coded from the scene photographs and included information such as the lane markings and the geometry of the roadside cross-section. Additional variables were measured from the scaled scene diagrams, including the path of the vehicle, road dimensions, and roadside object positions. The vehicle impact speed and departure speed were reconstructed using the WinSMASH delta-v, roadside object characteristics, and vehicle path. Two studies were conducted to demonstrate the usefulness of the NCHRP 17-43 database in evaluating both vehicle-based and infrastructure-based ROR countermeasures.
RESULTS: The resulting NCHRP 17-43 database includes 1,581 NASS/CDS cases representing 510,154 ROR crashes. Analysis of the database found that drivers who crashed following an overcorrection were younger than drivers who did not overcorrect. This may indicate that inexperienced drivers are more likely to overcorrect when departing the roadway. The 85th percentile impact severity of ROR crashes occurring on roads with a speed limit greater than 65 mph is higher than the practical worst-case test conditions for roadside barriers.
CONCLUSIONS: The NCHRP 17-43 database contains information extracted from NASS/CDS cases to better understand the nature of ROR crashes, driver behavior in these crashes, and the potential benefits of both vehicle-based and infrastructure-based ROR countermeasures.
Affiliation(s)
- Luke E Riexinger, Biomedical Engineering and Mechanics, Virginia Tech, Blacksburg, VA, USA
- Hampton C Gabler, Biomedical Engineering and Mechanics, Virginia Tech, Blacksburg, VA, USA
2. Wang L, Zhong H, Ma W, Abdel-Aty M, Park J. How many crashes can connected vehicle and automated vehicle technologies prevent: A meta-analysis. Accident Analysis & Prevention 2020;136:105299. PMID: 31945594. DOI: 10.1016/j.aap.2019.105299.
Abstract
Connected and automated vehicle (CAV) technologies have made great progress. It is commonly accepted that CV or AV technologies reduce human error in driving and benefit traffic safety. However, no consistent conclusion has been reached on how many crashes CV or AV technologies can prevent. To answer this question quantitatively, this study used meta-analysis to evaluate the safety effectiveness of nine common and important CV or AV technologies and tested their safety effectiveness for six countries. First, 73 studies on the safety impact of CV or AV technologies were selected from 826 CAV-related papers and reports. Second, the safety impacts of these technologies were compared with regard to assistance type and triggering time: AV technologies play a more significant role than CV technologies, and technologies that trigger closer to the time of collision have greater safety effectiveness. Third, in the meta-analysis, a random-effects model was used to estimate safety effectiveness, and funnel plots with the trim-and-fill method were used to evaluate and adjust for publication bias, so as to evaluate the safety effectiveness of each technology objectively. Then, using the crash data of the six countries, the comprehensive safety effectiveness of the above technologies was calculated. The results show that if all of the technologies were implemented in the six countries, the average number of crashes could be reduced by 3.40 million, with India seeing the largest reduction (54.24%). Additionally, different countries should adopt different development strategies: for example, the USA should prioritize lane change warning and intersection warning, while the UK should prioritize applications related to intersection warning and rear-end warning. Overall, this study provides a comprehensive and quantitative understanding of the safety effectiveness of CV and AV technologies, which can help governments, vehicle manufacturers, and agencies decide the development priority of these technologies.
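The random-effects pooling step this abstract describes can be sketched with the DerSimonian-Laird estimator; the per-study effect sizes and variances below are hypothetical illustrations, not values from the paper.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of log-odds-ratio effect sizes."""
    w = [1.0 / v for v in variances]           # inverse-variance (fixed-effect) weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q quantifies between-study heterogeneity
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical per-study log odds ratios for one CV/AV technology
effects = [-0.35, -0.20, -0.50, -0.10]
variances = [0.04, 0.02, 0.09, 0.03]
pooled, se = random_effects_pool(effects, variances)
print(f"pooled log OR = {pooled:.3f} +/- {1.96 * se:.3f}")
print(f"estimated crash reduction ~ {(1 - math.exp(pooled)) * 100:.1f}%")
```

A pooled log odds ratio below zero corresponds to a crash reduction; the trim-and-fill adjustment for publication bias mentioned in the abstract would modify the study set before this pooling step.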
Affiliation(s)
- Ling Wang, Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, 4800 Cao'an Road, Shanghai, PR China
- Hao Zhong, Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, 4800 Cao'an Road, Shanghai, PR China
- Wanjing Ma, Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, 4800 Cao'an Road, Shanghai, PR China
- Mohamed Abdel-Aty, Department of Civil, Environmental and Construction Engineering, University of Central Florida, Orlando, FL 32816, USA
- Juneyoung Park, Department of Transportation and Logistics Engineering, Hanyang University, South Korea
3. Assessing the Socioeconomic Impacts of Intelligent Connected Vehicles in China: A Cost–Benefit Analysis. Sustainability 2019. DOI: 10.3390/su11123273.
Abstract
The deployment of intelligent connected vehicles (ICVs) is regarded as a significant solution to improve road safety, transportation management, and energy efficiency. This study assessed the safety, traffic, environmental, and industrial economic benefits of ICV deployment in China under different scenarios. A bottom-up model was established to deal with these impacts within a unified framework, based on the existing theories and literature of ICVs’ cost–benefit analysis, as well as China’s most recent policies and statistics. The results indicate that the total benefits may reach 13.25 to 24.02 trillion renminbi (RMB) in 2050, while a cumulative benefit–cost ratio of 1.15 to 3.06 suggests high cost-effectiveness. However, if the government and industry only focus on their own interests, the break-even point may be delayed by several years. Hence, an effective business model is necessary to enhance public–private cooperation in ICV implementation. Meanwhile, the savings of travel time costs and fleet labor costs play an important part in all socioeconomic impacts. Therefore, the future design of ICVs should pay more attention to the utilization of in-vehicle time and the real substitution for human drivers.
4. McDonald AD, Alambeigi H, Engström J, Markkula G, Vogelpohl T, Dunne J, Yuma N. Toward Computational Simulations of Behavior During Automated Driving Takeovers: A Review of the Empirical and Modeling Literatures. Human Factors 2019;61:642-688. PMID: 30830804. DOI: 10.1177/0018720819829572.
Abstract
OBJECTIVE: This article provides a review of empirical studies of automated vehicle takeovers and driver modeling to identify influential factors and their impacts on takeover performance, and to suggest driver models that can capture them.
BACKGROUND: Significant safety issues remain in automated-to-manual transitions of vehicle control. Developing models and computer simulations of automated vehicle control transitions may help designers mitigate these issues, but only if accurate models are used. Selecting accurate models requires estimating the impact of factors that influence takeovers.
METHOD: Articles describing automated vehicle takeovers or driver modeling research were identified through a systematic approach. Inclusion criteria were used to identify relevant studies and models of braking, steering, and the complete takeover process for further review.
RESULTS: The reviewed studies on automated vehicle takeovers identified several factors that significantly influence takeover time and post-takeover control. Drivers were found to respond similarly between manual emergencies and automated takeovers, albeit with a delay. The findings suggest that existing braking and steering models for manual driving may be applicable to modeling automated vehicle takeovers.
CONCLUSION: Time budget, repeated exposure to takeovers, silent failures, and handheld secondary tasks significantly influence takeover time. These factors, in addition to takeover request modality, driving environment, non-handheld secondary tasks, level of automation, trust, fatigue, and alcohol, significantly impact post-takeover control. Models that capture these effects through evidence accumulation were identified as promising directions for future work.
APPLICATION: Stakeholders interested in driver behavior during automated vehicle takeovers may use this article to identify starting points for their work.
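The evidence-accumulation models the review identifies as promising can be illustrated with a minimal drift-diffusion simulation: noisy evidence builds until it crosses a decision threshold, producing a response time. The drift, threshold, and noise values here are illustrative assumptions, not parameters from any reviewed model.

```python
import math
import random

def simulate_takeover_time(drift=1.2, threshold=1.0, noise=0.8,
                           dt=0.01, t_max=10.0, rng=None):
    """Accumulate noisy evidence until it crosses a decision threshold.

    Returns the simulated takeover response time in seconds (t_max on timeout).
    """
    rng = rng or random.Random()
    evidence, t = 0.0, 0.0
    while t < t_max:
        # Euler-Maruyama step: deterministic drift plus Gaussian diffusion noise
        evidence += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if evidence >= threshold:
            return t
    return t_max

rng = random.Random(42)
times = sorted(simulate_takeover_time(rng=rng) for _ in range(1000))
print(f"median simulated takeover time: {times[500]:.2f} s")
```

Factors such as time budget or secondary-task load would enter such a model by shifting the drift rate or threshold, which is what makes this family of models attractive for capturing the effects the review catalogs.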
5. Riexinger LE, Sherony R, Gabler HC. Residual road departure crashes after full deployment of LDW and LDP systems. Traffic Injury Prevention 2019;20:S177-S181. PMID: 31381442. DOI: 10.1080/15389588.2019.1603375.
Abstract
Objective: Road departures are one of the most severe crash modes in the United States. To help reduce this risk, vehicles are being introduced in the United States with lane departure warning (LDW) systems, which warn the driver of a departure, and lane departure prevention (LDP) systems, which assist the driver in steering back to the roadway. Previous studies have estimated that LDW/LDP systems may prevent one-third of drift-out-of-lane road departure crashes. This study investigates the crashes that were not prevented in order to set research priorities for next-generation road departure prevention systems.
Methods: The event data recorder (EDR) data from 128 road departure crashes in the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) from 2011 to 2015 were mapped onto the vehicle trajectory and simulated with LDW/LDP to assess the potential for crash avoidance.
Results and Conclusions: The model predicted that 63-83% of single-vehicle road departure crashes may not be prevented by an LDW system and 49% may not be prevented by an LDP system. For LDP systems, which were assumed to have zero latency, no crashes were avoided if the time to collision (TTC) from lane crossing to impact was less than 0.55 s. Obstacles such as guardrails and traffic barriers, which tend to be very close to the road, were more common among the remaining crashes. The study shows that LDW/LDP systems are limited by two factors: driver reaction time and TTC to the roadside object. Thus, earlier driver response and longer TTC may help in these situations.
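The avoidance logic described above reduces to a screen on time-to-collision versus the time the countermeasure needs to act. The reaction-time and maneuver-time values below are assumptions for illustration, not the study's calibrated model; only the 0.55 s floor for a zero-latency LDP system comes from the abstract.

```python
def crash_avoided(ttc_s, driver_reaction_s=1.5, system_latency_s=0.0,
                  maneuver_s=0.55):
    """Rough screen: the departure is recoverable only if warning latency,
    driver response (zero for an LDP system that steers itself), and the
    corrective maneuver all fit inside the time from lane crossing to impact."""
    return ttc_s > system_latency_s + driver_reaction_s + maneuver_s

# LDP modeled with zero latency and no driver reaction, consistent with the
# abstract's finding that no crash with TTC < 0.55 s was avoidable.
print(crash_avoided(0.5, driver_reaction_s=0.0))   # False: object too close
print(crash_avoided(2.0, driver_reaction_s=0.0))   # True
# LDW with an assumed 1.5 s driver reaction needs a longer TTC:
print(crash_avoided(2.0))                          # False
```

This makes the abstract's two limiting factors concrete: shrinking the driver-reaction term (LDP vs. LDW) or increasing TTC (lateral clearance to the roadside object) are the only levers in the inequality.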
Affiliation(s)
- Luke E Riexinger, Biomedical Engineering and Mechanics, Virginia Tech, Blacksburg, Virginia
- Rini Sherony, Collaborative Safety Research Center, TEMA, Ann Arbor, Michigan
- Hampton C Gabler, Center for Injury Biomechanics, Virginia Tech, Blacksburg, Virginia
6. Kusano KD, Chen R, Montgomery J, Gabler HC. Population distributions of time to collision at brake application during car following from naturalistic driving data. Journal of Safety Research 2015;54:95-104. PMID: 26403908. DOI: 10.1016/j.jsr.2015.06.011.
Abstract
PROBLEM: Forward collision warning (FCW) systems are designed to mitigate the effects of rear-end collisions. Driver acceptance of these systems is crucial to their success, as perceived "nuisance" alarms may cause drivers to disable the systems. To make FCW thresholds customizable, system designers need to quantify the variation in braking behavior across the driving population. The objective of this study was to quantify the time to collision (TTC) at which drivers applied the brakes during car-following scenarios in a large-scale naturalistic driving study (NDS).
METHODS: Because of the large amount of data generated by an NDS, an automated algorithm was developed to identify lead vehicles using radar data recorded as part of the study. Using the search algorithm, all trips from 64 drivers in the 100-Car NDS were analyzed. A comparison of the algorithm against 7,135 brake applications, for which the presence of a lead vehicle was manually identified, found that the algorithm agreed with the human review 90.6% of the time.
RESULTS: This study examined 72,123 trips that resulted in 2.6 million brake applications. Population distributions of the minimum, 1st, and 10th percentile TTC were computed for each driver in speed ranges between 3 and 60 mph in 10 mph increments. As speed increased, so did the minimum TTC experienced by drivers, as well as the variance in TTC. Younger drivers (18-30) had lower TTC at brake application than older drivers (30-51+), especially at speeds between 40 and 60 mph.
DISCUSSION: This is one of the first studies to use large-scale NDS data to quantify braking behavior during car following. The results can be used to design and evaluate FCW systems and to calibrate traffic simulation models.
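The TTC-at-braking metric used above is computed from the radar range and closing speed at the instant the brake is applied. The sample values below are synthetic, not 100-Car records.

```python
def ttc_at_braking(range_m, range_rate_mps):
    """Time to collision at brake application: range divided by closing speed.

    By radar convention, range rate is negative when the gap is closing;
    TTC is undefined (None) when the gap is steady or opening.
    """
    closing = -range_rate_mps
    if closing <= 0:
        return None
    return range_m / closing

# Synthetic brake-application samples: (range to lead vehicle in m, range rate in m/s)
samples = [(30.0, -10.0), (45.0, -5.0), (25.0, 2.0)]
ttcs = [ttc_at_braking(r, rr) for r, rr in samples]
print(ttcs)  # [3.0, 9.0, None] -- the last gap is opening, so no TTC
```

An automated search over millions of such samples, as in the study, then reduces to filtering for valid lead-vehicle tracks and taking per-driver percentiles of these TTC values within each speed bin.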
7. Kusano KD, Gabler HC. Comparison of Expected Crash and Injury Reduction from Production Forward Collision and Lane Departure Warning Systems. Traffic Injury Prevention 2015;16 Suppl 2:S109-S114. PMID: 26436219. DOI: 10.1080/15389588.2015.1063619.
Abstract
OBJECTIVES: The U.S. New Car Assessment Program (NCAP) now tests for forward collision warning (FCW) and lane departure warning (LDW). The design of these warnings differs greatly between vehicles and can result in different real-world field performance in preventing or mitigating collisions. The objective of this study was to compare the expected number of crashes and injured drivers that could be prevented if all vehicles in the fleet were equipped with the FCW and LDW systems tested under the U.S. NCAP.
METHODS: To predict the potential crashes and serious injuries that could be prevented, our approach was to computationally model the U.S. crash population. The models simulated all rear-end and single-vehicle road departure collisions that occurred in a nationally representative crash database (NASS-CDS). A sample of 478 single-vehicle crashes from NASS-CDS 2012 was the basis for 24,822 simulations for LDW. A sample of 1,042 rear-end collisions from NASS-CDS years 1997-2013 was the basis for 7,616 simulations for FCW. For each crash, two simulations were performed: (1) without the system present and (2) with the system present. Models of each production safety system were based on 54 model year 2010-2014 vehicles that were evaluated under the NCAP confirmation procedure for LDW and/or FCW. NCAP performed 40 LDW and 45 FCW tests of these vehicles.
RESULTS: The design of the FCW systems had a dramatic impact on their potential to prevent crashes and injuries. Between 0% and 67% of crashes and between 2% and 69% of moderately to fatally injured drivers in rear-end impacts could have been prevented if all vehicles were equipped with the FCW systems. Earlier warning times resulted in increased benefits. The largest effect on benefits, however, came from the lower operating speed threshold of the systems: systems that operated only at speeds above 20 mph were less than half as effective as those that operated above 5 mph with similar warning times. The production LDW systems could have prevented between 11% and 23% of drift-out-of-lane crashes and between 13% and 22% of seriously to fatally injured drivers. A majority of the tested LDW systems delivered warnings near the point when the vehicle first touched the lane line, leading to similar benefits. Minimum operating speed also greatly affected LDW effectiveness.
CONCLUSIONS: The results of this study show that the expected field performance of FCW and LDW systems is highly dependent on design and system limitations. Systems that delivered warnings earlier and operated at lower speeds may prevent far more crashes and injuries than systems that warn late and operate only at high speeds. A limitation of this study is that additional crash avoidance features that may also mitigate collisions (for example, brake assist, automated braking, or lane-keeping assistance) were not evaluated during the NCAP tests or in our benefits models; their potential additional mitigating effects were not quantified.
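The paired with/without simulation approach described in the methods reduces to comparing outcomes case by case. The crash set and the two-parameter warning model below (minimum operating speed, warning-to-impact time needed to avoid the crash) are toy assumptions for illustration, not the study's benefits model.

```python
def effectiveness(crashes, system):
    """Fraction of crashes avoided when each case is re-simulated with the system.

    `crashes`: dicts with the pre-event travel speed (mph) and the time to
    collision (s) available when a warning could fire.
    `system`: a minimum operating speed and the assumed warning-to-impact
    time needed for the driver to avoid the crash.
    """
    avoided = 0
    for c in crashes:
        operates = c["speed_mph"] >= system["min_speed_mph"]
        in_time = c["ttc_s"] >= system["needed_warning_s"]
        if operates and in_time:
            avoided += 1
    return avoided / len(crashes)

crashes = [
    {"speed_mph": 15, "ttc_s": 2.5},
    {"speed_mph": 35, "ttc_s": 1.0},
    {"speed_mph": 45, "ttc_s": 3.0},
    {"speed_mph": 55, "ttc_s": 2.2},
]
early_full_range = {"min_speed_mph": 5, "needed_warning_s": 2.0}
late_high_speed = {"min_speed_mph": 20, "needed_warning_s": 2.0}
print(effectiveness(crashes, early_full_range))  # 0.75
print(effectiveness(crashes, late_high_speed))   # 0.5
```

Even in this toy form, raising the minimum operating speed removes low-speed crashes from the avoidable set, which mirrors the abstract's finding that the speed threshold dominated the benefit estimates.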
8. Montgomery J, Kusano KD, Gabler HC. Age and gender differences in time to collision at braking from the 100-Car Naturalistic Driving Study. Traffic Injury Prevention 2014;15 Suppl 1:S15-S20. PMID: 25307380. DOI: 10.1080/15389588.2014.928703.
Abstract
OBJECTIVE: Forward collision warning (FCW) is an active safety system that aims to mitigate forward collisions by warning the driver of objects in front of the vehicle. The success of FCW relies on how drivers react to the alerts. Drivers who receive too many warnings that they deem unnecessary (that is, nuisance alarms) may grow to distrust the system and turn it off. To reduce the perception of nuisance alarms, FCW systems can be tailored to individual driving styles, but these driving styles must first be characterized. The objective of this study was to characterize differences in braking behavior between age and gender groups in car-following scenarios using data from the 100-Car Naturalistic Driving Study.
METHODS: The data source for this study was the 100-Car Naturalistic Driving Study, which recorded the driving of 108 primary drivers for approximately a year. Braking behavior was characterized in terms of time to collision (TTC) at brake application, a common metric used in the design of FCW warning thresholds. Because of the large volume of data analyzed, the TTC at which drivers braked during car-following situations was collected via an automated search algorithm. The minimum TTC in each 10 mph vehicle speed increment from 10 to 80 mph was recorded for each driver. Mixed-model analysis of variance was used to examine the differences between age and gender groups.
RESULTS: In total, 527,861 brake applications contained in 11,503 trips were analyzed. Differences in TTC at braking were statistically significant for both age and gender (P < .01 in both cases). Males aged 18-20 (n = 7) had the lowest average minimum TTC at braking, 2.5 ± 0.8 s, and females aged 31-50 (n = 6) had the highest, 6.4 ± 0.9 s. On average, women (n = 32) braked at a TTC 1.3 s higher than men (n = 52). Age was a statistically significant factor for TTC at braking between participants under 30 (n = 42) and participants over 30 (n = 42), with the latter braking 1.7 s earlier on average than the former. No statistically significant difference was found between ages 18-20 (n = 15) and 21-30 (n = 27), or between ages 31-50 (n = 23) and 50+ (n = 19).
CONCLUSIONS: There are clear statistical differences in TTC at braking by gender and by age (over 30 vs. under 30). Designers of FCW systems can use these data to tailor alert timings to the target demographic of a vehicle. Appropriate alert timings will maximize FCW effectiveness in collision reduction and mitigation.
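The per-driver minimum-TTC-by-speed-bin summary used above can be sketched by binning brake events by travel speed; the brake events below are synthetic examples, not 100-Car records, and the group comparison shown is a simple mean difference rather than the study's mixed-model ANOVA.

```python
from collections import defaultdict
from statistics import mean

def min_ttc_by_speed_bin(events, bin_mph=10):
    """Minimum TTC at braking per speed bin (bin key = lower edge in mph)."""
    bins = defaultdict(list)
    for speed_mph, ttc_s in events:
        bins[int(speed_mph // bin_mph) * bin_mph].append(ttc_s)
    return {b: min(ttcs) for b, ttcs in sorted(bins.items())}

# Synthetic brake applications: (travel speed in mph, TTC in s)
young_driver = [(25, 2.8), (25, 3.5), (45, 2.4), (45, 4.0), (65, 2.9)]
older_driver = [(25, 5.1), (45, 4.6), (45, 6.0), (65, 5.5)]

young = min_ttc_by_speed_bin(young_driver)
older = min_ttc_by_speed_bin(older_driver)
print(young)  # {20: 2.8, 40: 2.4, 60: 2.9}
print(older)  # {20: 5.1, 40: 4.6, 60: 5.5}
print(f"mean gap: {mean(older.values()) - mean(young.values()):.2f} s")
```

In the study, one such per-driver summary per speed bin is the unit of analysis, and the age and gender contrasts are tested across drivers.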