1. Matthews G, Cumings R, De Los Santos EP, Feng IY, Mouloua SA. A new era for stress research: supporting user performance and experience in the digital age. Ergonomics 2025;68:913-946. PMID: 39520089. DOI: 10.1080/00140139.2024.2425953.
Abstract
Stress is both a driver of objective performance impairments and a source of negative user experience of technology. This review addresses future directions for research on stress and ergonomics in the digital age. The review is structured around three levels of analysis. At the individual user level, stress is elicited by novel technologies and tasks including interaction with AI and robots, working in Virtual Reality, and operating autonomous vehicles. At the organisational level, novel, potentially stressful challenges include maintaining cybersecurity, surveillance and monitoring of employees supported by technology, and addressing bias and discrimination in the workplace. At the sociocultural level, technology, values and norms are evolving symbiotically, raising novel demands illustrated with respect to interactions with social media and new ethical challenges. We also briefly review the promise of neuroergonomics and emotional design to support stress mitigation. We conclude with seven high-level principles that may guide future work.
Affiliation(s)
- Gerald Matthews, Department of Psychology, George Mason University, Fairfax, VA, USA
- Ryon Cumings, Department of Psychology, George Mason University, Fairfax, VA, USA
- Irene Y Feng, Department of Psychology, George Mason University, Fairfax, VA, USA
- Salim A Mouloua, Department of Psychology, George Mason University, Fairfax, VA, USA
2. Goodall NJ. Comparability of driving automation crash databases. Journal of Safety Research 2025;92:473-481. PMID: 39986865. DOI: 10.1016/j.jsr.2025.01.004.
Abstract
INTRODUCTION: This paper reviewed current driving automation (DA) and baseline human-driven crash databases and evaluated their comparability. METHOD: Five sources of DA crash data and three sources of human-driven crash data were reviewed for consistency of inclusion criteria, scope of coverage, and potential sources of bias. Alternative methods to determine vehicle automation capability using vehicle identification numbers (VINs) from state-maintained crash records were also explored. CONCLUSIONS: The evaluated data sets used incompatible or nonstandard minimum crash severity thresholds, complicating crash rate comparisons. The most widely used standard was the "police-reportable crash," which itself has different reporting thresholds among jurisdictions. Although low- and no-damage crashes occur at greater frequencies and offer more statistical power, they were not consistently reported for automated vehicles. Crash data collection can be improved through collection of driving automation exposure data, widespread collection of crash data from electronic data recorders, and standardization of crash definitions. PRACTICAL APPLICATIONS: Researchers and DA developers may use this analysis to conduct more thorough and accurate evaluations of driving automation crash rates. Lawmakers and regulators may use these findings as evidence to enhance data collection efforts, both internally and via new rules regarding electronic data recorders.
Affiliation(s)
- Noah J Goodall, Virginia Transportation Research Council, 530 Edgemont Road, Charlottesville, VA 22903, United States
3. Tomzig M, Wörle J, Gary S, Baumann M, Neukum A. Strategic naps in automated driving - Sleep architecture predicts sleep inertia better than nap duration. Accident Analysis & Prevention 2025;209:107811. PMID: 39427445. DOI: 10.1016/j.aap.2024.107811.
Abstract
At higher levels of driving automation, drivers can nap during parts of the trip but must take over control in others. Awakening from a nap is marked by sleep inertia, which is tackled by the NASA nap paradigm in aviation: strategic in-flight naps are restricted to 40 min to avoid deep sleep and therefore sleep inertia. For automated driving, there are currently no such strategies for addressing sleep inertia, and given the disparate requirements, it is uncertain whether strategies derived from aviation can be readily applied. Our study therefore compared the effects of restricting the duration of nap opportunities, following the NASA nap paradigm, with the effects of sleep architecture on sleep inertia in takeover scenarios in automated driving. In our driving simulator study, 24 participants were invited to sleep during three automated drives. They were awakened after 20, 40, or 60 min and asked to manually complete an urban drive. We assessed how napping duration, the last sleep stage before takeover, and varying proportions of light, stable, and deep sleep influenced self-reported sleepiness, takeover times, and the number of driving errors. Takeover times increased with nap duration, but sleepiness and driving errors did not. Instead, all measures were significantly influenced by sleep architecture. Sleepiness increased after awakening from light and stable sleep, and takeover times increased after awakening from light sleep. Takeover times also increased with higher proportions of stable sleep. The number of driving errors increased significantly with the proportion of deep sleep and after awakenings from stable and deep sleep. These results suggest that sleep architecture, not nap duration, is crucial for predicting sleep inertia. The NASA nap paradigm is therefore not suitable for driving contexts. Future driver monitoring systems should assess sleep architecture to predict and prevent sleep inertia.
Affiliation(s)
- Markus Tomzig, Wuerzburg Institute for Traffic Sciences, WIVW GmbH, Robert-Bosch-Straße 4, 97209 Veitshöchheim, Germany; Ulm University, Albert-Einstein-Allee 41, 89081 Ulm, Germany
- Johanna Wörle, Wuerzburg Institute for Traffic Sciences, WIVW GmbH, Robert-Bosch-Straße 4, 97209 Veitshöchheim, Germany; Singapore-ETH Centre, 1 Create Way, CREATE Tower, Singapore 138602
- Sebastian Gary, Wuerzburg Institute for Traffic Sciences, WIVW GmbH, Robert-Bosch-Straße 4, 97209 Veitshöchheim, Germany
- Martin Baumann, Ulm University, Albert-Einstein-Allee 41, 89081 Ulm, Germany
- Alexandra Neukum, Wuerzburg Institute for Traffic Sciences, WIVW GmbH, Robert-Bosch-Straße 4, 97209 Veitshöchheim, Germany
4. de Winter JCF, Eisma YB. Ergonomics & Human factors: fade of a discipline. Ergonomics 2024:1-9. PMID: 39440364. DOI: 10.1080/00140139.2024.2416553.
Abstract
In this commentary, we argue that the field of Ergonomics and Human Factors (EHF) has a tendency to present itself as a thriving and impactful science while, in reality, it is losing credibility. We assert that EHF science (1) has introduced terminology that is internally inconsistent and has little predictive validity, (2) has virtually no impact on industrial practice, which operates within frameworks of regulatory compliance and profit generation, (3) repeatedly employs the same approach of conducting lab experiments within unrealistic paradigms in order to complete deliverables, (4) suggests it is a cumulative science but is neither a leader nor even an adopter of the open-science initiatives that are characteristic of scientific progress, and (5) is being assimilated by other disciplines as well as Big Tech. Recommendations are provided to reverse this trend, although we also express a certain resignation as our scientific discipline loses significance.
Practitioner Summary: This paper offers criticism of the field of Ergonomics. There are issues such as unclear terminology, unrealistic experiments, insufficient impact, and a lack of open data. We provide recommendations to reverse the trend. This article is a critique of EHF as a science, not a critique of EHF practitioners.
Affiliation(s)
- J C F de Winter, Cognitive Robotics, Delft University of Technology, Delft, the Netherlands
- Y B Eisma, Cognitive Robotics, Delft University of Technology, Delft, the Netherlands
5. Nordhoff S. A conceptual framework for automation disengagements. Sci Rep 2024;14:8654. PMID: 38622166. PMCID: PMC11018869. DOI: 10.1038/s41598-024-57882-6.
Abstract
A better understanding of automation disengagements can lead to improved safety and efficiency of automated systems. This study investigates the factors contributing to automation disengagements initiated by human operators and by the automation itself, analyzing semi-structured interviews with 103 users of Tesla's Autopilot and FSD Beta. The factors leading to automation disengagements are represented by categories. In total, we identified five main categories and thirty-five subcategories. The main categories include human operator states (5), the human operator's perception of the automation (17), the human operator's perception of other humans (3), the automation's perception of the human operator (3), and the automation's incapability in the environment (7). Human operators disengaged the automation when they anticipated failure, observed unnatural or unwanted automation behavior (e.g., erratic steering, running red lights), or believed the automation was not capable of operating safely in certain environments (e.g., inclement weather, non-standard roads). Negative experiences of human operators, such as frustration, unsafe feelings, and distrust, represent some of the adverse human operator states leading to automation disengagements initiated by human operators. The automation, in turn, monitored human operators and disengaged itself if it detected insufficient vigilance or speed rule violations by human operators. Moreover, human operators could be influenced by the reactions of passengers and other road users, leading them to disengage the automation if they sensed discomfort, anger, or embarrassment due to the automation's actions. The results of the analysis are synthesized into a conceptual framework for automation disengagements, borrowing ideas from the human factors literature and control theory. This research offers insights into the factors contributing to automation disengagements and highlights not only the concerns of human operators but also the social aspects of this phenomenon. The findings provide information on potential edge cases of automated vehicle technology, which may help to enhance the safety and efficiency of such systems.
Affiliation(s)
- S Nordhoff, Department Transport and Planning, Delft University of Technology, Delft, The Netherlands
6. Endsley MR. Ironies of artificial intelligence. Ergonomics 2023;66:1656-1668. PMID: 37534468. DOI: 10.1080/00140139.2023.2243404.
Abstract
Bainbridge's Ironies of Automation was a prescient description of automation-related challenges for human performance that have characterised much of the 40 years since its publication. Today a new wave of automation based on artificial intelligence (AI) is being introduced across a wide variety of domains and applications. Not only are Bainbridge's original warnings still pertinent for AI, but AI's very nature and focus on cognitive tasks have introduced many new challenges for people who interact with it. Five ironies of AI are presented, including difficulties with understanding AI and forming adaptations, opaqueness in AI limitations and biases that can drive human decision biases, and difficulties in understanding AI reliability, despite the fact that AI remains insufficiently intelligent for many of its intended applications. Future directions are provided to create more human-centered AI applications that can address these challenges.
7. Xu J, Kendrick K, Bowers AR. Letter to the Editor: Update on Experiences of a Driver with Vision Impairment when Using a Tesla Car - Full Self-driving (Beta) in City Driving. Optom Vis Sci 2023;100:351-353. PMID: 37097984. DOI: 10.1097/opx.0000000000002023.
8. Nordhoff S, Stapel J, He X, Gentner A, Happee R. Do driver's characteristics, system performance, perceived safety, and trust influence how drivers use partial automation? A structural equation modelling analysis. Front Psychol 2023;14:1125031. PMID: 37139004. PMCID: PMC10150639. DOI: 10.3389/fpsyg.2023.1125031.
Abstract
The present study surveyed actual extensive users of SAE Level 2 partially automated cars to investigate how drivers' characteristics (i.e., socio-demographics, driving experience, personality), system performance, perceived safety, and trust in partial automation influence use of partial automation. 81% of respondents stated that they use their automated car with speed (ACC) and steering assist (LKA) at least 1–2 times a week, and 84% and 92% activate LKA and ACC at least occasionally. Respondents positively rated the performance of Adaptive Cruise Control (ACC) and Lane Keeping Assistance (LKA). ACC was rated higher than LKA, and detection of lead vehicles and lane markings was rated higher than smooth control for ACC and LKA, respectively. Respondents reported primarily disengaging (i.e., turning off) partial automation due to a lack of trust in the system and when driving is fun. They rarely disengaged the system when they noticed they became bored or sleepy. Structural equation modelling revealed that trust had a positive effect on drivers' propensity for secondary task engagement during partially automated driving, while the effect of perceived safety was not significant. Regarding drivers' characteristics, we did not find a significant effect of age on perceived safety and trust in partial automation. Neuroticism negatively correlated with perceived safety and trust, while extraversion did not impact perceived safety and trust. The remaining three personality dimensions ‘openness’, ‘conscientiousness’, and ‘agreeableness’ did not form valid and reliable scales in the confirmatory factor analysis, and could thus not be subjected to the structural equation modelling analysis. Future research should re-assess the suitability of the short 10-item scale as a measure of the Big Five personality traits, and investigate the impact on perceived safety, trust, and use of automation.
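Illustrative note: the structural equation modelling and confirmatory factor analysis mentioned in the abstract above can be sketched in code. The snippet below is a minimal, hypothetical example using the Python package semopy; the variable names (trust1-trust3, safety1-safety3, neuroticism, secondary_tasks) and the data file are placeholders and do not reflect the authors' actual indicators, data, or model.

import pandas as pd
import semopy

# Hypothetical respondent-level data: one row per survey respondent,
# columns are indicator scores (placeholders, not the study's items).
data = pd.read_csv("survey_responses.csv")

# lavaan-style model description: measurement model (latent factors
# measured by observed indicators) plus structural regression paths.
desc = """
# measurement model
trust =~ trust1 + trust2 + trust3
perceived_safety =~ safety1 + safety2 + safety3
# structural model
secondary_tasks ~ trust + perceived_safety
trust ~ neuroticism
perceived_safety ~ neuroticism
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # parameter estimates, standard errors, p-values
print(semopy.calc_stats(model))  # global fit indices (e.g., CFI, RMSEA)

In such a sketch, weak factor loadings in the measurement model would indicate scales that cannot be retained, mirroring the authors' report that three of the Big Five dimensions did not form valid and reliable scales.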
Affiliation(s)
- Sina Nordhoff, Department Transport and Planning, Delft University of Technology, Delft, Netherlands (*Correspondence: Sina Nordhoff)
- Jork Stapel, Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Xiaolin He, Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Riender Happee, Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands