1.
Hancock PA. Are humans still necessary? Ergonomics 2023;66:1711-1718. PMID: 37530394. DOI: 10.1080/00140139.2023.2236822.
Abstract
Our long-accepted and historically persistent human narrative almost exclusively places us at the motivational centre of events. The wellspring of this anthropocentric fable arises from the unitary and bounded nature of personal consciousness. Such immediate conscious experience frames the heroic vision we have told to, and subsequently sold to, ourselves. But need this centrality necessarily be a given? The following work challenges this oft-unquestioned foundational assumption, especially in light of developments in automated, autonomous, and artificially intelligent systems. For, in these latter technologies, human contributions are becoming ever more peripheral and arguably unnecessary. The removal of the human operator from the inner loops of momentary control has now progressed to an ever more remote function as some form of supervisory monitor. The natural progression of that line of evolution is the eventual excision of humans from access to any form of control loop at all. This may even include system maintenance and then, prospectively, even initial design. The present argument features a 'unit of analysis' provocation which explores the proposition that socially, and even ergonomically, the human individual no longer occupies priority or any degree of pre-eminent centrality. Rather, we are witnessing a transitional phase of development in which socio-technical collectives are evolving as the principal sources of what may well be profoundly unhuman motivation. These developing proclivities occupy our landscape of technological innovations that daily act to magnify, rather than diminish, such progressive inhumanities. Where this leaves a science focused on work as a human-centred enterprise serves to occupy the culminating consideration of the present discourse.
Affiliation(s)
- P A Hancock
- Department of Psychology, and Institute for Simulation and Training, University of Central Florida, Orlando, Florida, USA
2.
Hancock PA, Kessler TT, Kaplan AD, Stowers K, Brill JC, Billings DR, Schaefer KE, Szalma JL. How and why humans trust: A meta-analysis and elaborated model. Front Psychol 2023;14:1081086. PMID: 37051611. PMCID: PMC10083508. DOI: 10.3389/fpsyg.2023.1081086.
Abstract
Trust exerts an impact on essentially all forms of social relationships. It affects individuals in deciding whether and how they will or will not interact with other people. Equally, trust also influences the stance of entire nations in their mutual dealings. In consequence, understanding the factors that influence the decision to trust, or not to trust, is crucial to the full spectrum of social dealings. Here, we report the most comprehensive extant meta-analysis of experimental findings relating to such human-to-human trust. Our analysis provides a quantitative evaluation of the factors that influence interpersonal trust and the initial propensity to trust, as well as an assessment of the general trusting of others. Over 2,000 relevant studies were initially identified for potential inclusion in the meta-analysis. Of these, 338 passed all screening criteria, providing a total of 2,185 effect sizes for analysis. The identified dependent variables were trustworthiness, propensity to trust, general trust, and the trust that supervisors and subordinates express in each other. Correlational results demonstrated that a large range of trustor, trustee, and shared contextual factors impact each of trustworthiness, the propensity to trust, and trust within working relationships. The emphasis on contextual factors as one of several trust dimensions originates in the present work. Experimental results established that the reputation of the trustee and the shared closeness of trustor and trustee were the most predictive factors of trustworthiness outcomes. From these collective findings, we propose an elaborated, overarching descriptive theory of trust, in which special note is taken of the theory's application to the growing human need to trust in non-human entities. The latter include diverse forms of automation, robots, and artificially intelligent entities, as well as specific implementations such as driverless vehicles, to name but a few. Future directions as to the momentary dynamics of trust development, its sustenance, and its dissipation are also evaluated.
Affiliation(s)
- P. A. Hancock
- Department of Psychology and Institute for Simulation and Training, University of Central Florida, Orlando, FL, United States
- *Correspondence: P. A. Hancock,
- Theresa T. Kessler
- Department of Psychology, University of Central Florida, Orlando, FL, United States
- Alexandra D. Kaplan
- Department of Psychology, University of Central Florida, Orlando, FL, United States
- Kimberly Stowers
- Department of Management, University of Alabama, Tuscaloosa, AL, United States
- J. Christopher Brill
- United States Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States
- Kristin E. Schaefer
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, Adelphi, MD, United States
- James L. Szalma
- Department of Psychology, University of Central Florida, Orlando, FL, United States
3.
Beckers N, Siebert LC, Bruijnes M, Jonker C, Abbink D. Drivers of partially automated vehicles are blamed for crashes that they cannot reasonably avoid. Sci Rep 2022;12:16193. PMID: 36171437. PMCID: PMC9519957. DOI: 10.1038/s41598-022-19876-0.
Abstract
People seem to hold the human driver primarily responsible when their partially automated vehicle crashes, yet is this reasonable? While the driver is often required to take over from the automation immediately when it fails, placing such high expectations on the driver to remain vigilant in partially automated driving is unreasonable. Drivers have difficulty taking over control immediately when needed, potentially resulting in dangerous situations. From a normative perspective, it would be reasonable to consider the impact of automation on the driver's ability to take over control when attributing responsibility for a crash. We therefore analyzed whether the public indeed considers driver ability when attributing responsibility to the driver, the vehicle, and its manufacturer. Participants blamed the driver primarily, even though they recognized the driver's decreased ability to avoid the crash. These results portend undesirable situations in which users of partial driving automation are held responsible, which may be unreasonable given the detrimental impact of driving automation on human drivers. Lastly, the outcome signals that public awareness of such human-factors issues with automated driving should be improved.
Affiliation(s)
- Niek Beckers
- AiTech, Delft University of Technology, Delft, Netherlands; Cognitive Robotics, Faculty of Mechanical, Maritime, and Materials Engineering, Delft University of Technology, Delft, Netherlands
- Luciano Cavalcante Siebert
- AiTech, Delft University of Technology, Delft, Netherlands; Interactive Intelligence, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Delft, Netherlands
- Merijn Bruijnes
- Public Governance and Management, Faculty of Law, Economics and Governance, Utrecht University, Utrecht, Netherlands
- Catholijn Jonker
- AiTech, Delft University of Technology, Delft, Netherlands; Interactive Intelligence, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Delft, Netherlands
- David Abbink
- AiTech, Delft University of Technology, Delft, Netherlands; Cognitive Robotics, Faculty of Mechanical, Maritime, and Materials Engineering, Delft University of Technology, Delft, Netherlands