1
Jetha A, Bakhtari H, Rosella LC, Gignac MAM, Biswas A, Shahidi FV, Smith BT, Smith MJ, Mustard C, Khan N, Arrandale VH, Loewen PJ, Zuberi D, Dennerlein JT, Bonaccio S, Wu N, Irvin E, Smith PM. Artificial intelligence and the work-health interface: A research agenda for a technologically transforming world of work. Am J Ind Med 2023; 66:815-830. [PMID: 37525007] [DOI: 10.1002/ajim.23517] [Received: 06/08/2023] [Revised: 07/06/2023] [Accepted: 07/10/2023]
Abstract
The labor market is undergoing a rapid artificial intelligence (AI) revolution. There is currently limited empirical scholarship on how AI adoption affects employment opportunities and work environments in ways that shape worker health, safety, well-being, and equity. In this article, we present an agenda to guide research examining the implications of AI for the intersection between work and health. To build the agenda, a full-day meeting was organized and attended by 50 participants, including researchers from diverse disciplines and applied stakeholders. Facilitated meeting discussions aimed to set research priorities related to workplace AI applications and their impact on the health of workers, including critical research questions, methodological approaches, data needs, and resource requirements. Discussions also aimed to identify groups of workers and working contexts that may benefit from AI adoption as well as those that may be disadvantaged by it. Discussions were synthesized into four research agenda areas: (1) examining the impact of stronger AI on human workers; (2) advancing responsible and healthy AI; (3) informing AI policy for worker health, safety, well-being, and equitable employment; and (4) understanding and addressing worker and employer knowledge needs regarding AI applications. The agenda provides a roadmap for researchers to build a critical evidence base on the impact of AI on workers and workplaces, and will help ensure that worker health, safety, well-being, and equity are at the forefront of workplace AI system design and adoption.
Affiliation(s)
- Arif Jetha
- Institute for Work & Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Hela Bakhtari
- Institute for Work & Health, Toronto, Ontario, Canada
- Laura C Rosella
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Temerty Centre for Artificial Intelligence Research and Education in Medicine, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Institute for Clinical Evaluative Sciences, Toronto, Ontario, Canada
- Institute for Better Health, Trillium Health Partners, Mississauga, Ontario, Canada
- Monique A M Gignac
- Institute for Work & Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Aviroop Biswas
- Institute for Work & Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Faraz V Shahidi
- Institute for Work & Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Brendan T Smith
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Health Promotion, Chronic Disease, and Injury Prevention, Public Health Ontario, Toronto, Ontario, Canada
- Maxwell J Smith
- School of Health Studies, Faculty of Health Sciences, Western University, London, Ontario, Canada
- Cameron Mustard
- Institute for Work & Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Naimul Khan
- Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, Ontario, Canada
- Victoria H Arrandale
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Occupational Cancer Research Centre, Toronto, Ontario, Canada
- Peter J Loewen
- Munk School of Global Affairs and Public Policy, University of Toronto, Toronto, Ontario, Canada
- Schwartz Reisman Institute for Technology and Society, University of Toronto, Toronto, Ontario, Canada
- Daniyal Zuberi
- Factor-Inwentash Faculty of Social Work, University of Toronto, Toronto, Ontario, Canada
- Jack T Dennerlein
- Department of Physical Therapy, Movement, and Rehabilitation Sciences, Bouve College of Health Sciences, Northeastern University, Boston, Massachusetts, USA
- Center for Work, Health, and Wellbeing, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
- Silvia Bonaccio
- Institute for Work & Health, Toronto, Ontario, Canada
- Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
- Nicole Wu
- Department of Political Science, University of Toronto, Toronto, Ontario, Canada
- Emma Irvin
- Institute for Work & Health, Toronto, Ontario, Canada
- Peter M Smith
- Institute for Work & Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
2
Choudhury A, Shamszare H. Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis. J Med Internet Res 2023; 25:e47184. [PMID: 37314848] [DOI: 10.2196/47184] [Received: 03/10/2023] [Revised: 04/19/2023] [Accepted: 05/25/2023] Open Access
Abstract
BACKGROUND ChatGPT (Chat Generative Pre-trained Transformer) has gained popularity for its ability to generate human-like responses. It is essential to note that overreliance on or blind trust in ChatGPT, especially in high-stakes decision-making contexts, can have severe consequences. Similarly, lacking trust in the technology can lead to underuse, resulting in missed opportunities. OBJECTIVE This study investigated the impact of users' trust in ChatGPT on their intent to use and actual use of the technology. Four hypotheses were tested: (1) users' intent to use ChatGPT increases with their trust in the technology; (2) the actual use of ChatGPT increases with users' intent to use the technology; (3) the actual use of ChatGPT increases with users' trust in the technology; and (4) users' intent to use ChatGPT can partially mediate the effect of trust in the technology on its actual use. METHODS This study distributed a web-based survey to adults in the United States who actively used ChatGPT (version 3.5) at least once a month, from February 2023 through March 2023. The survey responses were used to develop 2 latent constructs, Trust and Intent to Use, with Actual Use as the outcome variable. The study used partial least squares structural equation modeling to evaluate and test the structural model and hypotheses. RESULTS In the study, 607 respondents completed the survey. The primary uses of ChatGPT were information gathering (n=219, 36.1%), entertainment (n=203, 33.4%), and problem-solving (n=135, 22.2%), with a smaller number using it for health-related queries (n=44, 7.2%) and other activities (n=6, 1%). Our model explained 50.5% and 9.8% of the variance in Intent to Use and Actual Use, respectively, with path coefficients of 0.711 and 0.221 for Trust on Intent to Use and Actual Use, respectively. The bootstrapped results supported all 4 hypotheses, with Trust having a significant direct effect on both Intent to Use (β=0.711, 95% CI 0.656-0.764) and Actual Use (β=0.302, 95% CI 0.229-0.374). The indirect effect of Trust on Actual Use, partially mediated by Intent to Use, was also significant (β=0.113, 95% CI 0.001-0.227). CONCLUSIONS Our results suggest that trust is critical to users' adoption of ChatGPT. It remains crucial to highlight that ChatGPT was not initially designed for health care applications. Therefore, an overreliance on it for health-related advice could potentially lead to misinformation and subsequent health risks. Efforts must be focused on improving ChatGPT's ability to distinguish between queries that it can safely handle and those that should be redirected to human experts (health care professionals). Although risks are associated with excessive trust in artificial intelligence-driven chatbots such as ChatGPT, the potential risks can be reduced by advocating for shared accountability and fostering collaboration between developers, subject matter experts, and human factors researchers.
Affiliation(s)
- Avishek Choudhury
- Industrial and Management Systems Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States
- Hamid Shamszare
- Industrial and Management Systems Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States
3
Foffano F, Scantamburlo T, Cortés A. Investing in AI for social good: an analysis of European national strategies. AI Soc 2022; 38:479-500. [PMID: 35528248] [PMCID: PMC9068863] [DOI: 10.1007/s00146-022-01445-8] [Received: 07/30/2021] [Accepted: 03/24/2022]
Abstract
Artificial Intelligence (AI) has become a driving force in modern research, industry, and public administration, and the European Union (EU) is embracing this technology with a view to creating societal, as well as economic, value. This effort has been shared by EU Member States, which were all encouraged to develop their own national AI strategies outlining policies and investment levels. This study examines, through the lens of their national AI strategies, how EU Member States are approaching the promise to develop and use AI for the good of society. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans contribute to the good of people and society as a whole. Our contribution consists of three parts: (i) a conceptualization of AI for social good highlighting the role of AI policy, in particular that put forward by the European Commission (EC); (ii) a qualitative analysis of 15 European national strategies, mapping investment plans and suggesting their relation to the social good; and (iii) a reflection on the current status of investments in socially good AI and possible steps to move forward. Our study suggests that while European national strategies incorporate funding allocations in the sphere of AI for social good (e.g., education), there is a broader variety of underestimated actions (e.g., multidisciplinary approaches in STEM curricula and dialogue among stakeholders) that could boost the European commitment to sustainable and responsible AI innovation.
Collapse
Affiliation(s)
- Francesca Foffano
- University of York, York, UK
- European Centre for Living Technology, Venice, Italy
- Barcelona Supercomputing Center, Barcelona, Spain
- Teresa Scantamburlo
- University of York, York, UK
- European Centre for Living Technology, Venice, Italy
- Barcelona Supercomputing Center, Barcelona, Spain
- Atia Cortés
- University of York, York, UK
- European Centre for Living Technology, Venice, Italy
- Barcelona Supercomputing Center, Barcelona, Spain