1. Muyskens K, Ma Y, Menikoff J, Hallinan J, Savulescu J. When Can We Kick (Some) Humans "Out of the Loop"? An Examination of the Use of AI in Medical Imaging for Lumbar Spinal Stenosis. Asian Bioeth Rev 2025; 17:207-223. [PMID: 39896088] [PMCID: PMC11785850] [DOI: 10.1007/s41649-024-00290-9]
Abstract
Artificial intelligence (AI) has attracted an increasing amount of attention, both positive and negative. Its potential applications in healthcare are indeed manifold and revolutionary, and within the realm of medical imaging and radiology (which will be the focus of this paper), significant increases in accuracy and speed, as well as significant savings in cost, stand to be gained through the adoption of this technology. Because of its novelty, a norm of keeping humans "in the loop" wherever AI mechanisms are deployed has become synonymous with good ethical practice in some circles. It has been argued that keeping humans "in the loop" is important for reasons of safety, accountability, and the maintenance of institutional trust. However, as the application of machine learning for the detection of lumbar spinal stenosis (LSS) in this paper's case study reveals, there are some scenarios where an insistence on keeping humans in the loop (or in other words, the resistance to automation) seems unwarranted and could possibly lead us to miss out on very real and important opportunities in healthcare, particularly in low-resource settings. It is important to acknowledge these opportunity costs of resisting automation in such contexts, where better options may be unavailable. Using an AI model based on convolutional neural networks developed by a team of researchers at NUH/NUS medical school in Singapore for automated detection and classification of the lumbar spinal canal, lateral recess, and neural foraminal narrowing in an MRI scan of the spine to diagnose LSS, we will aim to demonstrate that where certain criteria hold (e.g., the AI is as accurate or better than human experts, risks are low in the event of an error, the gain in wellbeing is significant, and the task being automated is not essentially or importantly human), it is both morally permissible and even desirable to kick the humans out of the loop.
Affiliation(s)
- Kathryn Muyskens
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yonghui Ma
- Centre for Bioethics, Xiamen University, Xiamen, China
- Jerry Menikoff
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- James Hallinan
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Julian Savulescu
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK
2. Wenderott K, Krups J, Zaruchas F, Weigl M. Effects of artificial intelligence implementation on efficiency in medical imaging - a systematic literature review and meta-analysis. NPJ Digit Med 2024; 7:265. [PMID: 39349815] [PMCID: PMC11442995] [DOI: 10.1038/s41746-024-01248-9]
Abstract
In healthcare, integration of artificial intelligence (AI) holds strong promise for facilitating clinicians' work, especially in clinical imaging. We aimed to assess the impact of AI implementation for medical imaging on efficiency in real-world clinical workflows and conducted a systematic review searching six medical databases. Two reviewers double-screened all records. Eligible records were evaluated for methodological quality. The outcomes of interest were workflow adaptation due to AI implementation, changes in time for tasks, and clinician workload. After screening 13,756 records, we identified 48 original studies to be included in the review. Thirty-three studies measured time for tasks, with 67% reporting reductions. Yet, three separate meta-analyses of 12 studies did not show significant effects after AI implementation. We identified five different workflows adapting to AI use. Most commonly, AI served as a secondary reader for detection tasks. Alternatively, AI was used as the primary reader for identifying positive cases, resulting in reorganizing worklists or issuing alerts. Only three studies scrutinized workload calculations based on the time saved through AI use. This systematic review and meta-analysis represents an assessment of the efficiency improvements offered by AI applications in real-world clinical imaging, predominantly revealing enhancements across the studies. However, considerable heterogeneity in the available studies renders robust inferences regarding overall effectiveness in imaging tasks difficult. Further work is needed on standardized reporting, evaluation of system integration, and real-world data collection to better understand the technological advances of AI in real-world healthcare workflows. Systematic review registration: PROSPERO ID CRD42022303439; International Registered Report Identifier (IRRID): RR2-10.2196/40485.
Affiliation(s)
- Jim Krups
- Institute for Patient Safety, University Hospital Bonn, Bonn, Germany
- Fiona Zaruchas
- Institute for Patient Safety, University Hospital Bonn, Bonn, Germany
- Matthias Weigl
- Institute for Patient Safety, University Hospital Bonn, Bonn, Germany
3. Wosny M, Strasser LM, Hastings J. The Paradoxes of Digital Tools in Hospitals: Qualitative Interview Study. J Med Internet Res 2024; 26:e56095. [PMID: 39008341] [PMCID: PMC11287097] [DOI: 10.2196/56095]
Abstract
BACKGROUND Digital tools are progressively reshaping the daily work of health care professionals (HCPs) in hospitals. While this transformation holds substantial promise, it leads to frustrating experiences, raising concerns about negative impacts on clinicians' well-being. OBJECTIVE The goal of this study was to comprehensively explore the lived experiences of HCPs navigating digital tools throughout their daily routines. METHODS Qualitative in-depth interviews with 52 HCPs representing 24 medical specialties across 14 hospitals in Switzerland were performed. RESULTS Inductive thematic analysis revealed 4 main themes: digital tool use, workflow and processes, HCPs' experience of care delivery, and digital transformation and management of change. Within these themes, 6 intriguing paradoxes emerged, and we hypothesized that these paradoxes might partly explain the persistence of the challenges facing hospital digitalization: the promise of efficiency and the reality of inefficiency, the shift from face to face to interface, juggling frustration and dedication, the illusion of information access and trust, the complexity and intersection of workflows and care paths, and the opportunities and challenges of shadow IT. CONCLUSIONS Our study highlights the central importance of acknowledging and considering the experiences of HCPs to support the transformation of health care technology and to avoid or mitigate any potential negative experiences that might arise from digitalization. The viewpoints of HCPs add relevant insights into long-standing informatics problems in health care and may suggest new strategies to follow when tackling future challenges.
Affiliation(s)
- Marie Wosny
- School of Medicine, University of St. Gallen, St. Gallen, Switzerland
- Institute for Implementation Science in Health Care, University of Zurich, Zurich, Switzerland
- Janna Hastings
- School of Medicine, University of St. Gallen, St. Gallen, Switzerland
- Institute for Implementation Science in Health Care, University of Zurich, Zurich, Switzerland
4. Ryan M. We're only human after all: a critique of human-centred AI. AI & Society 2024; 40:1303-1319. [PMID: 40225308] [PMCID: PMC11985563] [DOI: 10.1007/s00146-024-01976-2]
Abstract
The use of a 'human-centred' artificial intelligence approach (HCAI) has substantially increased over the past few years in academic texts (1600+); institutions (27 universities have HCAI labs, such as Stanford, Sydney, Berkeley, and Chicago); in tech companies (e.g., Microsoft, IBM, and Google); in politics (e.g., G7, G20, UN, EU, and EC); and major institutional bodies (e.g., World Bank, World Economic Forum, UNESCO, and OECD). Intuitively, it sounds very appealing: placing human concerns at the centre of AI development and use. However, this paper will use insights from the works of Michel Foucault (mostly The Order of Things) to argue that the HCAI approach is deeply problematic in its assumptions. In particular, this paper will criticise five main assumptions commonly found within HCAI: human-AI hybridisation is desirable and unproblematic; humans are not currently at the centre of the AI universe; we should use humans as a way to guide AI development; AI is the next step in a continuous path of human progress; and increasing human control over AI will reduce harmful bias. This paper will contribute to the field of philosophy of technology by using Foucault's analysis to examine assumptions found in HCAI (it provides a Foucauldian conceptual analysis of a current approach, human-centredness, that aims to influence the design and development of a transformative technology, AI); it will contribute to AI ethics debates by offering a critique of human-centredness in AI (by choosing Foucault, it provides a bridge between older ideas and contemporary issues); and it will also contribute to Foucault studies (by using his work to engage in contemporary debates, such as AI).
Affiliation(s)
- Mark Ryan
- Wageningen Economic Research, Wageningen University and Research, Droevendaalsesteeg 4, 6708 PB Wageningen, The Netherlands
5. Fenwick A, Molnar G, Frangos P. Revisiting the role of HR in the age of AI: bringing humans and machines closer together in the workplace. Front Artif Intell 2024; 6:1272823. [PMID: 38288334] [PMCID: PMC10822991] [DOI: 10.3389/frai.2023.1272823]
Abstract
The functions of human resource management (HRM) have changed radically in the past 20 years due to market and technological forces, becoming more cross-functional and data-driven. In the age of AI, the role of HRM professionals in organizations continues to evolve. Artificial intelligence (AI) is transforming many HRM functions and practices throughout organizations, creating system and process efficiencies, performing advanced data analysis, and contributing to the value creation process of the organization. A growing body of evidence highlights the benefits AI brings to the field of HRM. Despite the increased interest in AI-HRM scholarship, focus on human-AI interaction at work and AI-based technologies for HRM is limited and fragmented. Moreover, the lack of human considerations in HRM tech design and deployment can hamper AI digital transformation efforts. This paper provides a contemporary and forward-looking perspective on the strategic and human-centric role HRM plays within organizations as AI becomes more integrated in the workplace. Spanning three distinct phases of AI-HRM integration (technocratic, integrated, and fully embedded), it examines the technical, human, and ethical challenges at each phase and provides suggestions on how to overcome them using a human-centric approach. Our paper highlights the importance of the evolving role of HRM in the AI-driven organization and provides a roadmap on how to bring humans and machines closer together in the workplace.
Affiliation(s)
- Ali Fenwick
- Hult International Business School, Dubai, United Arab Emirates
- Gabor Molnar
- The ATLAS Institute, University of Colorado, Boulder, CO, United States
- Piper Frangos
- Hult International Business School, Ashridge, United Kingdom
6. Chen Y, Wu Z, Wang P, Xie L, Yan M, Jiang M, Yang Z, Zheng J, Zhang J, Zhu J. Radiology Residents' Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study. J Med Internet Res 2023; 25:e48249. [PMID: 37856181] [PMCID: PMC10623237] [DOI: 10.2196/48249]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. OBJECTIVE This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. METHODS Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. RESULTS In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI. Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness (OR 0.77; P=.005), and reduced willingness to use AI (OR 0.71; P<.001). Moreover, after adjusting for all other factors, perceived AI replacement (OR 0.81; P<.001) and AI usefulness (OR 5.97; P<.001) were shown to significantly impact the intention to use AI. CONCLUSIONS This study profiles radiology residents who are accepting of AI. Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
Affiliation(s)
- Yanhua Chen
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Ziye Wu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Peicheng Wang
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Linbo Xie
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Mengsha Yan
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Maoqing Jiang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jianjun Zheng
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jingfeng Zhang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jiming Zhu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Institute for Healthy China, Tsinghua University, Beijing, China
7. Gursoy F, Kakadiaris IA. Artificial intelligence research strategy of the United States: critical assessment and policy recommendations. Front Big Data 2023; 6:1206139. [PMID: 37609602] [PMCID: PMC10440374] [DOI: 10.3389/fdata.2023.1206139]
Abstract
The foundations of Artificial Intelligence (AI), a field whose applications are of great use and concern for society, can be traced back to the early years of the second half of the 20th century. Since then, the field has seen increased research output and funding cycles followed by setbacks. The new millennium has seen unprecedented interest in AI progress and expectations with significant financial investments from the public and private sectors. However, the continual acceleration of AI capabilities and real-world applications is not guaranteed. Mainly, accountability of AI systems in the context of the interplay between AI and the broader society is essential for adopting AI systems via the trust placed in them. Continual progress in AI research and development (R&D) can help tackle humanity's most significant challenges to improve social good. The authors of this paper suggest that the careful design of forward-looking research policies serves a crucial function in avoiding potential future setbacks in AI research, development, and use. The United States (US) has kept its leading role in R&D, mainly shaping the global trends in the field. Accordingly, this paper presents a critical assessment of the US National AI R&D Strategic Plan and prescribes six recommendations to improve future research strategies in the US and around the globe.
8. Weiter so mit MTO? Konzeptionelle Entwicklungsbedarfe soziotechnischer Arbeits- und Systemgestaltung [Carry on with MTO? Conceptual development needs in sociotechnical work and systems design]. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2023. [DOI: 10.1007/s11612-023-00669-6]
Abstract
This conceptual contribution takes stock of the current discussion of sociotechnical work and systems design (STS) on the basis of selected publications and presents current search processes and solution approaches with which sociotechnical approaches address present-day challenges of work and systems design in industrial contexts. Building on this, research and development desiderata are identified, and starting points are outlined for arriving at practically viable design methods and solutions. Experiences and initial results from the authors' own applied research are drawn on for this purpose. (Translated from the German.)
9. Samhammer D, Roller R, Hummel P, Osmanodja B, Burchardt A, Mayrdorfer M, Duettmann W, Dabrock P. "Nothing works without the doctor": Physicians' perception of clinical decision-making and artificial intelligence. Front Med (Lausanne) 2022; 9:1016366. [PMID: 36606050] [PMCID: PMC9807757] [DOI: 10.3389/fmed.2022.1016366]
Abstract
Introduction Artificial intelligence-driven decision support systems (AI-DSS) have the potential to help physicians analyze data and facilitate the search for a correct diagnosis or suitable intervention. The potential of such systems is often emphasized. However, implementation in clinical practice deserves continuous attention. This article aims to shed light on the needs and challenges arising from the use of AI-DSS from physicians' perspectives. Methods The basis for this study is a qualitative content analysis of expert interviews with experienced nephrologists after testing an AI-DSS in a straightforward usage scenario. Results The results provide insights on the basics of clinical decision-making, expected challenges when using AI-DSS as well as a reflection on the test run. Discussion While we can confirm the somewhat expectable demand for better explainability and control, other insights highlight the need to uphold classical strengths of the medical profession when using AI-DSS as well as the importance of broadening the view of AI-related challenges to the clinical environment, especially during treatment. Our results stress the necessity for adjusting AI-DSS to shared decision-making. We conclude that explainability must be context-specific while fostering meaningful interaction with the systems available.
Affiliation(s)
- David Samhammer
- Institute for Systematic Theology II (Ethics), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Roland Roller
- German Research Center for Artificial Intelligence (DFKI), Berlin, Germany
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Patrik Hummel
- Department of Industrial Engineering and Innovation Sciences, Philosophy and Ethics Group, TU Eindhoven, Eindhoven, Netherlands
- Bilgin Osmanodja
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Aljoscha Burchardt
- German Research Center for Artificial Intelligence (DFKI), Berlin, Germany
- Manuel Mayrdorfer
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Division of Nephrology and Dialysis, Department of Internal Medicine III, Medical University of Vienna, Vienna, Austria
- Wiebke Duettmann
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Peter Dabrock
- Institute for Systematic Theology II (Ethics), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany