1
Groeneveld S, Bin Noon G, den Ouden MEM, van Os-Medendorp H, van Gemert-Pijnen JEWC, Verdaasdonk RM, Morita PP. The Cooperation Between Nurses and a New Digital Colleague "AI-Driven Lifestyle Monitoring" in Long-Term Care for Older Adults: Viewpoint. JMIR Nurs 2024; 7:e56474. [PMID: 38781012] [DOI: 10.2196/56474]
Abstract
Technology has a major impact on the way nurses work. Data-driven technologies, such as artificial intelligence (AI), have particularly strong potential to support nurses in their work. However, their use also introduces ambiguities. An example of such a technology is AI-driven lifestyle monitoring in long-term care for older adults, based on data collected from ambient sensors in an older adult's home. Designing and implementing this technology in such an intimate setting requires collaboration with nurses experienced in long-term and older adult care. This viewpoint paper emphasizes the need to incorporate nurses and the nursing perspective into every stage of designing, using, and implementing AI-driven lifestyle monitoring in long-term care settings. It is argued that the technology will not replace nurses, but rather act as a new digital colleague, complementing the humane qualities of nurses and seamlessly integrating into nursing workflows. Several advantages of such a collaboration between nurses and technology are highlighted, as are potential risks such as decreased patient empowerment, depersonalization, lack of transparency, and loss of human contact. Finally, practical suggestions are offered to move forward with integrating the digital colleague.
Affiliation(s)
- Sjors Groeneveld
- Research Group Technology, Health & Care, Saxion University of Applied Sciences, Enschede, Netherlands
- Research Group Smart Health, Saxion University of Applied Sciences, Enschede, Netherlands
- TechMed Center, Health Technology Implementation, University of Twente, Enschede, Netherlands
- Gaya Bin Noon
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Marjolein E M den Ouden
- Research Group Technology, Health & Care, Saxion University of Applied Sciences, Enschede, Netherlands
- Research Group Care and Technology, Regional Community College of Twente, Hengelo, Netherlands
- Harmieke van Os-Medendorp
- Domain Health, Sports, and Welfare, Inholland University of Applied Sciences, Amsterdam, Netherlands
- Spaarne Gasthuis Academy, Hoofddorp, Netherlands
- J E W C van Gemert-Pijnen
- Centre for eHealth and Wellbeing Research, Section of Psychology, Health and Technology, University of Twente, Enschede, Netherlands
- Rudolf M Verdaasdonk
- TechMed Center, Health Technology Implementation, University of Twente, Enschede, Netherlands
- Plinio Pelegrini Morita
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Research Institute for Aging, University of Waterloo, Waterloo, ON, Canada
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Centre for Digital Therapeutics, Techna Institute, University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
2
Wang W, Wang Y, Chen L, Ma R, Zhang M. Justice at the Forefront: Cultivating felt accountability towards Artificial Intelligence among healthcare professionals. Soc Sci Med 2024; 347:116717. [PMID: 38518481] [DOI: 10.1016/j.socscimed.2024.116717]
Abstract
The advent of AI has ushered in a new era of patient care, but with it emerges a contentious debate surrounding accountability for algorithmic medical decisions. Within this discourse, a spectrum of views prevails, ranging from placing accountability on AI solution providers to laying it squarely on the shoulders of healthcare professionals. In response to this debate, this study, grounded in the mutualistic partner choice (MPC) model of the evolution of morality, seeks to establish a configurational framework for cultivating felt accountability towards AI among healthcare professionals. This framework underscores two pivotal conditions, AI ethics enactment and trusting belief in AI, and considers the influence of organizational complexity on its implementation. Drawing on a fuzzy-set qualitative comparative analysis (fsQCA) of a sample of 401 healthcare professionals, this study reveals that a) emphasizing justice and autonomy in AI ethics enactment, together with building trusting belief in AI reliability and functionality, reinforces healthcare professionals' sense of felt accountability towards AI; b) fostering felt accountability towards AI in high-complexity hospitals requires establishing trust in AI functionality; and c) in low-complexity hospitals, prioritizing justice in AI ethics enactment and trust in AI reliability is essential.
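For readers unfamiliar with the method, fsQCA calibrates raw measures (e.g., Likert-scale survey items) into fuzzy-set memberships and then scores how consistently a configuration of conditions behaves as a subset of the outcome. The minimal Python sketch below illustrates only that core calculation; the anchors, sample data, and variable names are hypothetical assumptions, not the authors' code or data.

```python
import numpy as np

def calibrate(raw, full_out, crossover, full_in):
    """Direct calibration: map raw scores to fuzzy membership in [0, 1]
    via log-odds, using three qualitative anchors."""
    # Anchors map to log-odds of about -3 and +3 (memberships ~0.05 and ~0.95).
    log_odds = np.where(
        raw >= crossover,
        3.0 * (raw - crossover) / (full_in - crossover),
        3.0 * (raw - crossover) / (crossover - full_out),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

def consistency(condition, outcome):
    """Degree to which the condition configuration is a subset of the outcome."""
    return np.minimum(condition, outcome).sum() / condition.sum()

def coverage(condition, outcome):
    """Share of the outcome accounted for by the configuration."""
    return np.minimum(condition, outcome).sum() / outcome.sum()

# Hypothetical 7-point survey scores for two conditions and the outcome,
# calibrated with anchors at 2 (fully out), 4 (crossover), and 6 (fully in).
rng = np.random.default_rng(0)
justice = calibrate(rng.uniform(1, 7, 401), 2, 4, 6)
trust_reliability = calibrate(rng.uniform(1, 7, 401), 2, 4, 6)
felt_accountability = calibrate(rng.uniform(1, 7, 401), 2, 4, 6)

# A configuration is the fuzzy intersection (element-wise minimum) of conditions.
config = np.minimum(justice, trust_reliability)
print(f"consistency: {consistency(config, felt_accountability):.2f}")
print(f"coverage:    {coverage(config, felt_accountability):.2f}")
```

In practice, configurations whose consistency exceeds a threshold (commonly around 0.8) are treated as sufficient for the outcome, which is the kind of evidence behind findings a) to c) above.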
Affiliation(s)
- Weisha Wang
- Research Center for Smarter Supply Chain, Business School, Soochow University, 50 Donghuan Road, Suzhou, 215006, China.
- Yichuan Wang
- Sheffield University Management School, University of Sheffield, Conduit Rd, Sheffield, S10 1FL, United Kingdom.
- Long Chen
- Brunel University London, United Kingdom.
- Rui Ma
- Greenwich Business School, University of Greenwich, United Kingdom.
- Minhao Zhang
- University of Bristol School of Management, University of Bristol, United Kingdom.
3
Pyo J, Pachepsky Y, Kim S, Abbas A, Kim M, Kwon YS, Ligaray M, Cho KH. Long short-term memory models of water quality in inland water environments. Water Research X 2023; 21:100207. [PMID: 38098887] [PMCID: PMC10719578] [DOI: 10.1016/j.wroa.2023.100207]
Abstract
Water quality is substantially influenced by a multitude of dynamic and interrelated variables, including climate conditions, land use, and seasonal changes. Deep learning models have demonstrated predictive power for water quality owing to their superior ability to automatically learn complex patterns and relationships among variables. Long short-term memory (LSTM), a deep learning model for water quality prediction, is a type of recurrent neural network that can capture longer-term traits of time-dependent data; it is the most widely applied network for predicting time series of water quality variables. First, we reviewed applications of the standalone LSTM and compared its calculation time, prediction accuracy, and robustness with those of process-driven numerical models and other machine learning methods. The review was then expanded to LSTM models combined with data pre-processing techniques, including the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise method and the Synchrosqueezed Wavelet Transform. The review next focused on the coupling of LSTM with convolutional neural networks, attention networks, and transfer learning; the coupled networks outperformed the standalone LSTM model. We also emphasized the influence of static variables in the model and of transformation methods applied to the dataset. An outlook on remaining challenges and on future research and application of LSTM in hydrology concludes the review.
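To make the standalone LSTM setup concrete, here is a minimal PyTorch sketch of one-step-ahead prediction of a water quality variable from a sliding window of multivariate sensor observations. The window length, feature set, and synthetic data are illustrative assumptions, not taken from any study in the review.

```python
import torch
import torch.nn as nn

class WaterQualityLSTM(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # one-step-ahead regression

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # last hidden state -> prediction

# Synthetic stand-in for windowed sensor data: 7-step windows of 5 variables
# (e.g., temperature, pH, dissolved oxygen, turbidity, flow).
x = torch.randn(32, 7, 5)
y = torch.randn(32, 1)

model = WaterQualityLSTM(n_features=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(10):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

The coupled variants discussed in the review extend this same backbone, for example by feeding CNN-extracted spatial features into the LSTM, weighting time steps with an attention layer, or fine-tuning pretrained weights on a data-scarce site.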
Affiliation(s)
- JongCheol Pyo
- Department for Environmental Engineering, Pusan National University, Busan 46241, Republic of Korea
- Yakov Pachepsky
- Environmental Microbial and Food Safety Laboratory, USDA-ARS, Beltsville, MD, USA
- Soobin Kim
- School of Civil, Urban, Earth, and Environmental Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulju-gun, Ulsan 44919, Republic of Korea
- Disposal Safety Evaluation R&D Division, Korea Atomic Energy Research Institute (KAERI), 111, Daedeok-daero 989 beon-gil, Yuseong-gu, Daejeon 34057, Republic of Korea
- Ather Abbas
- Physical Sciences and Engineering, King Abdullah University of Science and Technology, Thuwal 23955-6900, Kingdom of Saudi Arabia
- Minjeong Kim
- Disposal Safety Evaluation R&D Division, Korea Atomic Energy Research Institute (KAERI), 111, Daedeok-daero 989 beon-gil, Yuseong-gu, Daejeon 34057, Republic of Korea
- Yong Sung Kwon
- Environmental Impact Assessment Team, Division of Ecological Assessment Research, National Institute of Ecology, Seocheon, Republic of Korea
- Mayzonee Ligaray
- Institute of Environmental Science and Meteorology, College of Science, University of the Philippines Diliman, Quezon City 1101, Philippines
- Kyung Hwa Cho
- School of Civil, Environmental and Architectural Engineering, Korea University, Seoul 02841, Republic of Korea
4
Hine E, Yousefi Y, Osivand P, Brand D, Kugler K, Chiara PG. The AI Act Grand Challenge shows how autonomous robots will be regulated. Sci Robot 2023; 8:eadk5632. [PMID: 37992193] [DOI: 10.1126/scirobotics.adk5632]
Abstract
One of the winning teams of the EU AI Act Grand Challenge analyzes how the AI Act will regulate robots.
Affiliation(s)
- Emmie Hine
- Department of Legal Studies, University of Bologna, Via Zamboni 27/29, Bologna 40126, Italy
- Yasaman Yousefi
- Department of Legal Studies, University of Bologna, Via Zamboni 27/29, Bologna 40126, Italy
- Parisa Osivand
- Dalla Lana School of Public Health, University of Toronto, 155 College St., Toronto, ON M5T 3M7, Canada
- Dirk Brand
- School of Public Leadership, Stellenbosch University, Carl Cronje Dr., Cape Town 7530, South Africa
- Kholofelo Kugler
- Faculty of Law, University of Lucerne, Frohburgstrasse 3, Postfach 4466, Lucerne 6002, Switzerland
- Pier Giorgio Chiara
- Department of Legal Studies, University of Bologna, Via Zamboni 27/29, Bologna 40126, Italy
5
Mazzini G, Bagni F. Considerations on the regulation of AI systems in the financial sector by the AI Act. Front Artif Intell 2023; 6:1277544. [PMID: 38028663] [PMCID: PMC10676199] [DOI: 10.3389/frai.2023.1277544]
Abstract
The proposal for the Artificial Intelligence regulation in the EU (AI Act) is a horizontal legal instrument that aims to regulate, according to a tailored risk-based approach, the development and use of AI systems across a plurality of sectors, including the financial sector. In particular, AI systems intended to be used to evaluate the creditworthiness or establish the credit score of natural persons are classified as "high-risk AI systems". The proposal, tabled by the Commission in April 2021, is currently at the center of intense interinstitutional negotiations between the two branches of the European legislature, the European Parliament and the Council. Without prejudice to the ongoing legislative deliberations, the paper aims to provide an overview of the main elements and choices made by the Commission in respect of the regulation of AI in the financial sector, as well as of the position taken in that regard by the European Parliament and Council.
Affiliation(s)
- Filippo Bagni
- IMT School for Advanced Studies of Lucca, Lucca, Italy
6
Finocchiaro G. The regulation of artificial intelligence. AI & Society 2023. [DOI: 10.1007/s00146-023-01650-z]
Abstract
Before embarking on a discussion of the regulation of artificial intelligence (AI), it is first necessary to define the subject matter regulated. Defining artificial intelligence is a difficult endeavour, and many definitions have been proposed over the years. Although it was proposed more than 70 years ago, the most convincing definition is still that of Turing; in any case, it is important to be mindful of the risk of anthropomorphising artificial intelligence, which may arise in particular from its very definition. Once we have established the subject matter regulated, we must ask ourselves whether lawmakers should pursue an approach that seeks to regulate artificial intelligence as a whole, or whether by contrast they should regulate applications of artificial intelligence in specific sectors or individual areas. The proposal for a regulation on artificial intelligence published on 21 April 2021 implements the former approach whilst also pursuing geopolitical goals. After providing an initial overview of the notion of artificial intelligence, this article investigates the geopolitical context of the proposal for a regulation, and then goes on to illustrate the regulatory model embraced by the proposal as well as related critical aspects.
7
Approval and Certification of Ophthalmic AI Devices in the European Union. Ophthalmol Ther 2023; 12:633-638. [PMID: 36652171] [PMCID: PMC10011240] [DOI: 10.1007/s40123-023-00652-w]
Abstract
Artificial intelligence (AI)-based medical devices are already commercially available in Europe. The regulations surrounding the introduction and use of medical AI devices in the European Union (EU) differ from those in the USA, and the specifics of European legislation on medical AI are not commonly known. European law classifies medical devices into four classes: I, IIa, IIb, and III, depending on the perceived risk level of the device. Medical devices are certified by independent nongovernmental bodies, and some manufacturers can even self-certify their compliance with EU standards. The European "open" approach is vastly different from the strict perspective of the FDA, as reflected by the number of available medical AI devices. The EU is currently in a transitory period between two regulations, further complicating the legislative landscape. The devices in question deal with extremely sensitive data, collecting, processing, and sending images and diagnoses over the internet. The EU approach places a large burden of verifying the effectiveness and integrity of an AI device on the consumer, without giving consumers many tools to do so effectively. This highlights the need for effective legislation and oversight from governing bodies, as well as the need to understand the legalities and limitations of AI devices among those implementing them in clinical practice.
8
From EU Robotics and AI Governance to HRI Research: Implementing the Ethics Narrative. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00982-6]
Abstract
In recent years, the European Union has made considerable efforts to develop dedicated strategies and policies for the governance of robotics and AI. An important component of the EU's approach is its emphasis on the need to mitigate the potential societal impacts of the expected rise in the interactive capacities of autonomous systems. In the quest to define and implement new policies addressing this issue, ethical notions have taken an increasingly central position. This paper presents a concise overview of the integration of this ethics narrative in the EU's policy plans. It demonstrates how the ethics narrative aids the definition of policy issues and the establishment of new policy ideas. Crucially, in this context, robotics and AI are explicitly understood as emerging technologies. This implies many ambiguities about their actual future impact, which in turn results in uncertainty regarding the effective implementation of policies that draw on the ethics narrative. In an effort to develop clearer pathways towards the further development of ethical notions in AI and robotics governance, this paper positions human-robot interaction (HRI) research as a field that can play an important role in the implementation of ethics. Four complementary pathways towards ethics integration in HRI research are proposed: providing insights for the improvement of ethical assessment, researching the moral competence of artificial agents, engaging in value-based design and implementation of robots, and participating in discussions on building ethical sociotechnical systems around robots.
9
Fontes C, Corrigan C, Lütge C. Governing AI during a pandemic crisis: Initiatives at the EU level. Technology in Society 2023; 72:102204. [PMID: 36777094] [PMCID: PMC9894826] [DOI: 10.1016/j.techsoc.2023.102204]
Abstract
After the outbreak of Covid-19, the European Commission (EC) promptly took the initiative to lead and coordinate a common European response. The actions unfolded in several directions, paving the way for the uptake of AI-related solutions and placing hope in these tools for facing crises, particularly those of a global public health nature. In this article, we focus on initiatives for moving the uptake of AI-related solutions from the experimental level towards implementation. The Repository of AI and Robotics solutions, launched in 2020, is an example of an initiative put forth to leverage and disseminate knowledge on AI, expanding the fields of application and fostering the development and adaptation of cutting-edge technologies to explore how they can assist with specific tasks during a public health crisis. Using this database, the article outlines the promise of AI as a hope for handling specific needs and tasks and how the uptake of such technologies was accelerated during the Covid-19 pandemic. Finally, we frame initiatives for the uptake of AI-enabled solutions from a governance perspective, focusing on the establishment of frameworks for ethical and trustworthy AI by defining principles and standards that aim to protect the underlying values deemed fundamental.
Affiliation(s)
- Catarina Fontes
- Technical University of Munich, School of Social Sciences and Technology, Institute for Ethics in Artificial Intelligence, München, 80333, Germany
- Caitlin Corrigan
- Technical University of Munich, School of Social Sciences and Technology, Institute for Ethics in Artificial Intelligence, München, 80333, Germany
- Christoph Lütge
- Technical University of Munich, School of Social Sciences and Technology, Institute for Ethics in Artificial Intelligence, München, 80333, Germany
10
Leading the Charge on Digital Regulation: The More, the Better, or Policy Bubble? Digital Society 2023; 2:4. [PMID: 36686333] [PMCID: PMC9844176] [DOI: 10.1007/s44206-023-00033-7]
Abstract
For about a decade, the concept of 'digital sovereignty' has been prominent in the European policy discourse. In the quest for digital sovereignty, the European Union has adopted a constitutional approach to protect fundamental rights and democratic values, and to ensure fair and competitive digital markets. Thus, 'digital constitutionalism' emerged as a twin discourse. A corollary of these discourses is a third phenomenon resulting from a regulatory externalisation of European law beyond the bloc's borders, the so-called 'Brussels Effect'. The dynamics arising from Europe's digital policy and regulatory activism imply increasing legal complexities. This paper argues that this phenomenon in policy-making is a case of a positive 'policy bubble' characterised by an oversupply of policies and legislative acts. The phenomenon can be explained by the amplification of values in the framing of digital policy issues. To unpack the policy frames and values at stake, this paper provides an overview of the digital policy landscape, followed by a critical assessment to showcase the practical implications of positive policy bubbles.
11
Occhipinti C, Carnevale A, Briguglio L, Iannone A, Bisconti P. SAT: a methodology to assess the social acceptance of innovative AI-based technologies. Journal of Information, Communication and Ethics in Society 2022. [DOI: 10.1108/jices-09-2021-0095]
Abstract
Purpose
The purpose of this paper is to present the conceptual model of an innovative methodology (SAT) to assess the social acceptance of technology, especially focusing on artificial intelligence (AI)-based technology.
Design/methodology/approach
After a review of the literature, this paper presents the main ways in which SAT stands out from current methods, namely, a four-bubble approach and a mix of qualitative and quantitative techniques that assess technology as a socio-technical system. Each bubble determines the social variability of a cluster of values: User-Experience Acceptance, Social Disruptiveness, Value Impact and Trust.
Findings
The methodology is still in development, requiring further refinement, specification and validation. Accordingly, the findings of this paper belong to the realm of research discussion, highlighting the importance of preventively assessing and forecasting the acceptance of technology and of building design strategies that boost sustainable and ethical technology adoption.
Social implications
Once the SAT method is validated, it could constitute a useful tool, with societal implications, for helping users, markets and institutions to appraise and determine the co-implications of technology and socio-cultural contexts.
Originality/value
New AI applications flood today's users and markets, often without a clear understanding of their risks and impacts. In the European context, regulations (the EU AI Act) and rules (the EU Ethics Guidelines for Trustworthy AI) try to fill this normative gap. The SAT method seeks to integrate the risk-based assessment of AI with an assessment of the perceptive-psychological and socio-behavioural aspects of its social acceptability.
12
The environmental challenges of AI in EU law: lessons learned from the Artificial Intelligence Act (AIA) with its drawbacks. Transforming Government: People, Process and Policy 2022. [DOI: 10.1108/tg-07-2021-0121]
Abstract
Purpose
The paper aims to examine the environmental challenges of artificial intelligence (AI) in EU law, which concern both illicit uses of the technology, i.e. the overuse or misuse of AI, and its possible underuse. The paper shows how such regulatory efforts of legislators should be understood as a critical component of the Green Deal of the EU institutions, that is, the effort to save our planet from impoverishment, plunder and destruction.
Design/methodology/approach
To illustrate the different ways in which AI can represent a game-changer for our environmental challenges, attention is drawn to a multidisciplinary approach, which includes the analysis of the initiatives on the European Green Deal; the proposals for a new legal framework on data governance and AI; principles of environmental and constitutional law; the interaction of such principles and provisions of environmental and constitutional law with AI regulations; other sources of EU law and of its Member States.
Findings
Most recent initiatives on AI, including the AI Act (AIA) of the European Commission, have insisted on a human-centric approach, whereas it seems obvious that the challenges of environmental law, including those triggered by AI, should be addressed in accordance with an ontocentric, rather than anthropocentric, stance. The paper provides four recommendations addressing the legal consequences of this short-sighted view, including the lack of environmental concerns in the AIA.
Research limitations/implications
The environmental challenges of AI suggest complementing current regulatory efforts of EU lawmakers with a new generation of eco-impact assessments; duties of care and disclosure of non-financial information; clearer parameters for the implementation of the integration principle in EU constitutional law; special policies for the risk of underusing AI for environmental purposes. Further research should examine these policies in connection with the principle of sustainability and the EU plan for a circular economy, as another crucial ingredient of the Green Deal.
Practical implications
The paper provides a set of concrete measures to properly tackle both illicit uses of AI and the risk of its possible underuse for environmental purposes. Such measures do not only concern the “top down” efforts of legislators but also litigation and the role of courts. Current trends of climate change litigation and the transplant of class actions into several civil law jurisdictions shed new light on the ways in which we should address the environmental challenges of AI, even before a court.
Social implications
The analysis of the legal threats and opportunities brought forth by AI supports a more robust protection of people's right to a high level of environmental protection and to the improvement of the quality of the environment.
Originality/value
The paper explores a set of issues, often overlooked by scholars and institutions, that is nonetheless crucial for any Green Deal, such as the distinction between the human-centric approach of current proposals in the field of technological regulation and the traditional ontocentric stance of environmental law. The analysis considers for the first time the legal issues that follow this distinction in the field of AI regulation and how we should address them.
13
Verdicchio M, Perin A. When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Philosophy & Technology 2022; 35:11. [PMID: 35223383] [PMCID: PMC8857871] [DOI: 10.1007/s13347-022-00506-6]
Abstract
A discussion concerning whether to conceive Artificial Intelligence (AI) systems as responsible moral entities, also known as "artificial moral agents" (AMAs), has been going on for some time. In this regard, we argue that the notion of "moral agency" is to be attributed only to humans, based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With this perspective in mind, we focus on the use of AI-based diagnostic systems and shed light on the complex networks of persons, organizations and artifacts that come to be when AI systems are designed, developed, and used in medicine. We then discuss relational criteria of judgment in support of the attribution of responsibility to humans when adverse events are caused or induced by errors in AI systems.
Affiliation(s)
- Mario Verdicchio
- Department of Management Information and Production Engineering, University of Bergamo, Bergamo, Italy
- Berlin Ethics Lab, Technische Universität Berlin, Berlin, Germany
- Andrea Perin
- Facultad de Derecho, Universidad Andrés Bello, Santiago de Chile, Chile
14
The Good, the Bad, and the Invisible with Its Opportunity Costs: Introduction to the ‘J’ Special Issue on “the Impact of Artificial Intelligence on Law”. J 2022. [DOI: 10.3390/j5010011]
Abstract
Scholars and institutions have been increasingly debating the moral and legal challenges of AI, together with the models of governance that should strike the balance between the opportunities and threats brought forth by AI, its 'good' and 'bad' facets. There are more than a hundred declarations on the ethics of AI, and recent proposals for AI regulation, such as the European Commission's AI Act, have further multiplied the debate. Still, one normative challenge of AI is mostly overlooked: the underuse, rather than the misuse or overuse, of AI from a legal viewpoint. From health care to environmental protection, from agriculture to transportation, there are many instances in which the full set of benefits and promises of AI can be missed or exploited far below its potential, and for the wrong reasons: business disincentives and greed among data keepers, bureaucracy and professional reluctance, or public distrust in the era of no-vax conspiracy theories. The opportunity costs that follow from this technological underuse are almost terra incognita due to the 'invisibility' of the phenomenon, which includes the 'shadow prices' of the economy. This introduction provides metrics for such an assessment and relates this work to the development of new standards for the field. We must quantify how much it costs not to use AI systems for the wrong reasons.
15
Borsci S, Lehtola VV, Nex F, Yang MY, Augustijn EW, Bagheriye L, Brune C, Kounadi O, Li J, Moreira J, Van Der Nagel J, Veldkamp B, Le DV, Wang M, Wijnhoven F, Wolterink JM, Zurita-Milla R. Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. AI & Society 2022. [DOI: 10.1007/s00146-021-01383-x]
Abstract
The European Union (EU) Commission's whitepaper on Artificial Intelligence (AI) proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles to the implementation of this plan: (i) the lack of a coherent EU vision to drive future decision-making processes at state and local levels and (ii) the lack of methods to support a sustainable diffusion of AI in our society. The lack of a coherent vision stems from not considering societal differences across the EU member states. We suggest that these differences may lead to a fractured market and an AI crisis in which different members of the EU adopt nation-centric strategies to exploit AI, thus preventing the development of the frictionless market envisaged by the EU. Moreover, the Commission aims at changing the AI development culture by proposing a human-centred and safety-first perspective that is not supported by methodological advancements, thus risking unforeseen social and societal impacts of AI. We discuss potential societal, technical, and methodological gaps that should be filled to avoid the risk of developing AI systems at the expense of society. Our analysis results in the recommendation that EU regulators and policymakers consider how to complement the EC programme with rules and compensatory mechanisms to avoid market fragmentation due to local and global ambitions. Moreover, regulators should go beyond the human-centred approach by establishing a research agenda seeking answers to the open technical and methodological questions regarding the development and assessment of human-AI co-action, aiming for a sustainable diffusion of AI in society.
16
Monteith S, Glenn T, Geddes J, Whybrow PC, Achtyes E, Bauer M. Expectations for Artificial Intelligence (AI) in Psychiatry. Curr Psychiatry Rep 2022; 24:709-721. [PMID: 36214931] [PMCID: PMC9549456] [DOI: 10.1007/s11920-022-01378-5]
Abstract
Purpose of Review
Artificial intelligence (AI) is often presented as a transformative technology for clinical medicine even though the current technology maturity of AI is low. The purpose of this narrative review is to describe the complex reasons for the low technology maturity and set realistic expectations for the safe, routine use of AI in clinical medicine.
Recent Findings
For AI to be productive in clinical medicine, many diverse factors that contribute to the low maturity level need to be addressed. These include technical problems such as data quality, dataset shift, black-box opacity, and validation and regulatory challenges, and human factors such as a lack of education in AI, workflow changes, automation bias, and deskilling. There will also be new and unanticipated safety risks with the introduction of AI. The solutions to these issues are complex and will take time to discover, develop, validate, and implement. However, addressing the many problems in a methodical manner will expedite the safe and beneficial use of AI to augment medical decision making in psychiatry.
Affiliation(s)
- Scott Monteith
- Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, MI, 49684, USA.
- John Geddes
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Peter C. Whybrow
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA USA
- Eric Achtyes
- Michigan State University College of Human Medicine, Grand Rapids, MI 49684, USA
- Network180, Grand Rapids, MI, USA
- Michael Bauer
- Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany
17
Mökander J, Axente M, Casolari F, Floridi L. Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds Mach (Dordr) 2021; 32:241-268. [PMID: 34754142] [PMCID: PMC8569069] [DOI: 10.1007/s11023-021-09577-4]
Abstract
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit not in those terms. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK
- Maria Axente
- UK All Party Parliamentary Group on AI (APPG AI), London, UK
- Federico Casolari
- Department of Legal Studies, University of Bologna, via Zamboni 27/29, 40126 Bologna, Italy
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK
- The Alan Turing Institute, The British Library, 96 Euston Rd, London, NW1 2DB UK
18
Langer M, König CJ. Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. Human Resource Management Review 2021. [DOI: 10.1016/j.hrmr.2021.100881]
19
The European Commission’s Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS). J 2021. [DOI: 10.3390/j4040043]
Abstract
On 21 April 2021, the European Commission presented its long-awaited proposal for a Regulation “laying down harmonized rules on Artificial Intelligence”, the so-called “Artificial Intelligence Act” (AIA). This article takes a critical look at the proposed regulation. After an introduction (1), the paper analyzes the unclear preemptive effect of the AIA and EU competences (2), the scope of application (3), the prohibited uses of Artificial Intelligence (AI) (4), the provisions on high-risk AI systems (5), the obligations of providers and users (6), the requirements for AI systems with limited risks (7), the enforcement system (8), the relationship of the AIA with the existing legal framework (9), and the regulatory gaps (10). The last section draws some final conclusions (11).