1
Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H, Albashrawi MA, Al-Busaidi AS, Balakrishnan J, Barlette Y, Basu S, Bose I, Brooks L, Buhalis D, Carter L, Chowdhury S, Crick T, Cunningham SW, Davies GH, Davison RM, Dé R, Dennehy D, Duan Y, Dubey R, Dwivedi R, Edwards JS, Flavián C, Gauld R, Grover V, Hu MC, Janssen M, Jones P, Junglas I, Khorana S, Kraus S, Larsen KR, Latreille P, Laumer S, Malik FT, Mardani A, Mariani M, Mithas S, Mogaji E, Nord JH, O'Connor S, Okumus F, Pagani M, Pandey N, Papagiannidis S, Pappas IO, Pathak N, Pries-Heje J, Raman R, Rana NP, Rehm SV, Ribeiro-Navarrete S, Richter A, Rowe F, Sarker S, Stahl BC, Tiwari MK, van der Aalst W, Venkatesh V, Viglia G, Wade M, Walton P, Wirtz J, Wright R. "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 2023. [DOI: 10.1016/j.ijinfomgt.2023.102642]
2
Gordijn B, Ten Have H. Beyond ethical post-mortems. Medicine, Health Care, and Philosophy 2022; 25:305-306. [PMID: 35915370] [DOI: 10.1007/s11019-022-10107-x]
Affiliation(s)
- Bert Gordijn
- Institute of Ethics, Dublin City University, Dublin, Ireland
- Henk Ten Have
- Duquesne University, Pittsburgh, USA
- Anahuac University, Mexico City, Mexico
3
Wieczorek M, O'Brolchain F, Saghai Y, Gordijn B. The ethics of self-tracking: a comprehensive review of the literature. Ethics & Behavior 2022. [DOI: 10.1080/10508422.2022.2082969]
4
Mökander J, Floridi L. Operationalising AI governance through ethics-based auditing: an industry case study. AI and Ethics 2022; 3:451-468. [PMID: 35669570] [PMCID: PMC9152664] [DOI: 10.1007/s43681-022-00171-7]
Abstract
Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA, such as the feasibility and effectiveness of different auditing procedures, have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Department of Legal Studies, University of Bologna, Via Zamboni 33, 40126 Bologna, Italy
5
McLennan S, Fiske A, Tigard D, Müller R, Haddadin S, Buyx A. Embedded ethics: a proposal for integrating ethics into the development of medical AI. BMC Med Ethics 2022; 23:6. [PMID: 35081955] [PMCID: PMC8793193] [DOI: 10.1186/s12910-022-00746-3]
Abstract
The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this paper, we propose that an 'embedded ethics' approach, in which ethicists and developers together address ethical issues via an iterative and continuous process from the outset of development, could be an effective means of integrating robust ethical considerations into the practical development of medical AI.
Affiliation(s)
- Stuart McLennan
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Amelia Fiske
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Daniel Tigard
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Ruth Müller
- Munich Center for Technology in Society, School of Management and School of Life Sciences, Technical University of Munich, Munich, Germany
- Sami Haddadin
- Munich School of Robotics and Machine Intelligence, Technical University of Munich, Munich, Germany
- Alena Buyx
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Munich School of Robotics and Machine Intelligence, Technical University of Munich, Munich, Germany
6
Hermann E. Leveraging Artificial Intelligence in Marketing for Social Good: An Ethical Perspective. Journal of Business Ethics 2022; 179:43-61. [PMID: 34054170] [PMCID: PMC8150633] [DOI: 10.1007/s10551-021-04843-y]
Abstract
Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities that AI systems and applications provide, and will provide, in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors suggest how AI in marketing can be leveraged to promote societal and environmental well-being.
Affiliation(s)
- Erik Hermann
- Wireless Systems, IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany
7
Morley J, Kinsey L, Elhalal A, Garcia F, Ziosi M, Floridi L. Operationalising AI ethics: barriers, enablers and next steps. AI & Society 2021. [DOI: 10.1007/s00146-021-01308-8]
Abstract
By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the 'what' and the 'how' of AI ethics through the creation of a searchable typology of tools and methods designed to translate between the five most common AI ethics principles and implementable design practices. Whilst a useful starting point, that research rested on the assumption that all AI practitioners are aware of the ethical implications of AI, understand their importance, and are actively seeking to respond to them. In reality, it is unclear whether this is the case. It is this limitation that we seek to overcome here by conducting a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What motivates AI practitioners to embed ethical principles into design practices? What barriers do AI practitioners face when attempting to translate ethical principles into practice? And finally, what assistance do AI practitioners want and need when translating ethical principles into practice?
8
Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds Mach (Dordr) 2021; 31:239-256. [PMID: 34720418] [PMCID: PMC8550007] [DOI: 10.1007/s11023-021-09563-w]
Abstract
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if no, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can be potentially overcome by providing theoretical grounding of a concept that has been termed 'Ethics as a Service.'
9
Shaw JA, Donia J. The Sociotechnical Ethics of Digital Health: A Critique and Extension of Approaches From Bioethics. Front Digit Health 2021; 3:725088. [PMID: 34713196] [PMCID: PMC8521799] [DOI: 10.3389/fdgth.2021.725088]
Abstract
The widespread adoption of digital technologies raises important ethical issues in health care and public health. In our view, understanding these ethical issues demands a perspective that looks beyond the technology itself to include the sociotechnical system in which it is situated. In this sense, a sociotechnical system refers to the broader collection of material devices, interpersonal relationships, organizational policies, corporate contracts, and government regulations that shape the ways in which digital health technologies are adopted and used. Bioethical approaches to the assessment of digital health technologies are typically confined to ethical issues raised by features of the technology itself. We suggest that an ethical perspective confined to functions of the technology is insufficient to assess the broader impact of the adoption of technologies on the care environment and the broader health-related ecosystem of which it is a part. In this paper we review existing approaches to the bioethics of digital health, and draw on concepts from design ethics and science & technology studies (STS) to critique a narrow view of the bioethics of digital health. We then describe the sociotechnical system produced by digital health technologies when adopted in health care environments, and outline the various considerations that demand attention for a comprehensive ethical analysis of digital health technologies in this broad perspective. We conclude by outlining the importance of social justice for ethical analysis from a sociotechnical perspective.
Affiliation(s)
- James A Shaw
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Women's College Hospital, Toronto, ON, Canada
- Joseph Donia
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
10
Ethics-based auditing of automated decision-making systems: intervention points and policy implications. AI & Society 2021. [DOI: 10.1007/s00146-021-01286-x]
Abstract
Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations verify claims about their ADMS and (b) provide decision-subjects with justifications for the outputs produced by ADMS. In this article, we outline the conditions under which EBA procedures can be feasible and effective in practice. First, we argue that EBA is best understood as a 'soft' yet 'formal' governance mechanism. This implies that the main responsibility of auditors should be to spark ethical deliberation at key intervention points throughout the software development process and ensure that there is sufficient documentation to respond to potential inquiries. Second, we frame ADMS as parts of larger sociotechnical systems to demonstrate that to be feasible and effective, EBA procedures must link to intervention points that span all levels of organisational governance and all phases of the software lifecycle. The main function of EBA should, therefore, be to inform, formalise, assess, and interlink existing governance structures. Finally, we discuss the policy implications of our findings. To support the emergence of feasible and effective EBA procedures, policymakers and regulators could provide standardised reporting formats, facilitate knowledge exchange, provide guidance on how to resolve normative tensions, and create an independent body to oversee EBA of ADMS.
11
Hermann E, Hermann G, Tremblay JC. Ethical Artificial Intelligence in Chemical Research and Development: A Dual Advantage for Sustainability. Science and Engineering Ethics 2021; 27:45. [PMID: 34231042] [PMCID: PMC8260511] [DOI: 10.1007/s11948-021-00325-6]
Abstract
Artificial intelligence can be a game changer to address the global challenge of humanity-threatening climate change by fostering sustainable development. Since chemical research and development lay the foundation for innovative products and solutions, this study presents a novel chemical research and development process backed with artificial intelligence and guiding ethical principles to account for both process- and outcome-related sustainability. Particularly in ethically salient contexts, ethical principles have to accompany research and development powered by artificial intelligence to promote social and environmental good and sustainability (beneficence) while preventing any harm (non-maleficence) for all stakeholders (i.e., companies, individuals, society at large) affected.
Affiliation(s)
- Erik Hermann
- IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany
12
Stahl BC. AI Ecosystems for Human Flourishing: The Recommendations. SpringerBriefs in Research and Innovation Governance 2021. [PMCID: PMC7968612] [DOI: 10.1007/978-3-030-69978-9_7]
Abstract
This chapter develops the conclusions that can be drawn from the application of the ecosystem metaphor to AI. It highlights the challenges that arise for the ethical governance of AI ecosystems. These provide the basis for the definition of requirements that successful governance interventions have to fulfil. Three main requirements become apparent: the need for a clear delimitation of the boundaries of the ecosystem in question, the provision and maintenance of knowledge and capacities within the ecosystem, and the need for adaptable, flexible and careful governance structures that are capable of reacting to environmental changes. Based on these requirements, the chapter then spells out some recommendations for interventions that are likely to be able to shape AI ecosystems in ways that are conducive to human flourishing.
Affiliation(s)
- Bernd Carsten Stahl
- Centre for Computing and Social Responsibility, De Montfort University, Leicester, UK
13
Fiske A, Tigard D, Müller R, Haddadin S, Buyx A, McLennan S. Embedded Ethics Could Help Implement the Pipeline Model Framework for Machine Learning Healthcare Applications. The American Journal of Bioethics 2020; 20:32-35. [PMID: 33103978] [DOI: 10.1080/15265161.2020.1820101]
14
Seeing and Viewing Through a Postdigital Pandemic: Shifting from Physical Proximity to Scopic Mediation. Postdigital Science and Education 2020. [PMCID: PMC7338282] [DOI: 10.1007/s42438-020-00156-x]
Abstract
This paper addresses a particular area of concern regarding our habits that pertains to embodied experience and education through the Covid-19 pandemic: the shift from comfort to discomfort regarding face-to-face social interaction within the same physical space. To explore this transition, I use the related concepts of seeing and viewing from Isaac Asimov's novel, The Naked Sun (1957). These concepts are useful tools for investigating a probable collateral effect of rapid social distancing for the sake of avoiding contagion: the replacement of physically proximate interactions and procedures with scopic mediation. Seeing and viewing provide concepts for understanding how values change in the midst of fears concerning contagion through physical contact that are mollified through the use of technology analogous to video conferencing. Postdigital education and the concepts of we-think, we-learn, and we-act provide critical tools for helping us understand this transition of perspective regarding educational and social practices in the midst of a pandemic.