1.
Ratti E, Morrison M, Jakab I. Ethical and social considerations of applying artificial intelligence in healthcare-a two-pronged scoping review. BMC Med Ethics 2025; 26:68. [PMID: 40420080] [DOI: 10.1186/s12910-025-01198-1]
Abstract
BACKGROUND Artificial Intelligence (AI) is being designed, tested, and in many cases actively employed in almost every aspect of healthcare, from primary care to public health. It is by now well established that any application of AI carries an attendant responsibility to consider the ethical and societal aspects of its development, deployment and impact. However, in the rapidly developing field of AI, developments such as machine learning, neural networks, generative AI, and large language models have the potential to raise new and distinct ethical and social issues compared to, for example, automated data processing or more 'basic' algorithms. METHODS This article presents a scoping review of the ethical and social issues pertaining to AI in healthcare, with a novel two-pronged design. One strand of the review (SR1) consists of a broad review of the academic literature restricted to a recent timeframe (2021-23), to better capture up-to-date developments and debates. The second strand (SR2) consists of a narrow review, limited to prior systematic and scoping reviews on the ethics of AI in healthcare, but extended over a longer timeframe (2014-2024) to capture longstanding and recurring themes and issues in the debate. This strategy provides a practical way to deal with an increasingly voluminous literature on the ethics of AI in healthcare in a way that accounts for both the depth and evolution of the literature. RESULTS SR1 captures the heterogeneity of audience, medical fields, and ethical and societal themes (and their tradeoffs) raised by AI systems. SR2 provides a comprehensive picture of the way scoping reviews on ethical and societal issues in AI in healthcare have been conceptualized, as well as the trends and gaps identified.
CONCLUSION Our analysis shows that the typical approach to ethical issues in AI, which is based on the appeal to general principles, becomes increasingly unlikely to do justice to the nuances and specificities of the ethical and societal issues raised by AI in healthcare, as the technology moves from abstract debate and discussion to real world situated applications and concerns in healthcare settings.
Affiliation(s)
- Emanuele Ratti
- Department of Philosophy, Cotham House, University of Bristol, Bristol, BS6 6JL, UK
- Michael Morrison
- Helex - Centre for Health, Law and Emerging Technologies, Faculty of Law, University of Oxford, St Cross Building, Room 201, St Cross Road, Oxford, OX1 3UL, UK
- Institute for Science, Innovation and Society, School of Anthropology and Museum Ethnography, University of Oxford, 64 Banbury Road, Oxford, OX2 6PN, UK
- Ivett Jakab
- YAGHMA B.V., Poortweg 6C, Delft, Netherlands
2.
Lindgren H, Lindvall K, Richter-Sundberg L. Responsible design of an AI system for health behavior change-an ethics perspective on the participatory design process of the STAR-C digital coach. Front Digit Health 2025; 7:1436347. [PMID: 40134464] [PMCID: PMC11934961] [DOI: 10.3389/fdgth.2025.1436347]
Abstract
Introduction The increased focus on the ethical aspects of artificial intelligence (AI) follows the increased use in society of data-driven analyses of personal information collected in the use of digital applications for various purposes that the individual is often not aware of. The purpose of this study is to investigate how values and norms are transformed into design choices in a participatory design process of an AI-based digital coaching application for promoting health and preventing cardiovascular disease, where a variety of expertise and perspectives are represented. Method A participatory design process was conducted engaging domain professionals and potential users in co-design workshops, interviews and observations of prototype use. The design process and outcome were analyzed from a responsible design of AI systems perspective. Results The results include a deepened understanding of the values and norms underlying health coaching applications and of how an AI-based intervention could provide person-tailored support in managing conflicting norms. Further, the study contributes to increased awareness of the value of participatory design in achieving value-based design of AI systems aimed at promoting health through behavior change, and of the inclusion of social norms as a design material in the process. Conclusion It was concluded that the relationship between the anticipated future users and the organization(s) or enterprises developing and implementing the health-promoting application directs which values are manifested in the application.
Affiliation(s)
- Helena Lindgren
- Department of Computing Science, Umeå University, Umeå, Sweden
3.
Winfield AFT, Swana M, Ives J, Hauert S. On the ethical governance of swarm robotic systems in the real world. Philos Trans A Math Phys Eng Sci 2025; 383:20240142. [PMID: 39880033] [PMCID: PMC11779541] [DOI: 10.1098/rsta.2024.0142]
Abstract
In this paper, we address the question: what practices would be required for the responsible design and operation of real-world swarm robotic systems? We argue that swarm robotic systems must be developed and operated within a framework of ethical governance. We will also explore the human factors surrounding the operation and management of swarm systems, advancing the view that human factors are no less important to swarm robots than to social robots. Ethical governance must be anticipatory, and a powerful method for practical anticipatory governance is ethical risk assessment (ERA). As case studies, this paper includes four worked examples of ERAs for fictional but realistic real-world swarms. Although of key importance, ERA is not the only tool available to the responsible roboticist. We outline the supporting role of ethical principles, standards, and verification and validation. Given that real-world swarm robotic systems are likely to be deployed in diverse ecologies, we also ask: how can swarm robotic systems be sustainable? We bring all of these ideas together to describe the complete life cycle of swarm robotic systems, showing where and how the tools and interventions are applied within a framework of anticipatory ethical governance. This article is part of the theme issue 'The road forward with swarm systems'.
Affiliation(s)
- Alan F. T. Winfield
- Bristol Robotics Laboratory, University of the West of England, Bristol BS16 1QY, UK
- Matimba Swana
- School of Engineering Mathematics and Technology, University of Bristol, Bristol BS8 1TW, UK
- Jonathan Ives
- Centre for Ethics in Medicine, Bristol Medical School, University of Bristol, Bristol BS8 1TW, UK
- Sabine Hauert
- Bristol Robotics Laboratory, School of Engineering Mathematics and Technology, University of Bristol, Bristol BS8 1TW, UK
4.
Idaikkadar N, Bodin E, Cholli P, Navon L, Ortmann L, Banja J, Waller LA, Alic A, Yuan K, Law R. Advancing Ethical Considerations for Data Science in Injury and Violence Prevention. Public Health Rep 2025:333549241312055. [PMID: 39834075] [PMCID: PMC11748135] [DOI: 10.1177/00333549241312055]
Abstract
Data science is an emerging field that provides new analytical methods. It incorporates novel data sources (eg, internet data) and methods (eg, machine learning) that offer valuable and timely insights into public health issues, including injury and violence prevention. The objective of this research was to describe ethical considerations for public health data scientists conducting injury and violence prevention-related data science projects to prevent unintended ethical, legal, and social consequences, such as loss of privacy or loss of public trust. We first reviewed foundational bioethics and public health ethics literature to identify key ethical concepts relevant to public health data science. After identifying these ethics concepts, we held a series of discussions to organize them under broad ethical domains. Within each domain, we examined relevant ethics concepts from our review of the primary literature. Lastly, we developed questions for each ethical domain to facilitate the early conceptualization stage of the ethical analysis of injury and violence prevention projects. We identified 4 ethical domains: privacy, responsible stewardship, justice as fairness, and inclusivity and engagement. We determined that each domain carries equal weight, with no consideration bearing more importance than the others. Examples of ethical considerations are clearly identifying project goals, determining whether people included in projects are at risk of reidentification through external sources or linkages, and evaluating and minimizing the potential for bias in data sources used. As data science methodologies are incorporated into public health research to work toward reducing the effect of injury and violence on individuals, families, and communities in the United States, we recommend that relevant ethical issues be identified, considered, and addressed.
Affiliation(s)
- Nimi Idaikkadar
- Division of Injury Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Eva Bodin
- Office of Readiness and Response, Immediate Office of the Director, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Preetam Cholli
- National Center for HIV, Viral Hepatitis, STD, and TB Prevention, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Livia Navon
- Division of Injury Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Leonard Ortmann
- Office of Public Health Ethics and Regulations, Office of Science, Centers for Disease Control and Prevention, Atlanta, GA, USA
- John Banja
- Center for Ethics, Emory University, Atlanta, GA, USA
- Lance A. Waller
- Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, USA
- Alen Alic
- Division of Injury Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Keming Yuan
- Division of Injury Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Royal Law
- Division of Injury Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
5.
Muyskens K, Ma Y, Dunn M. Can an AI-carebot be filial? Reflections from Confucian ethics. Nurs Ethics 2024; 31:999-1009. [PMID: 38472138] [DOI: 10.1177/09697330241238332]
Abstract
This article discusses the application of artificially intelligent robots within eldercare and explores a series of ethical considerations, including the challenges that AI (Artificial Intelligence) technology poses to traditional Chinese Confucian filial piety. From the perspective of Confucian ethics, the paper argues that robots cannot adequately fulfill duties of care. Due to their detachment from personal relationships and interactions, the "emotions" of AI robots are merely performative reactions in different situations, rather than actual emotional abilities. No matter how "humanized" robots become, it is difficult to establish genuine empathy and a meaningful relationship with them for this reason. Even so, we acknowledge that AI robots are a significant tool in managing the demands of elder care and the growth of care poverty, and as such, we attempt to outline some parameters within which care robotics could be acceptable within a Confucian ethical system. Finally, the paper discusses the social impact and ethical considerations brought on by the interaction between humans and machines. It is observed that the relationship between humans and technology has always had both utopian and dystopian aspects, and robotic elder care is no exception. AI caregiver robots will likely become a part of elder care, and the transformation of these robots from "service providers" to "companions" seems inevitable. In light of this, the application of AI-augmented robotic elder care will also eventually change our understanding of interpersonal relationships and traditional requirements of filial piety.
6.
Pozzi G, De Proost M. Keeping an AI on the mental health of vulnerable populations: reflections on the potential for participatory injustice. AI Ethics 2024; 5:2281-2291. [PMID: 40421378] [PMCID: PMC12103376] [DOI: 10.1007/s43681-024-00523-5]
Abstract
Considering the overall shortage of therapists to meet the psychological needs of vulnerable populations, AI-based technologies are often seen as a possible remedy. Particularly smartphone apps or chatbots are increasingly used to offer mental health support, mostly through cognitive behavioral therapy. The assumption underlying the deployment of these systems is their ability to make mental health support accessible to generally underserved populations. Hence, this seems to be aligned with the fundamental biomedical principle of justice understood in its distributive meaning. However, considerations of the principle of justice in its epistemic significance are still in their infancy in the debates revolving around the ethical issues connected to the use of mental health chatbots. This paper aims to fill this research gap, focusing on a less familiar kind of harm that these systems can cause, namely the harm to users in their capacities as knowing subjects. More specifically, we frame our discussion in terms of one form of epistemic injustice that such practices are especially prone to bring about, i.e., participatory injustice. To make our theoretical analysis more graspable and to show its urgency, we discuss the case of a mental health chatbot, Karim, deployed to deliver mental health support to Syrian refugees. This case substantiates our theoretical considerations and the epistemo-ethical concerns arising from the use of mental health applications among vulnerable populations. Finally, we argue that conceptualizing epistemic participation as a capability within the framework of Capability Sensitive Design can be a first step toward ameliorating the participatory injustice discussed in this paper.
Affiliation(s)
- Giorgia Pozzi
- Delft University of Technology, Faculty of Technology, Policy and Management, Jaffalaan 5, 2628 BX Delft, The Netherlands
- Michiel De Proost
- Ghent University, Faculty of Arts and Philosophy, Department of Philosophy and Moral Sciences, Blandijnberg 2, B-9000 Gent, Belgium
7.
Long Y, Novak L, Walsh CG. Searching for Value Sensitive Design in Applied Health AI: A Narrative Review. Yearb Med Inform 2024; 33:75-82. [PMID: 40199292] [PMCID: PMC12020519] [DOI: 10.1055/s-0044-1800723]
Abstract
OBJECTIVE Recent advances in the implementation of healthcare artificial intelligence (AI) have drawn attention toward design methods to address the impacts on workflow. Lesser known than human-centered design, Value Sensitive Design (VSD) is an established framework integrating values into conceptual, technical, and empirical investigations of technology. We sought to study the current state of the literature intersecting elements of VSD with practical applications of healthcare AI. METHODS Using a modified VSD framework attentive to AI-specific values, we conducted a narrative review informed by PRISMA guidelines and assessed VSD elements across design and implementation case studies. RESULTS Our search produced 819 articles that went through multiple rounds of review. Nine studies qualified for full-text review. Most of the studies focused on values for the individual or professional practice such as trust and autonomy. Attention to organizational (e.g., stewardship, employee well-being) and societal (e.g., equity, justice) values was lacking. Studies were primarily from the U.S. and Western Europe. CONCLUSION Future design studies might better incorporate components of VSD by considering larger domains, organizational and societal, in value identification and to bridge to design processes that are not just human-centered but value sensitive. The small number of heterogeneous studies underlines the importance of broader studies of elements of VSD to inform healthcare AI in practice.
Affiliation(s)
- Yufei Long
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN
- Laurie Novak
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN
- Colin G. Walsh
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN
8.
Singhal A, Neveditsin N, Tanveer H, Mago V. Toward Fairness, Accountability, Transparency, and Ethics in AI for Social Media and Health Care: Scoping Review. JMIR Med Inform 2024; 12:e50048. [PMID: 38568737] [PMCID: PMC11024755] [DOI: 10.2196/50048]
Abstract
BACKGROUND The use of social media for disseminating health care information has become increasingly prevalent, making the expanding role of artificial intelligence (AI) and machine learning in this process both significant and inevitable. This development raises numerous ethical concerns. This study explored the ethical use of AI and machine learning in the context of health care information on social media platforms (SMPs). It critically examined these technologies from the perspectives of fairness, accountability, transparency, and ethics (FATE), emphasizing computational and methodological approaches that ensure their responsible application. OBJECTIVE This study aims to identify, compare, and synthesize existing solutions that address the components of FATE in AI applications in health care on SMPs. Through an in-depth exploration of computational methods, approaches, and evaluation metrics used in various initiatives, we sought to elucidate the current state of the art and identify existing gaps. Furthermore, we assessed the strength of the evidence supporting each identified solution and discussed the implications of our findings for future research and practice. In doing so, we made a unique contribution to the field by highlighting areas that require further exploration and innovation. METHODS Our research methodology involved a comprehensive literature search across PubMed, Web of Science, and Google Scholar. We used strategic searches through specific filters to identify relevant research papers published since 2012 focusing on the intersection and union of different literature sets. The inclusion criteria were centered on studies that primarily addressed FATE in health care discussions on SMPs; those presenting empirical results; and those covering definitions, computational methods, approaches, and evaluation metrics. 
RESULTS Our findings present a nuanced breakdown of the FATE principles, aligning them where applicable with the American Medical Informatics Association ethical guidelines. By dividing these principles into dedicated sections, we detailed specific computational methods and conceptual approaches tailored to enforcing FATE in AI-driven health care on SMPs. This segmentation facilitated a deeper understanding of the intricate relationship among the FATE principles and highlighted the practical challenges encountered in their application. It underscored the pioneering contributions of our study to the discourse on ethical AI in health care on SMPs, emphasizing the complex interplay and the limitations faced in implementing these principles effectively. CONCLUSIONS Despite the existence of diverse approaches and metrics to address FATE issues in AI for health care on SMPs, challenges persist. The application of these approaches often intersects with additional ethical considerations, occasionally leading to conflicts. Our review highlights the lack of a unified, comprehensive solution for fully and effectively integrating FATE principles in this domain. This gap necessitates careful consideration of the ethical trade-offs involved in deploying existing methods and underscores the need for ongoing research.
Affiliation(s)
- Aditya Singhal
- Department of Computer Science, Lakehead University, Thunder Bay, ON, Canada
- Nikita Neveditsin
- Department of Mathematics and Computing Science, Saint Mary's University, Halifax, NS, Canada
- Hasnaat Tanveer
- Faculty of Mathematics, University of Waterloo, Waterloo, ON, Canada
- Vijay Mago
- School of Health Policy and Management, York University, Toronto, ON, Canada
9.
Schicktanz S, Welsch J, Schweda M, Hein A, Rieger JW, Kirste T. AI-assisted ethics? considerations of AI simulation for the ethical assessment and design of assistive technologies. Front Genet 2023; 14:1039839. [PMID: 37434952] [PMCID: PMC10331421] [DOI: 10.3389/fgene.2023.1039839]
Abstract
Current ethical debates on the use of artificial intelligence (AI) in healthcare treat AI as a product of technology in three ways. First, by assessing risks and potential benefits of currently developed AI-enabled products with ethical checklists; second, by proposing ex ante lists of ethical values seen as relevant for the design and development of assistive technology; and third, by promoting AI technology to use moral reasoning as part of the automation process. The dominance of these three perspectives in the discourse is demonstrated by a brief summary of the literature. Subsequently, we propose a fourth approach to AI, namely, as a methodological tool to assist ethical reflection. We provide a concept of an AI-simulation informed by three separate elements: 1) stochastic human behavior models based on behavioral data for simulating realistic settings, 2) qualitative empirical data on value statements regarding internal policy, and 3) visualization components that aid in understanding the impact of changes in these variables. The potential of this approach is to inform an interdisciplinary field about anticipated ethical challenges or ethical trade-offs in concrete settings and, hence, to spark a re-evaluation of design and implementation plans. This may be particularly useful for applications that deal with extremely complex values and behavior or with limitations on the communication resources of affected persons (e.g., in dementia care or in the care of persons with cognitive impairment). Simulation does not replace ethical reflection but does allow for detailed, context-sensitive analysis during the design process and prior to implementation. Finally, we discuss the inherently quantitative methods of analysis afforded by stochastic simulations as well as the potential for ethical discussions and how simulations with AI can improve traditional forms of thought experiments and future-oriented technology assessment.
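The first element of the proposed AI-simulation, a stochastic human behavior model, can be pictured as a simple Markov chain over activity states. The sketch below is purely illustrative: the states and transition probabilities are hypothetical placeholders, not drawn from the study's behavioral data.

```python
import random

# Minimal sketch of a stochastic human-behavior model as a Markov chain.
# States and transition probabilities are invented for illustration only.
TRANSITIONS = {
    "resting":      [("resting", 0.6), ("moving", 0.3), ("seeking_help", 0.1)],
    "moving":       [("resting", 0.4), ("moving", 0.5), ("seeking_help", 0.1)],
    "seeking_help": [("resting", 0.7), ("moving", 0.3)],
}

def simulate(start: str, steps: int, rng: random.Random) -> list:
    """Sample one behavior trajectory from the chain."""
    state, path = start, [start]
    for _ in range(steps):
        states, weights = zip(*TRANSITIONS[state])
        state = rng.choices(states, weights=weights)[0]
        path.append(state)
    return path

# Sampled trajectories like this one could feed a visualization component
# showing how often an ethically salient state (e.g., seeking_help) occurs.
trajectory = simulate("resting", 10, random.Random(42))
print(trajectory)
```

Replacing the invented probabilities with ones estimated from real behavioral data would be the step the authors describe as grounding the simulation in realistic settings.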
Affiliation(s)
- Silke Schicktanz
- University Medical Center Göttingen, Department for Medical Ethics and History of Medicine, Göttingen, Germany
- Hanse-Wissenschaftskolleg, Institute for Advanced Study, Delmenhorst, Germany
- Johannes Welsch
- University Medical Center Göttingen, Department for Medical Ethics and History of Medicine, Göttingen, Germany
- Mark Schweda
- University of Oldenburg, Department of Health Services Research, Division for Ethics in Medicine, Oldenburg, Germany
- Andreas Hein
- University of Oldenburg, Department of Health Services Research, Division Assistance Systems and Medical Device Technology, Oldenburg, Germany
- Jochem W. Rieger
- University of Oldenburg, Applied Neurocognitive Psychology Lab, Oldenburg, Germany
- Thomas Kirste
- University of Rostock, Institute for Visual and Analytic Computing, Rostock, Germany
10.
Cho YS, Hong PC. Applying Machine Learning to Healthcare Operations Management: CNN-Based Model for Malaria Diagnosis. Healthcare (Basel) 2023; 11:1779. [PMID: 37372897] [DOI: 10.3390/healthcare11121779]
Abstract
The purpose of this study is to explore how machine learning technologies can improve healthcare operations management. A machine learning-based model to solve a specific medical problem is developed to achieve this research purpose. Specifically, this study presents an AI solution for malaria infection diagnosis by applying the CNN (convolutional neural network) algorithm. Based on malaria microscopy image data from the NIH National Library of Medicine, a total of 24,958 images were used for deep learning training, and 2600 images were selected for final testing of the proposed diagnostic architecture. The empirical results indicate that the CNN diagnostic model correctly classified most malaria-infected and non-infected cases with minimal misclassification, with performance metrics of precision (0.97), recall (0.99), and f1-score (0.98) for uninfected cells, and precision (0.99), recall (0.97), and f1-score (0.98) for parasite cells. The CNN diagnostic solution rapidly processed a large number of cases with a highly reliable accuracy of 97.81%. The performance of this CNN model was further validated through k-fold cross-validation. These results suggest the advantage of machine learning-based diagnostic methods over conventional manual diagnostic methods in improving healthcare operational capabilities in terms of diagnostic quality, processing costs, lead time, and productivity. In addition, a machine learning diagnosis system is more likely to enhance the financial profitability of healthcare operations by reducing the risk of unnecessary medical disputes related to diagnostic errors. As an extension for future research, propositions with a research framework are presented to examine the impacts of machine learning on healthcare operations management for safety and quality of life in global communities.
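As a quick consistency check on the metrics quoted above: the reported per-class f1-scores follow from the quoted precision and recall via the harmonic-mean formula. The sketch below only re-derives the abstract's figures; it does not re-run or reproduce the model.

```python
# Consistency check: f1 = 2*P*R / (P + R), using the precision and
# recall values quoted in the abstract (not re-derived from data).

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Uninfected cells: precision 0.97, recall 0.99
print(round(f1(0.97, 0.99), 2))  # 0.98, matching the reported f1-score
# Parasitized cells: precision 0.99, recall 0.97
print(round(f1(0.99, 0.97), 2))  # 0.98, matching the reported f1-score
```

Both per-class f1-scores round to 0.98, so the reported metrics are internally consistent.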
Affiliation(s)
- Young Sik Cho
- College of Business, Jackson State University, Jackson, MS 39217, USA
- Paul C Hong
- John B. and Lillian E. Neff College of Business and Innovation, The University of Toledo, Toledo, OH 43606, USA
11.
Bleher H, Braun M. Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Sci Eng Ethics 2023; 29:21. [PMID: 37237246] [PMCID: PMC10220094] [DOI: 10.1007/s11948-023-00443-3]
Abstract
Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory-practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. Therefore, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each of these three approaches by asking how they understand and conceptualize theory and practice. We outline the conceptual strengths as well as their shortcomings: an embedded ethics approach is context-oriented but risks being biased by it; ethically aligned approaches are principles-oriented but lack justification theories to deal with trade-offs between competing principles; and the interdisciplinary Value Sensitive Design approach is based on stakeholder values but needs linkage to political, legal, or social governance aspects. Against this background, we develop a meta-framework for applied AI ethics conceptions with three dimensions. Based on critical theory, we suggest these dimensions as starting points to critically reflect on the conceptualization of theory and practice. We claim, first, that the inclusion of the dimension of affects and emotions in the ethical decision-making process stimulates reflections on vulnerabilities, experiences of disregard, and marginalization already within the AI development process. Second, we derive from our analysis that considering the dimension of justifying normative background theories provides both standards and criteria as well as guidance for prioritizing or evaluating competing principles in cases of conflict. Third, we argue that reflecting the governance dimension in ethical decision-making is an important factor to reveal power structures as well as to realize ethical AI and its application because this dimension seeks to combine social, legal, technical, and political concerns. This meta-framework can thus serve as a reflective tool for understanding, mapping, and assessing the theory-practice conceptualizations within AI ethics approaches to address and overcome their blind spots.
Affiliation(s)
- Hannah Bleher
- Chair of Social Ethics and Ethics of Technology, University of Bonn, Rabinstraße 8, 53111 Bonn, Germany
- Matthias Braun
- Chair of Social Ethics and Ethics of Technology, University of Bonn, Rabinstraße 8, 53111 Bonn, Germany
12
Liefgreen A, Weinstein N, Wachter S, Mittelstadt B. Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. AI & SOCIETY 2023; 39:2183-2199. [PMID: 39309255 PMCID: PMC11415467 DOI: 10.1007/s00146-023-01684-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 04/21/2023] [Indexed: 09/25/2024]
Abstract
Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values.
Affiliation(s)
- Alice Liefgreen
- Hillary Rodham Clinton School of Law, University of Swansea, Swansea, SA2 8PP UK
- School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights Road, Reading, RG6 6AL UK
- Netta Weinstein
- School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights Road, Reading, RG6 6AL UK
- Sandra Wachter
- Oxford Internet Institute, University of Oxford, 1 St. Giles, Oxford, OX1 3JS UK
- Brent Mittelstadt
- Oxford Internet Institute, University of Oxford, 1 St. Giles, Oxford, OX1 3JS UK
13
Cenci A, Ilskov SJ, Andersen NS, Chiarandini M. The participatory value-sensitive design (VSD) of a mHealth app targeting citizens with dementia in a Danish municipality. AI AND ETHICS 2023:1-27. [PMID: 37360145 PMCID: PMC10099010 DOI: 10.1007/s43681-023-00274-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 03/02/2023] [Indexed: 06/28/2023]
Abstract
Sammen Om Demens (Together for Dementia), a citizen science project developing and implementing an AI-based smartphone app targeting citizens with dementia, is presented as an illustrative case of ethical, applied AI entailing interdisciplinary collaboration and inclusive, participative scientific practices that engage citizens, end users, and potential recipients of technological-digital innovation. The participatory Value Sensitive Design of the smartphone app (a tracking device) is explored and explained across all of its phases (conceptual, empirical, and technical): from value construction and value elicitation to the delivery, after various iterations engaging both expert and non-expert stakeholders, of an embodied prototype built on and tailored to their values. The emphasis is on how moral dilemmas and value conflicts, often resulting from diverse people's needs or vested interests, have been resolved in practice to deliver, with moral imagination, a unique digital artifact that fulfills vital ethical-social desiderata without undermining technical efficiency. The result is an AI-based tool for the management and care of dementia that can be considered more ethical and democratic, since it meaningfully reflects diverse citizens' values and expectations for the app. In the conclusion, we suggest that the co-design methodology outlined in this study is suitable for generating more explainable and trustworthy AI, and that it helps advance towards technical-digital innovation with a human face.
Affiliation(s)
- Alessandra Cenci
- Department of Philosophy, Institute for the Study and Culture (IKV), University of Southern Denmark, Odense, Denmark
- Susanne Jakobsen Ilskov
- Department of Philosophy, Institute for the Study and Culture (IKV), University of Southern Denmark, Odense, Denmark
- Nicklas Sindlev Andersen
- Department of Mathematics and Data Science (IMADA), University of Southern Denmark, Odense, Denmark
- Marco Chiarandini
- Department of Mathematics and Data Science (IMADA), University of Southern Denmark, Odense, Denmark
14
Asin-Garcia E, Robaey Z, Kampers LFC, Martins Dos Santos VAP. Exploring the Impact of Tensions in Stakeholder Norms on Designing for Value Change: The Case of Biosafety in Industrial Biotechnology. SCIENCE AND ENGINEERING ETHICS 2023; 29:9. [PMID: 36882674 PMCID: PMC9992083 DOI: 10.1007/s11948-023-00432-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Accepted: 02/13/2023] [Indexed: 06/18/2023]
Abstract
Synthetic biologists design and engineer organisms for a better and more sustainable future. While the manifold prospects are encouraging, concerns about the uncertain risks of genome editing affect public opinion as well as local regulations. As a consequence, biosafety and associated concepts, such as the Safe-by-Design framework and genetic safeguard technologies, have gained prominence and occupy a central position in the conversation about genetically modified organisms. Yet, as regulatory interest and academic research in genetic safeguard technologies advance, implementation in industrial biotechnology, a sector that is already employing engineered microorganisms, lags behind. The main goal of this work is to explore the utilization of genetic safeguard technologies for designing biosafety in industrial biotechnology. Based on our results, we posit that biosafety is a case of a changing value, by means of further specification of how to realize biosafety. Our investigation is inspired by the Value Sensitive Design framework, which investigates scientific and technological choices in their appropriate social context. Our findings discuss stakeholder norms for biosafety, reasonings about genetic safeguards, and how these impact the practice of designing for biosafety. We show that tensions between stakeholders occur at the level of norms, and that prior stakeholder alignment is crucial for value specification to happen in practice. Finally, we elaborate on different reasonings about genetic safeguards for biosafety and conclude that, in the absence of a common multi-stakeholder effort, the differences in informal biosafety norms and the disparity in biosafety thinking could end up producing design requirements for compliance instead of for safety.
Affiliation(s)
- Enrique Asin-Garcia
- Laboratory of Systems and Synthetic Biology, Wageningen University & Research, 6708, WE, Wageningen, The Netherlands.
- Bioprocess Engineering Group, Wageningen University & Research, 6700, AA, Wageningen, The Netherlands.
- Zoë Robaey
- Department of Social Sciences, Wageningen University & Research, 6708, WE, Wageningen, The Netherlands
- Linde F C Kampers
- Laboratory of Systems and Synthetic Biology, Wageningen University & Research, 6708, WE, Wageningen, The Netherlands
- Vitor A P Martins Dos Santos
- Laboratory of Systems and Synthetic Biology, Wageningen University & Research, 6708, WE, Wageningen, The Netherlands
- Bioprocess Engineering Group, Wageningen University & Research, 6700, AA, Wageningen, The Netherlands
- LifeGlimmer GmbH, Berlin, Germany
15
Dhirani LL, Mukhtiar N, Chowdhry BS, Newe T. Ethical Dilemmas and Privacy Issues in Emerging Technologies: A Review. SENSORS (BASEL, SWITZERLAND) 2023; 23:1151. [PMID: 36772190 PMCID: PMC9921682 DOI: 10.3390/s23031151] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 01/13/2023] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
Industry 5.0 is projected to be an exemplary improvement in digital transformation, allowing for mass customization and production efficiencies by using emerging technologies such as universal machines, autonomous and self-driving robots, self-healing networks, and cloud data analytics to supersede the limitations of Industry 4.0. To successfully pave the way for the acceptance of these technologies, they must be bound by, and adhere to, ethical and regulatory standards. Presently, with ethical standards still under development and each region following a different set of standards and policies, the complexity of being compliant increases. Vague and inconsistent ethical guidelines leave potential gray areas, leading to privacy, ethical, and data breaches that must be resolved. This paper examines the ethical dimensions and dilemmas associated with emerging technologies and provides potential methods to mitigate their legal and regulatory issues.
Affiliation(s)
- Lubna Luxmi Dhirani
- Department of Electronic and Computer Engineering, University of Limerick, V94 T9PX Limerick, Ireland
- Confirm—SFI Smart Manufacturing Centre, V94 C928 Limerick, Ireland
- Noorain Mukhtiar
- Department of Electronic Engineering, Mehran University of Engineering & Technology, Jamshoro 76062, Pakistan
- Bhawani Shankar Chowdhry
- Department of Electronic Engineering, Mehran University of Engineering & Technology, Jamshoro 76062, Pakistan
- Thomas Newe
- Department of Electronic and Computer Engineering, University of Limerick, V94 T9PX Limerick, Ireland
- Confirm—SFI Smart Manufacturing Centre, V94 C928 Limerick, Ireland
16
The tech industry hijacking of the AI ethics research agenda and why we should reclaim it. DISCOVER ARTIFICIAL INTELLIGENCE 2022. [DOI: 10.1007/s44163-022-00043-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
This paper reflects on the tech industry's colonization of the AI ethics research field and addresses conflicts of interest in public policymaking concerning AI. The AI ethics research community faces two intertwined challenges. In the first place, we have a tech industry heavily influencing the AI ethics research agenda. Secondly, cleaning up after the tech industry has meant turning to value-driven design methods to bring ethics to AI design. But by framing research questions as relevant to a technical practice, we have facilitated the technological solutionism behind the tech industry's business model. Therefore, this paper takes the first steps to reshape the AI ethics research agenda by suggesting a move toward an emancipatory framework that brings politics to design while, at the same time, bearing in mind that AI is not to be treated as an inevitability. As a research community, we must focus on the repressive power dynamics exacerbated by AI and address the challenges facing vulnerable groups who are seldom heard, despite being the ones most negatively affected by AI initiatives.
17
Occhipinti C, Carnevale A, Briguglio L, Iannone A, Bisconti P. SAT: a methodology to assess the social acceptance of innovative AI-based technologies. JOURNAL OF INFORMATION COMMUNICATION & ETHICS IN SOCIETY 2022. [DOI: 10.1108/jices-09-2021-0095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Purpose
The purpose of this paper is to present the conceptual model of an innovative methodology (SAT) to assess the social acceptance of technology, especially focusing on artificial intelligence (AI)-based technology.
Design/methodology/approach
After a review of the literature, this paper presents the main lines by which SAT stands out from current methods, namely, a four-bubble approach and a mix of qualitative and quantitative techniques that offer assessments that look at technology as a socio-technical system. Each bubble determines the social variability of a cluster of values: User-Experience Acceptance, Social Disruptiveness, Value Impact and Trust.
Findings
The methodology is still in development, requiring further refinement, specification, and validation. Accordingly, the findings of this paper belong to the realm of research discussion: they highlight the importance of preventively assessing and forecasting the acceptance of technology and of building design strategies that boost sustainable and ethical technology adoption.
Social implications
Once the SAT method is validated, it could constitute a useful tool, with societal implications, for helping users, markets, and institutions appraise and determine the co-implications of technology and socio-cultural contexts.
Originality/value
New AI applications flood today's users and markets, often without a clear understanding of their risks and impacts. In the European context, regulations (the EU AI Act) and rules (the EU Ethics Guidelines for Trustworthy AI) try to fill this normative gap. The SAT method seeks to integrate the risk-based assessment of AI with an assessment of the perceptive-psychological and socio-behavioural aspects of its social acceptability.
18
Brayford KM. Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01586-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
19
From Pluralistic Normative Principles to Autonomous-Agent Rules. Minds Mach (Dordr) 2022. [DOI: 10.1007/s11023-022-09614-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules. This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context premised on a range of underlying relevant normative principles. In the translation and reduction of normative principles to SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively-relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant.
20
Kendal E. Ethical, Legal and Social Implications of Emerging Technology (ELSIET) Symposium. JOURNAL OF BIOETHICAL INQUIRY 2022; 19:363-370. [PMID: 35749026 PMCID: PMC9243845 DOI: 10.1007/s11673-022-10197-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Accepted: 05/17/2022] [Indexed: 06/15/2023]
Affiliation(s)
- Evie Kendal
- School of Health Sciences and Biostatistics, Swinburne University of Technology, John St, Hawthorn, Victoria, Australia.
21
Afroogh S, Esmalian A, Mostafavi A, Akbari A, Rasoulkhani K, Esmaeili S, Hajiramezanali E. Tracing app technology: an ethical review in the COVID-19 era and directions for post-COVID-19. ETHICS AND INFORMATION TECHNOLOGY 2022; 24:30. [PMID: 35915595 PMCID: PMC9330978 DOI: 10.1007/s10676-022-09659-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 07/06/2022] [Indexed: 06/15/2023]
Abstract
We conducted a systematic literature review on the ethical considerations of the use of contact tracing app technology, which was extensively implemented during the COVID-19 pandemic. The rapid and extensive use of this technology during the pandemic, while benefiting public well-being by providing information about people's mobility and movements to control the spread of the virus, raised several ethical concerns for the post-COVID-19 era. To investigate these concerns and provide direction for future events, we analyzed the current ethical frameworks, research, and case studies about the ethical usage of tracing app technology. The results suggest there are seven essential considerations for the ethical use of contact tracing technology: privacy, security, acceptability, government surveillance, transparency, justice, and voluntariness. In this paper, we explain and discuss these considerations and why they are needed for the ethical usage of this technology. The findings also highlight the importance of developing integrated guidelines and frameworks for the implementation of such technology in the post-COVID-19 world. Supplementary Information: The online version contains supplementary material available at 10.1007/s10676-022-09659-6.
Affiliation(s)
- Saleh Afroogh
- Department of Philosophy, The State University of New York at Albany, Albany, NY 12203 USA
- Amir Esmalian
- UrbanResilience.AI Lab, Zachry Department of Civil and Environmental Engineering, Texas A&M University, College Station, TX 77840 USA
- Ali Mostafavi
- UrbanResilience.AI Lab, Zachry Department of Civil and Environmental Engineering, Texas A&M University, College Station, TX 77840 USA
- Ali Akbari
- Department of Biomedical Engineering, Texas A&M University, College Station, TX 77840 USA
- Shahriar Esmaeili
- Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843 USA
- Ehsan Hajiramezanali
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX USA
22
AI for the public. How public interest theory shifts the discourse on AI. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01480-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public interest AI’. The framework consists of (1) public justification for the AI system, (2) an emphasis on equality, (3) a deliberation/co-design process, (4) technical safeguards, and (5) openness to validation. This framework is then applied to two case studies, namely SyRI, the Dutch welfare fraud detection project, and UNICEF's Project Connect, which maps schools worldwide. Through the analysis of these cases, the authors conclude that public interest is a helpful and practical guide for the development and governance of AI for the people.
23
Giovanola B, Tiribelli S. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI & SOCIETY 2022; 38:549-563. [PMID: 35615443 PMCID: PMC9123626 DOI: 10.1007/s00146-022-01455-6] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Accepted: 04/13/2022] [Indexed: 01/09/2023]
Abstract
The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services, and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), very important and urgent. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently explored. Our paper aims to fill this gap and address the AI ethics principle of fairness from a conceptual standpoint, drawing insights from accounts of fairness elaborated in moral philosophy and using them to conceptualise fairness as an ethical value and to redefine fairness in HMLA accordingly. Following a first section that clarifies the background, methodology, and structure of the paper, in the second section we provide an overview of the discussion of the AI ethics principle of fairness in HMLA and show that the concept of fairness underlying this debate is framed in purely distributive terms and overlaps with non-discrimination, which is defined in turn as the absence of biases. After showing that this framing is inadequate, in the third section we pursue an ethical inquiry into the concept of fairness and argue that fairness ought to be conceived of as an ethical value. Following a clarification of the relationship between fairness and non-discrimination, we show that the two do not overlap and that fairness requires much more than just non-discrimination. Moreover, we highlight that fairness has not only a distributive but also a socio-relational dimension. Finally, we pinpoint the constitutive components of fairness. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for individual persons. In the fourth section, we analyse the implications of our conceptual redefinition of fairness as an ethical value for the discussion of fairness in HMLA. Here, we claim that fairness requires more than non-discrimination and the absence of biases, and more than just distribution; it needs to ensure that HMLA respect persons both as persons and as particular individuals. Finally, in the fifth section, we sketch some broader implications and show how our inquiry can contribute to making HMLA and, more generally, AI promote the social good and a fairer society.
Affiliation(s)
- Benedetta Giovanola
- Department of Political Sciences, Communication, and International Relations, University of Macerata, Macerata, 62100 Italy
- Department of Philosophy, Tufts University, 222 Miner Hall, Medford, MA 02155 USA
- Simona Tiribelli
- Department of Political Sciences, Communication, and International Relations, University of Macerata, Macerata, 62100 Italy
- Present Address: Institute for Technology and Global Health, PathCheck Foundation, 955 Massachusetts Ave, Cambridge, MA 02139 USA
24
The ethics of algorithms from the perspective of the cultural history of consciousness: first look. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01475-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
25
Theunissen M, Browning J. Putting explainable AI in context: institutional explanations for medical AI. ETHICS AND INFORMATION TECHNOLOGY 2022; 24:23. [PMID: 35539962 PMCID: PMC9073821 DOI: 10.1007/s10676-022-09649-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 04/11/2022] [Indexed: 06/14/2023]
Abstract
There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations, and it is unclear whether either addresses the epistemic worries of the medical professionals using these systems. We argue these systems do require an explanation, but an institutional explanation. These types of explanations provide the reasons why the medical professional should rely on the system in practice; that is, they focus on addressing the epistemic concerns of those using the system in specific contexts and on specific occasions. But ensuring that these institutional explanations are fit for purpose means ensuring the institutions designing and deploying these systems are transparent about the assumptions baked into the system. This requires coordination with experts and end-users concerning how it will function in the field, the metrics used to evaluate its accuracy, and the procedures for auditing the system to prevent biases and failures from going unaddressed. We contend this broader explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to the medical professional, making it possible for them to rely on these systems as effective and useful tools in their practices.
Affiliation(s)
- Mark Theunissen
- Department of Values, Technology and Innovation, School of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
26
Capasso M, Umbrello S. Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants. MEDICINE, HEALTH CARE, AND PHILOSOPHY 2022; 25:11-22. [PMID: 34822096 PMCID: PMC8613457 DOI: 10.1007/s11019-021-10062-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 11/16/2021] [Indexed: 11/25/2022]
Abstract
Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare; rather, they are existing, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge from how these systems nudge users into making decisions and changing behaviours. This article discusses how these AI-driven systems pose particular ethical challenges with regard to nudging. To confront these issues, the value sensitive design (VSD) approach is adopted as a principled methodology that designers can follow to design these systems so that they avoid harm and contribute to the social good. The AI for Social Good (AI4SG) factors are adopted as the norms constraining maleficence. In contrast, higher-order values specific to AI, such as those from the EU High-Level Expert Group on AI and the United Nations Sustainable Development Goals, are adopted as the values to be promoted as much as possible in design. The use case of Amazon Alexa's Healthcare Skills is used to illustrate this design approach. It provides an exemplar of how designers and engineers can begin to orient their design programs for these technologies towards the social good.
Affiliation(s)
- Marianna Capasso
- Scuola Superiore Sant’Anna, Piazza Martiri della Libertà 33, 56127 Pisa, Italy
- Steven Umbrello
- Department of Values, Technology, & Innovation, School of Technology, Policy & Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands
27
Maas J. Machine learning and power relations. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01400-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
AbstractThere has been an increased focus within the AI ethics literature on questions of power, reflected in the ideal of accountability supported by many Responsible AI guidelines. While this recent debate points towards the power asymmetry between those who shape AI systems and those affected by them, the literature lacks normative grounding and misses conceptual clarity on how these power dynamics take shape. In this paper, I develop a workable conceptualization of said power dynamics according to Cristiano Castelfranchi’s conceptual framework of power and argue that end-users depend on a system’s developers and users, because end-users rely on these systems to satisfy their goals, constituting a power asymmetry between developers, users and end-users. I ground my analysis in the neo-republican moral wrong of domination, drawing attention to legitimacy concerns of the power-dependence relation following from the current lack of accountability mechanisms. I illustrate my claims on the basis of a risk-prediction machine learning system, and propose institutional (external auditing) and project-specific solutions (increase contestability through design-for-values approaches) to mitigate domination.
28
Ericsson D, Stasinski R, Stenström E. Body, mind, and soul principles for designing management education: an ethnography from the future. CULTURE AND ORGANIZATION 2022. [DOI: 10.1080/14759551.2022.2028148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Daniel Ericsson
- School of Business and Economics, Linnaeus University, Växjö, Sweden
- Robert Stasinski
- Department of Culture and Aesthetics, Stockholm University, Stockholm, Sweden
- Emma Stenström
- Center for Arts, Business & Culture (ABC), Stockholm School of Economics, Stockholm, Sweden
29
Digital Platforms for the Common Good: Social Innovation for Active Citizenship and ESG. SUSTAINABILITY 2022. [DOI: 10.3390/su14020639] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
The platform business model has attracted significant attention in business research and practice. However, much of the existing literature studies commercial platforms that seek to maximize profit. In contrast, we focus on a platform for volunteers that aims to maximize social impact. This business model is called a platform for the common good. The article proposes a Causal Loop Diagram (CLD) model that explains how a platform for the common good creates value. Our model maps the key strategic feedback loops that constitute the core structure of the platform and explains its growth and performance over time. We show that multiple types of network effects create interlocking, reinforcing feedback loops. Overall, the article contributes towards a dynamic theory of platforms for the common good. Moreover, it provides insights both for social entrepreneurs who seek to build, understand, and optimize platforms that maximize social value and for managers of companies that seek to participate in such platforms. Social entrepreneurs should seek to leverage the critical feedback loops of their platform.
|
30
|
Wedin A, Wikman–Svahn P. A Value Sensitive Scenario Planning Method for Adaptation to Uncertain Future Sea Level Rise. SCIENCE AND ENGINEERING ETHICS 2021; 27:69. [PMID: 34787726 PMCID: PMC8599313 DOI: 10.1007/s11948-021-00347-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 10/11/2021] [Indexed: 06/13/2023]
Abstract
Value sensitive design (VSD) aims at creating better technology based on social and ethical values. However, VSD has not been applied to long-term and uncertain future developments, such as societal planning for climate change. This paper describes a new method that combines elements from VSD with scenario planning. The method was developed for and applied to a case study of adaptation to sea level rise (SLR) in southern Sweden in a series of workshops. The participants of the workshops found that the method provided a framework for discussing long-term planning, enabled identification of essential values, challenged established planning practices, helped find creative solutions, and served as a reminder that we do not know what will happen in the future. Finally, we reflect on the limitations of the method and suggest further research on how it can be improved for value sensitive design of adaptation measures to manage uncertain future sea level rise.
Affiliation(s)
- Anna Wedin
- Division of Philosophy, KTH Royal Institute of Technology, Stockholm, Sweden
- Per Wikman–Svahn
- Division of Philosophy, KTH Royal Institute of Technology, Stockholm, Sweden
|
31
|
Abstract
We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (2019) essay nor Formosa and Ryan’s (2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard against which to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, we find that their call for a more nuanced and context-dependent approach, akin to neo-Aristotelian practical wisdom, becomes expedient.
|
32
|
Kazim E, Koshiyama AS, Hilliard A, Polle R. Systematizing Audit in Algorithmic Recruitment. J Intell 2021; 9:46. [PMID: 34564294 PMCID: PMC8482073 DOI: 10.3390/jintelligence9030046] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 09/14/2021] [Accepted: 09/14/2021] [Indexed: 11/18/2022] Open
Abstract
Business psychologists study and assess relevant individual differences, such as intelligence and personality, in the context of work. Such studies have informed the development of artificial intelligence (AI) systems designed to measure individual differences. Companies have capitalized on this by developing AI-driven recruitment solutions that include aggregation of suitable candidates (Hiretual), interviewing through a chatbot (Paradox), video interview assessment (MyInterview), and CV analysis (Textio), as well as estimation of psychometric characteristics through image-based (Traitify) and game-based assessments (HireVue) and video interviews (Cammio). However, driven by the concern that such high-impact technology must be used responsibly, because the algorithms behind these tools can produce unfair hiring outcomes, there is an active effort towards providing mechanisms of governance for such automation. In this article, we apply a systematic algorithm audit framework in the context of the ethically critical industry of algorithmic recruitment, exploring how audit assessments of AI-driven systems can be used to assure that such systems are deployed responsibly, in a fair and well-governed manner. We outline sources of risk in the use of algorithmic hiring tools, suggest the most appropriate points for audits to take place, recommend ways to measure bias in algorithms, and discuss the transparency of algorithms.
Affiliation(s)
- Emre Kazim
- Department of Computer Science, University College London, Gower St, London WC1E 6EA, UK
- Adriano Soares Koshiyama
- Department of Computer Science, University College London, Gower St, London WC1E 6EA, UK
- Airlie Hilliard
- Institute of Management Studies, Goldsmiths, University of London, New Cross, London SE14 6NW, UK
- Roseline Polle
- Department of Computer Science, University College London, Gower St, London WC1E 6EA, UK
|
33
|
Nyrup R. From General Principles to Procedural Values: Responsible Digital Health Meets Public Health Ethics. Front Digit Health 2021; 3:690417. [PMID: 34713166 PMCID: PMC8521828 DOI: 10.3389/fdgth.2021.690417] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 06/14/2021] [Indexed: 11/13/2022] Open
Abstract
Most existing work in digital ethics is modeled on the "principlist" approach to medical ethics, seeking to articulate a small set of general principles to guide ethical decision-making. Critics have highlighted several limitations of such principles, including (1) that they mask ethical disagreements between and within stakeholder communities, and (2) that they provide little guidance for how to resolve trade-offs between different values. This paper argues that efforts to develop responsible digital health practices could benefit from paying closer attention to a different branch of medical ethics, namely public health ethics. In particular, I argue that the influential "accountability for reasonableness" (A4R) approach to public health ethics can help overcome some of the limitations of existing digital ethics principles. A4R seeks to resolve trade-offs through decision-procedures designed according to certain shared procedural values. This allows stakeholders to recognize decisions reached through these procedures as legitimate, despite their underlying disagreements. I discuss the prospects for adapting A4R to the context of responsible digital health and suggest questions for further research.
Affiliation(s)
- Rune Nyrup
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
|
34
|
Umbrello S, Capasso M, Balistreri M, Pirni A, Merenda F. Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots. Minds Mach (Dordr) 2021; 31:395-419. [PMID: 34092922 PMCID: PMC8165341 DOI: 10.1007/s11023-021-09561-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 05/24/2021] [Indexed: 11/29/2022]
Abstract
Healthcare is becoming increasingly automated with the development and deployment of care robots. Care robots offer many benefits, but they also pose challenging ethical issues. This paper takes care robots for the elderly as its subject of analysis, building on previous literature on the ethics and design of care robots. Using the value sensitive design (VSD) approach to technology design, this paper extends its application to care robots by integrating the values of care, values specific to AI, and higher-scale values such as the United Nations Sustainable Development Goals (SDGs). The ethical issues specific to care robots for the elderly are discussed at length, alongside examples of specific design requirements that work to ameliorate these ethical concerns.
Affiliation(s)
- Steven Umbrello
- Institute for Ethics and Emerging Technologies, University of Turin, Via Sant'Ottavio, 20, 10124 Turin, TO Italy
- Marianna Capasso
- Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy
- Alberto Pirni
- Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy
- Federica Merenda
- Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy
|