1. Salloch S, Eriksen A. What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. The American Journal of Bioethics 2024:1-12. PMID: 38767971. DOI: 10.1080/15265161.2024.2353800.
Abstract
Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the role of the "human in the loop" and to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.
2. Muralidharan A, Savulescu J, Schaefer GO. AI and the need for justification (to the patient). Ethics and Information Technology 2024; 26:16. PMID: 38450175. PMCID: PMC10912120. DOI: 10.1007/s10676-024-09754-w.
Abstract
This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.
Affiliation(s)
- Anantharaman Muralidharan: Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Julian Savulescu: Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Murdoch Children's Research Institute, Melbourne, VIC, Australia; Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK
- G. Owen Schaefer: Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
3. Schmidt KW, Lechner F. [ChatGPT: aid to medical ethics decision making?]. Die Anaesthesiologie 2024; 73:186-192. PMID: 38315183. DOI: 10.1007/s00101-024-01385-6.
Abstract
BACKGROUND: Physicians have to make countless decisions every day. The medical, ethical, and legal aspects of these decisions are often intertwined and subject to change over time. Involving an ethics committee or arranging an ethics consultation are examples of potential aids to decision making. Whether and how artificial intelligence (AI), and in particular the large language model (LLM) of the company OpenAI (San Francisco, CA, USA) known as ChatGPT, can also support ethical decision making is increasingly a matter of controversial debate.
MATERIAL AND METHODS: Based on a case example in which a female physician is confronted with ethical and legal questions and presents them to ChatGPT for answers, initial indications of the model's strengths and weaknesses are identified.
CONCLUSION: Given the rapid pace of technical development and access to ever-increasing quantities of data, such use should be closely observed and evaluated.
Affiliation(s)
- Kurt W Schmidt: Zentrum für Ethik in der Medizin, Agaplesion Markus Krankenhaus, Wilhelm-Epstein-Str. 4, 60431 Frankfurt am Main, Germany
- Fabian Lechner: Institut für Künstliche Intelligenz, Universitätsklinikum Gießen und Marburg, Marburg, Germany
4. Apostolopoulos ID, Papandrianos NI, Papathanasiou ND, Papageorgiou EI. Fuzzy Cognitive Map Applications in Medicine over the Last Two Decades: A Review Study. Bioengineering (Basel) 2024; 11:139. PMID: 38391626. PMCID: PMC10886348. DOI: 10.3390/bioengineering11020139.
Abstract
Fuzzy Cognitive Maps (FCMs) have become an invaluable tool for healthcare providers because they can capture intricate associations among variables and generate precise predictions. FCMs have demonstrated their utility in diverse medical applications, from disease diagnosis to treatment planning and prognosis prediction. Their ability to model complex relationships between symptoms, biomarkers, risk factors, and treatments has enabled healthcare providers to make informed decisions, leading to better patient outcomes. This review article provides a thorough synopsis of using FCMs within the medical domain. A systematic examination of pertinent literature spanning the last two decades forms the basis of this overview, specifically delineating the diverse applications of FCMs in medical realms, including decision-making, diagnosis, prognosis, treatment optimisation, risk assessment, and pharmacovigilance. The limitations inherent in FCMs are also scrutinised, and avenues for potential future research and application are explored.
Affiliation(s)
- Nikolaos I Papandrianos: Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece
- Elpiniki I Papageorgiou: Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece
5. Mohammadi F, Masoumi SZ, Khazaei S, Hosseiny SMM. Psychometrics assessment of ethical decision-making around end-of-life care scale for adolescents in the final stage of life. Front Pediatr 2024; 11:1266929. PMID: 38318315. PMCID: PMC10839055. DOI: 10.3389/fped.2023.1266929.
Abstract
Introduction: Healthcare professionals have a critical role in ethical decision-making around end-of-life care. Properly evaluating the ethical decision-making of healthcare professionals in end-of-life care requires reliable, tailored, and comprehensive assessments. The current study aimed to translate the ethical decision-making in end-of-life care scale into Persian and assess its psychometric properties for Iranian adolescents in the final stages of life.
Methods: This was a methodological, multicenter study. A total of 310 healthcare professionals who treat or care for adolescents at the end of life were selected from 7 cities in Iran. The original version of the end-of-life care decision-making scale was translated into Persian using the forward-backward translation method, and its psychometric properties were evaluated using COSMIN criteria.
Results: Exploratory factor analysis revealed that the factor loadings of the items ranged from 0.68 to 0.89, all statistically significant. Three factors had eigenvalues greater than 1, together accounting for 81.64% of the total variance. Confirmatory factor analysis indicated a proper goodness of fit for the hypothesized factor structure. The internal consistency reliability of the tool was assessed in terms of its homogeneity, yielding a Cronbach's alpha coefficient of 0.93.
Conclusion: The Persian version of the End-of-Life Care Decision-Making Scale demonstrates satisfactory validity and reliability among healthcare professionals working with adolescents in the final stages of life. Nursing managers can therefore use this tool to measure and evaluate ethical decision-making in end-of-life care for adolescents and to identify the most appropriate strategies, including educational interventions, to improve such decision-making where necessary.
Affiliation(s)
- Fateme Mohammadi: School of Nursing and Midwifery, Chronic Diseases (Home Care) Research Center and Autism Spectrum Disorders Research Center, Department of Nursing, Hamadan University of Medical Sciences, Hamadan, Iran
- Seyedeh Zahra Masoumi: Department of Midwifery, School of Nursing and Midwifery, Mother and Child Care Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
- Salman Khazaei: Health Sciences Research Center, Health Sciences & Technology Research Institute, Hamadan University of Medical Sciences, Hamadan, Iran
6. Drezga-Kleiminger M, Demaree-Cotton J, Koplin J, Savulescu J, Wilkinson D. Should AI allocate livers for transplant? Public attitudes and ethical considerations. BMC Med Ethics 2023; 24:102. PMID: 38012660. PMCID: PMC10683249. DOI: 10.1186/s12910-023-00983-0.
Abstract
BACKGROUND: Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation; however, the ethics of this remains unexplored and the views of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented.
METHODS: We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey of online responses from 172 UK laypeople, recruited through Prolific Academic.
FINDINGS: Most participants found AI in liver allocation acceptable (69.2%) and would not be less likely to donate their organs if AI were used in allocation (72.7%). Respondents thought AI was more likely to be consistent and less biased than humans, although they were concerned about the "dehumanisation of healthcare" and whether AI could consider important nuances in allocation decisions. Participants valued accuracy, impartiality, and consistency in a decision-maker more than interpretability and empathy. Respondents were split on whether AI should be trained on previous decisions or programmed with specific objectives. Whether allocation decisions were made by transplant committee or AI, participants valued consideration of urgency, survival likelihood, life years gained, age, future medication compliance, quality of life, and future and past alcohol use. By contrast, the majority thought the following factors were not relevant to prioritisation: past crime, future crime, future societal contribution, social disadvantage, and gender.
CONCLUSIONS: There are good reasons to use AI in liver allocation, and our sample of participants appeared to support its use. If confirmed, this support would give democratic legitimacy to the use of AI in this context and reduce the risk that donation rates could be affected negatively. Our findings on specific ethical concerns also identify the expectations and reservations laypeople have regarding AI in this area, which can inform how AI in liver allocation could best be implemented.
Affiliation(s)
- Max Drezga-Kleiminger: Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia; Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Joanna Demaree-Cotton: Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Julian Koplin: Monash Bioethics Centre, Monash University, Melbourne, Australia
- Julian Savulescu: Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK; Murdoch Children's Research Institute, Melbourne, Australia; Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Dominic Wilkinson: Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia; Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK; Murdoch Children's Research Institute, Melbourne, Australia; Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; John Radcliffe Hospital, Oxford, UK
7. Schmidt KW, Lechner F. [ChatGPT: aid to medical ethics decision making?]. Innere Medizin (Heidelberg, Germany) 2023; 64:1065-1071. PMID: 37821756. DOI: 10.1007/s00108-023-01601-2.
Abstract
BACKGROUND: Physicians have to make countless decisions every day. The medical, ethical, and legal aspects of these decisions are often intertwined and subject to change over time. Involving an ethics committee or arranging an ethics consultation are examples of potential aids to decision making. Whether and how artificial intelligence (AI), and in particular the large language model (LLM) of the company OpenAI (San Francisco, CA, USA) known as ChatGPT, can also support ethical decision making is increasingly a matter of controversial debate.
MATERIAL AND METHODS: Based on a case example in which a female physician is confronted with ethical and legal questions and presents them to ChatGPT for answers, initial indications of the model's strengths and weaknesses are identified.
CONCLUSION: Given the rapid pace of technical development and access to ever-increasing quantities of data, such use should be closely observed and evaluated.
Affiliation(s)
- Kurt W Schmidt: Zentrum für Ethik in der Medizin, Agaplesion Markus Krankenhaus, Wilhelm-Epstein-Str. 4, 60431 Frankfurt am Main, Germany
- Fabian Lechner: Institut für Künstliche Intelligenz, Universitätsklinikum Gießen und Marburg, Marburg, Germany
8. Meier LJ. ChatGPT's Responses to Dilemmas in Medical Ethics: The Devil is in the Details. The American Journal of Bioethics 2023; 23:63-65. PMID: 37812097. DOI: 10.1080/15265161.2023.2250290.
9. Benzinger L, Ursin F, Balke WT, Kacprowski T, Salloch S. Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons. BMC Med Ethics 2023; 24:48. PMID: 37415172. PMCID: PMC10327319. DOI: 10.1186/s12910-023-00929-6.
Abstract
BACKGROUND: Healthcare providers have to make ethically complex clinical decisions, which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use.
METHODS: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting publications were screened by title and abstract according to defined inclusion and exclusion criteria, yielding 44 papers whose full texts were analysed using the Kuckartz method of qualitative text analysis.
RESULTS: Artificial Intelligence might increase patient autonomy by improving the accuracy of predictions and allowing patients to receive their preferred treatment. It is thought to increase beneficence by providing reliable information and thereby supporting surrogate decision-making. Some authors fear that reducing ethical decision-making to statistical correlations may limit autonomy. Others argue that AI may not be able to replicate the process of ethical deliberation because it lacks human characteristics. Concerns have been raised about issues of justice, as AI may replicate existing biases in the decision-making process.
CONCLUSIONS: The prospective benefits of using AI in clinical ethical decision-making are manifold, but its development and use should be undertaken carefully to avoid ethical pitfalls. Several issues that are central to the discussion of Clinical Decision Support Systems, such as justice, explicability, and human-machine interaction, have so far been neglected in the debate on AI for clinical ethics.
TRIAL REGISTRATION: This review is registered at Open Science Framework (https://osf.io/wvcs9).
Affiliation(s)
- Lasse Benzinger: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Frank Ursin: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Wolf-Tilo Balke: Institute for Information Systems, TU Braunschweig, Braunschweig, Germany
- Tim Kacprowski: Division Data Science in Biomedicine, Peter L. Reichertz Institute for Medical Informatics of Technische Universität Braunschweig and Hannover Medical School, Braunschweig, Germany; Braunschweig Integrated Centre for Systems Biology (BRICS), TU Braunschweig, Braunschweig, Germany
- Sabine Salloch: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
10. Wallner J. Healthcare Ethics Consultation in Austria: Joining the International Path of Professionalization. The Journal of Clinical Ethics 2023; 34:69-78. PMID: 36940354. DOI: 10.1086/723427.
Abstract
Healthcare ethics consultation has been developed, practiced, and analyzed internationally. However, only a few professional standards comparable to those in other areas of healthcare have evolved globally in this field. This article cannot remedy that situation on its own; it does, however, contribute to the ongoing debate on professionalization by presenting experiences with ethics consultation in Austria. After exploring its contexts and providing an overview of one of its primary ethics programs, the article analyzes the underlying assumptions of "ethics consultation" as an essential effort on the path to professionalizing the field.
11. Meier LJ. Systemising triage: COVID-19 guidelines and their underlying theories of distributive justice. Medicine, Health Care and Philosophy 2022; 25:703-714. PMID: 35796935. PMCID: PMC9261143. DOI: 10.1007/s11019-022-10101-3.
Abstract
The COVID-19 pandemic has been overwhelming public health-care systems around the world. With demand exceeding the availability of medical resources in several regions, hospitals have been forced to invoke triage. To ensure that this difficult task proceeds in a fair and organised manner, governments scrambled experts to draft triage guidelines under enormous time pressure. Although there are similarities between the documents, they vary considerably in how much weight their respective authors place on the different criteria that they propose. Since most of the recommendations do not come with ethical justifications, analysing them requires tracing these criteria back to their underlying theories of distributive justice. In the literature, COVID-19 triage has been portrayed as a value conflict solely between utilitarian and egalitarian elements. While these two accounts are indeed the main antipodes, I shall show that in fact all four classic theories of distributive justice are involved: utilitarianism, egalitarianism, libertarianism, and communitarianism. Detecting these in the documents and classifying the suggested criteria accordingly enables one to understand the balancing between the different approaches to distributive justice, which is crucial both for managing the current pandemic and for preparing for the next global health crisis.
Affiliation(s)
- Lukas J Meier: Churchill College, University of Cambridge, Storey's Way, Cambridge, CB3 0DS, UK
12. Meier LJ, Hein A, Diepold K, Buyx A. Clinical Ethics - To Compute, or Not to Compute? The American Journal of Bioethics 2022; 22:W1-W4. PMID: 36205553. DOI: 10.1080/15265161.2022.2127970.
Affiliation(s)
- Lukas J Meier: University of Cambridge; Technical University of Munich
13. Rahimzadeh V, Lawson J, Baek J, Dove ES. Automating Justice: An Ethical Responsibility of Computational Bioethics. The American Journal of Bioethics 2022; 22:30-33. PMID: 35737496. PMCID: PMC9761488. DOI: 10.1080/15265161.2022.2075051.
14. Sabatello M. Wrongful Birth: AI-Tools for Moral Decisions in Clinical Care in the Absence of Disability Ethics. The American Journal of Bioethics 2022; 22:43-46. PMID: 35737491. PMCID: PMC9720610. DOI: 10.1080/15265161.2022.2075971.
15. Barwise A, Pickering B. The AI Needed for Ethical Decision Making Does Not Exist. The American Journal of Bioethics 2022; 22:46-49. PMID: 35737497. DOI: 10.1080/15265161.2022.2075052.
16. Klugman CM, Gerke S. Rise of the Bioethics AI: Curse or Blessing? The American Journal of Bioethics 2022; 22:35-37. PMID: 35737489. DOI: 10.1080/15265161.2022.2075056.
17. Sauerbrei A, Hallowell N, Kerasidou A. AIgorithmic Ethics: A Technically Sweet Solution to a Non-Problem. The American Journal of Bioethics 2022; 22:28-30. PMID: 35737495. DOI: 10.1080/15265161.2022.2075050.
Affiliation(s)
- Aurelia Sauerbrei: Ethox Centre, Nuffield Department of Population Health, University of Oxford
- Nina Hallowell: Ethox Centre, Nuffield Department of Population Health, University of Oxford
- Angeliki Kerasidou: Ethox Centre, Nuffield Department of Population Health, University of Oxford
18. Gundersen T, Bærøe K. Ethical Algorithmic Advice: Some Reasons to Pause and Think Twice. The American Journal of Bioethics 2022; 22:26-28. PMID: 35737486. DOI: 10.1080/15265161.2022.2075053.
19. DeMarco JP, Ford PJ, Rose SL. Implicit Fuzzy Specifications, Inferior to Explicit Balancing. The American Journal of Bioethics 2022; 22:21-23. PMID: 35737490. DOI: 10.1080/15265161.2022.2075970.
20. Char D. Important Design Questions for Algorithmic Ethics Consultation. The American Journal of Bioethics 2022; 22:38-40. PMID: 35737487. DOI: 10.1080/15265161.2022.2075054.
21. Demaree-Cotton J, Earp BD, Savulescu J. How to Use AI Ethically for Ethical Decision-Making. The American Journal of Bioethics 2022; 22:1-3. PMID: 35737501. DOI: 10.1080/15265161.2022.2075968.
22. Pilkington B, Binkley C. Disproof of Concept: Resolving Ethical Dilemmas Using Algorithms. The American Journal of Bioethics 2022; 22:81-83. PMID: 35737493. DOI: 10.1080/15265161.2022.2087789.
23. Chambers T. An All-Too-Human Enterprise. The American Journal of Bioethics 2022; 22:33-35. PMID: 35737502. DOI: 10.1080/15265161.2022.2075969.
Affiliation(s)
- Tod Chambers: Northwestern University Feinberg School of Medicine
24. Coin A, Dubljević V. Using Algorithms to Make Ethical Judgements: METHAD vs. the ADC Model. The American Journal of Bioethics 2022; 22:41-43. PMID: 35737500. DOI: 10.1080/15265161.2022.2075967.
Affiliation(s)
- Allen Coin: North Carolina State University at Raleigh
25. Biller-Andorno N, Ferrario A, Gloeckler S. In Search of a Mission: Artificial Intelligence in Clinical Ethics. The American Journal of Bioethics 2022; 22:23-25. PMID: 35737488. DOI: 10.1080/15265161.2022.2075055.