1. Ulloa R, Richter AC, Makhortykh M, Urman A, Kacperski CS. Representativeness and face-ism: Gender bias in image search. New Media & Society 2024; 26:3541-3567. [PMID: 38774557] [PMCID: PMC11102855] [DOI: 10.1177/14614448221100699]
Abstract
Implicit and explicit gender biases in media representations of individuals have long existed. Women are less likely to be represented in gender-neutral media content (representation bias), and their face-to-body ratio in images is often lower (face-ism bias). In this article, we look at representativeness and face-ism in search engine image results. We systematically queried four search engines (Google, Bing, Baidu, Yandex) from three locations, using two browsers and in two waves, with gender-neutral (person, intelligent person) and gendered (woman, intelligent woman, man, intelligent man) terminology, accessing the top 100 image results. We used automated methods to identify the gender expression (female/male) of the individuals depicted and to calculate their face-to-body ratios. We find that, as in other forms of media, search engine images perpetuate biases to the detriment of women, confirming the existence of the representation and face-ism biases. In-depth algorithmic debiasing with a specific focus on gender bias is overdue.
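The face-ism measure in this study reduces to a simple ratio once face and body bounding boxes are available from a detector. A minimal sketch (the box format and values are hypothetical; the authors' exact measurement pipeline is not specified in the abstract):

```python
def face_to_body_ratio(face_box, body_box):
    """Face-ism index: height of the face relative to the depicted body.

    Boxes are (x1, y1, x2, y2) in pixels, as a face/person detector
    might return them; higher values mean the image emphasizes the face.
    """
    face_h = face_box[3] - face_box[1]
    body_h = body_box[3] - body_box[1]
    if body_h <= 0:
        raise ValueError("body box has non-positive height")
    return face_h / body_h

# Hypothetical detections: the face spans half of the visible body height.
ratio = face_to_body_ratio((40, 10, 90, 60), (20, 10, 110, 110))  # 0.5
```

Comparing the distribution of such ratios between images of women and men is what reveals the face-ism bias the authors report.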
Affiliation(s)
- Roberto Ulloa
- GESIS—Leibniz Institute for the Social Sciences, Germany
2. Wang X, Wu YC, Ji X, Fu H. Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices. Front Artif Intell 2024; 7:1320277. [PMID: 38836021] [PMCID: PMC11148221] [DOI: 10.3389/frai.2024.1320277]
Abstract
Introduction: Algorithmic decision-making systems are widely used in various sectors, including criminal justice, employment, and education. While these systems are celebrated for their potential to enhance efficiency and objectivity, they also pose risks of perpetuating and amplifying societal biases and discrimination. This paper provides an in-depth analysis of the types of algorithmic discrimination, exploring both the challenges and potential solutions.
Methods: The methodology includes a systematic literature review, analysis of legal documents, and comparative case studies across different geographic regions and sectors. This multifaceted approach allows for a thorough exploration of the complexity of algorithmic bias and its regulation.
Results: We identify five primary types of algorithmic bias: bias by algorithmic agents, discrimination based on feature selection, proxy discrimination, disparate impact, and targeted advertising. The analysis of the U.S. legal and regulatory framework reveals a landscape of principled regulations, preventive controls, consequential liability, self-regulation, and heteronomy regulation. A comparative perspective is also provided by examining the status of algorithmic fairness in the EU, Canada, Australia, and Asia.
Conclusion: Real-world impacts are demonstrated through case studies focusing on criminal risk assessments and hiring algorithms, illustrating the tangible effects of algorithmic discrimination. The paper concludes with recommendations for interdisciplinary research, proactive policy development, public awareness, and ongoing monitoring to promote fairness and accountability in algorithmic decision-making. As the use of AI and automated systems expands globally, this work highlights the importance of developing comprehensive, adaptive approaches to combat algorithmic discrimination and ensure the socially responsible deployment of these powerful technologies.
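One of the identified types, disparate impact, has a standard quantitative screen in US practice: the EEOC "four-fifths rule," under which a selection rate for any group below 80% of the highest group's rate is treated as prima facie evidence of adverse impact. A minimal sketch (group names and rates are hypothetical):

```python
def adverse_impact_ratios(selection_rates):
    """Ratio of each group's selection rate to the highest group's rate.

    Under the EEOC four-fifths rule, a ratio below 0.8 flags potential
    disparate impact.
    """
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

rates = {"group_a": 0.60, "group_b": 0.42}  # hypothetical selection rates
ratios = adverse_impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["group_b"]
```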
Affiliation(s)
- Ying Cheng Wu
- School of Law, University of Washington, Seattle, WA, United States
- Xueliang Ji
- Faculty of Law, The Chinese University of Hong Kong, Sha Tin, Hong Kong SAR, China
- Hongpeng Fu
- Khoury College of Computer Sciences, Northeastern University, Seattle, WA, United States
3. Prinzi F, Currieri T, Gaglio S, Vitabile S. Shallow and deep learning classifiers in medical image analysis. Eur Radiol Exp 2024; 8:26. [PMID: 38438821] [PMCID: PMC10912073] [DOI: 10.1186/s41747-024-00428-2]
Abstract
An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which nevertheless is its most cited and used sub-branch in the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights on the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest, and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps of classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the interpretability of machine learning algorithms is finally discussed, providing a future perspective on trustworthy artificial intelligence.
Relevance statement: The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow learning to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice.
Key points
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics).
• Deep classifiers implement automatic feature extraction and classification.
• Classifier selection is based on data and computational resource availability, the task, and explanation needs.
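The selection guidance in these key points can be caricatured as a decision rule; the thresholds below are illustrative assumptions, not figures from the review:

```python
def suggest_classifier(n_samples, needs_explanation, has_gpu):
    """Toy rule of thumb for shallow vs. deep model choice.

    Encodes the review's qualitative guidance (dataset size,
    explainability, compute); the 1,000-sample cutoff is an assumption.
    """
    if n_samples < 1000 or needs_explanation:
        # Small data or strict explainability: shallow models on
        # hand-crafted (e.g., radiomic) features.
        return "shallow (e.g., random forest or XGBoost)"
    if has_gpu:
        return "deep (e.g., CNN or vision transformer)"
    return "shallow (limited compute)"
```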
Affiliation(s)
- Francesco Prinzi
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Department of Computer Science and Technology, University of Cambridge, Cambridge, CB2 1TN, UK
- Tiziana Currieri
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Salvatore Gaglio
- Department of Engineering, University of Palermo, Palermo, Italy
- Institute for High-Performance Computing and Networking, National Research Council (ICAR-CNR), Palermo, Italy
- Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
4. Wu CC, Islam MM, Poly TN, Weng YC. Artificial Intelligence in Kidney Disease: A Comprehensive Study and Directions for Future Research. Diagnostics (Basel) 2024; 14:397. [PMID: 38396436] [PMCID: PMC10887584] [DOI: 10.3390/diagnostics14040397]
Abstract
Artificial intelligence (AI) has emerged as a promising tool in the field of healthcare, with an increasing number of research articles evaluating its applications in the domain of kidney disease. To comprehend the evolving landscape of AI research in kidney disease, a bibliometric analysis is essential. The purposes of this study are to systematically analyze and quantify the scientific output, research trends, and collaborative networks in the application of AI to kidney disease. This study collected AI-related articles published between 2012 and 20 November 2023 from the Web of Science. Descriptive analyses of research trends in the application of AI in kidney disease were used to determine the growth rate of publications by authors, journals, institutions, and countries. Visualization network maps of country collaborations and author-provided keyword co-occurrences were generated to show the hotspots and research trends in AI research on kidney disease. The initial search yielded 673 articles, of which 631 were included in the analyses. Our findings reveal a noteworthy exponential growth trend in the annual publications of AI applications in kidney disease. Nephrology Dialysis Transplantation emerged as the leading publisher, accounting for 4.12% (26 out of 631 papers), followed by the American Journal of Transplantation at 3.01% (19/631) and Scientific Reports at 2.69% (17/631). The primary contributors were predominantly from the United States (n = 164, 25.99%), followed by China (n = 156, 24.72%) and India (n = 62, 9.83%). In terms of institutions, Mayo Clinic led with 27 contributions (4.27%), while Harvard University (n = 19, 3.01%) and Sun Yat-Sen University (n = 16, 2.53%) secured the second and third positions, respectively. This study summarized AI research trends in the field of kidney disease through statistical analysis and network visualization. 
The findings show that the field of AI in kidney disease is dynamic and rapidly progressing, and the analysis provides valuable information for recognizing emerging patterns, technological shifts, and interdisciplinary collaborations that contribute to the advancement of knowledge in this critical domain.
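The keyword co-occurrence maps behind such bibliometric visualizations reduce to counting how often keyword pairs appear together across papers. A minimal sketch (the keyword lists are hypothetical):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(papers):
    """Count unordered keyword pairs appearing in the same paper."""
    pairs = Counter()
    for keywords in papers:
        # sorted(set(...)) deduplicates and fixes a canonical pair order.
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [  # hypothetical author-keyword lists
    ["deep learning", "kidney disease", "prognosis"],
    ["deep learning", "kidney disease"],
]
counts = cooccurrence_counts(papers)
# counts[("deep learning", "kidney disease")] == 2
```

Edge weights like these, thresholded and laid out with a network tool, yield the hotspot maps described above.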
Affiliation(s)
- Chieh-Chen Wu
- Department of Healthcare Information and Management, School of Health and Medical Engineering, Ming Chuan University, Taipei 111, Taiwan
- Md. Mohaimenul Islam
- Outcomes and Translational Sciences, College of Pharmacy, The Ohio State University, Columbus, OH 43210, USA
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- Yung-Ching Weng
- Department of Healthcare Information and Management, School of Health and Medical Engineering, Ming Chuan University, Taipei 111, Taiwan
5. O'Connell S, Cannon DM, Broin PÓ. Predictive modelling of brain disorders with magnetic resonance imaging: A systematic review of modelling practices, transparency, and interpretability in the use of convolutional neural networks. Hum Brain Mapp 2023; 44:6561-6574. [PMID: 37909364] [PMCID: PMC10681646] [DOI: 10.1002/hbm.26521]
Abstract
Brain disorders comprise several psychiatric and neurological disorders which can be characterized by impaired cognition, mood alteration, psychosis, depressive episodes, and neurodegeneration. Clinical diagnoses primarily rely on a combination of life history information and questionnaires, with a distinct lack of discriminative biomarkers in use for psychiatric disorders. Symptoms across brain conditions are associated with functional alterations of cognitive and emotional processes, which can correlate with anatomical variation; structural magnetic resonance imaging (MRI) data of the brain are therefore an important focus of research, particularly for predictive modelling. With the advent of large MRI data consortia (such as the Alzheimer's Disease Neuroimaging Initiative) facilitating a greater number of MRI-based classification studies, convolutional neural networks (CNNs)-deep learning models well suited to image processing tasks-have become increasingly popular for research into brain conditions. This has resulted in a myriad of studies reporting impressive predictive performances, demonstrating the potential clinical value of deep learning systems. However, methodologies can vary widely across studies, making them difficult to compare and/or reproduce, potentially limiting their clinical application. Here, we conduct a qualitative systematic literature review of 55 studies carrying out CNN-based predictive modelling of brain disorders using MRI data and evaluate them based on three principles-modelling practices, transparency, and interpretability. We propose several recommendations to enhance the potential for the integration of CNNs into clinical care.
Affiliation(s)
- Shane O'Connell
- School of Mathematical and Statistical Sciences, College of Science and Engineering, University of Galway, Galway, Ireland
- Dara M. Cannon
- Clinical Neuroimaging Laboratory, Galway Neuroscience Centre, College of Medicine, Nursing and Health Sciences, University of Galway, Galway, Ireland
- Pilib Ó. Broin
- School of Mathematical and Statistical Sciences, College of Science and Engineering, University of Galway, Galway, Ireland
6. Bernardi FA, Alves D, Crepaldi N, Yamada DB, Lima VC, Rijo R. Data Quality in Health Research: Integrative Literature Review. J Med Internet Res 2023; 25:e41446. [PMID: 37906223] [PMCID: PMC10646672] [DOI: 10.2196/41446]
Abstract
BACKGROUND: Decision-making and strategies to improve service delivery must be supported by reliable health data to generate consistent evidence on health status. The data quality management process must ensure the reliability of collected data. Consequently, various methodologies to improve the quality of services are applied in the health field. At the same time, scientific research is constantly evolving to improve data quality through better reproducibility and empowerment of researchers and offers patient groups tools for secure data sharing and privacy compliance.
OBJECTIVE: Through an integrative literature review, the aim of this work was to identify and evaluate digital health technology interventions designed to support the conduct of health research based on data quality.
METHODS: A search was conducted in 6 electronic scientific databases in January 2022: PubMed, SCOPUS, Web of Science, Institute of Electrical and Electronics Engineers Digital Library, Cumulative Index of Nursing and Allied Health Literature, and Latin American and Caribbean Health Sciences Literature. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and flowchart were used to visualize the search strategy results in the databases.
RESULTS: After analyzing and extracting the outcomes of interest, 33 papers were included in the review. The studies covered the period of 2017-2021 and were conducted in 22 countries. Key findings revealed variability and a lack of consensus in assessing data quality domains and metrics. Data quality factors included the research environment, application time, and development steps. Strategies for improving data quality involved using business intelligence models, statistical analyses, data mining techniques, and qualitative approaches.
CONCLUSIONS: The main barriers to health data quality are technical, motivational, economic, political, legal, ethical, organizational, human resources, and methodological.
The data quality process and techniques, from precollection to gathering, postcollection, and analysis, are critical for the final result of a study and for the quality of processes and decision-making in a health care organization. The findings highlight the need for standardized practices and collaborative efforts to enhance data quality in health research. Finally, context guides decisions regarding data quality strategies and techniques.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.1101/2022.05.31.22275804
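Of the data quality metrics the review finds little consensus on, completeness is among the simplest to operationalize: the share of required field values actually present. A minimal sketch (the record layout and field names are hypothetical):

```python
def completeness(records, required_fields):
    """Fraction of required field values that are non-missing."""
    total = len(records) * len(required_fields)
    filled = sum(
        1
        for record in records
        for field in required_fields
        if record.get(field) not in (None, "")
    )
    return filled / total if total else 1.0

records = [  # hypothetical patient records
    {"id": 1, "dob": "1980-01-01", "diagnosis": "I10"},
    {"id": 2, "dob": None, "diagnosis": "E11"},
]
score = completeness(records, ["dob", "diagnosis"])  # 0.75
```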
Affiliation(s)
- Domingos Alves
- Ribeirão Preto School of Medicine, University of Sao Paulo, Ribeirão Preto, Brazil
- Nathalia Crepaldi
- Ribeirão Preto School of Medicine, University of Sao Paulo, Ribeirão Preto, Brazil
- Diego Bettiol Yamada
- Ribeirão Preto School of Medicine, University of Sao Paulo, Ribeirão Preto, Brazil
- Vinícius Costa Lima
- Ribeirão Preto School of Medicine, University of Sao Paulo, Ribeirão Preto, Brazil
- Rui Rijo
- Ribeirão Preto School of Medicine, University of Sao Paulo, Ribeirão Preto, Brazil
- Polytechnic Institute of Leiria, Leiria, Portugal
- Institute for Systems and Computers Engineering, Coimbra, Portugal
- Center for Research in Health Technologies and Services, Porto, Portugal
7. Basereh M, Caputo A, Brennan R. Automatic transparency evaluation for open knowledge extraction systems. J Biomed Semantics 2023; 14:12. [PMID: 37653549] [PMCID: PMC10468861] [DOI: 10.1186/s13326-023-00293-9]
Abstract
BACKGROUND: This paper proposes Cyrus, a new transparency evaluation framework for Open Knowledge Extraction (OKE) systems. Cyrus is based on state-of-the-art transparency models and linked data quality assessment dimensions, bringing together a comprehensive view of transparency dimensions for OKE systems. The Cyrus framework is used to evaluate the transparency of three linked datasets, which are built from the same corpus by three state-of-the-art OKE systems. The evaluation is performed automatically using a combination of three state-of-the-art FAIRness (Findability, Accessibility, Interoperability, Reusability) assessment tools and a linked data quality evaluation framework called Luzzu. This evaluation covers the six Cyrus data transparency dimensions for which existing assessment tools could be identified. OKE systems extract structured knowledge from unstructured or semi-structured text in the form of linked data. These systems are fundamental components of advanced knowledge services. However, due to the lack of a transparency framework for OKE, most OKE systems are not transparent, meaning that their processes and outcomes are not understandable and interpretable. A comprehensive framework sheds light on different aspects of transparency, allows comparison between the transparency of different systems by supporting the development of transparency scores, and gives insight into a system's transparency weaknesses and ways to improve them. Automatic transparency evaluation helps with scalability and facilitates transparency assessment. The transparency problem has been identified as critical by the European Union's Trustworthy Artificial Intelligence (AI) guidelines. In this paper, Cyrus provides the first comprehensive view of transparency dimensions for OKE systems by merging the perspectives of the FAccT (Fairness, Accountability, and Transparency), FAIR, and linked data quality research communities.
RESULTS: In Cyrus, data transparency includes ten dimensions, grouped into two categories. In this paper, six of these dimensions (provenance, interpretability, understandability, licensing, availability, and interlinking) have been evaluated automatically for three state-of-the-art OKE systems, using state-of-the-art metrics and tools. Covid-on-the-Web is identified as having the highest mean transparency.
CONCLUSIONS: This is the first research to study the transparency of OKE systems that provides a comprehensive set of transparency dimensions spanning ethics, trustworthy AI, and data quality approaches to transparency. It also demonstrates, for the first time, how to perform automated transparency evaluation that combines existing FAIRness and linked data quality assessment tools. We show that state-of-the-art OKE systems vary in the transparency of the linked data they generate and that these differences can be automatically quantified, leading to potential applications in trustworthy AI, compliance, data protection, data governance, and future OKE system design and testing.
Affiliation(s)
- Maryam Basereh
- School of Computing, Dublin City University, Dublin, Ireland
- Annalina Caputo
- School of Computing, Dublin City University, Dublin, Ireland
- Rob Brennan
- ADAPT Centre, School of Computer Science, University College Dublin, Dublin, Ireland
8. Algorithmic Fairness in AI. Business & Information Systems Engineering 2023. [DOI: 10.1007/s12599-023-00787-x]
9. Bellenguez O, Brauner N, Tsoukiàs A. Is there an ethical Operational Research practice? And what this implies for our research? EURO Journal on Decision Processes 2023. [DOI: 10.1016/j.ejdp.2023.100029]
10. Hofeditz L, Clausen S, Rieß A, Mirbabaie M, Stieglitz S. Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring. Electronic Markets 2022; 32:2207-2233. [PMID: 36568961] [PMCID: PMC9764302] [DOI: 10.1007/s12525-022-00600-9]
Abstract
Assuming that potential biases of Artificial Intelligence (AI)-based systems can be identified and controlled for (e.g., by providing high-quality training data), employing such systems to augment human resource (HR) decision-makers in candidate selection provides an opportunity to make selection processes more objective. However, as the final hiring decision is likely to remain with humans, prevalent human biases could still cause discrimination. This work investigates the impact of an AI-based system's candidate recommendations on humans' hiring decisions and how this relation could be moderated by an Explainable AI (XAI) approach. We used a self-developed platform and conducted an online experiment with 194 participants. Our quantitative and qualitative findings suggest that the recommendations of an AI-based system can reduce discrimination against older and female candidates but appear to cause fewer selections of foreign-race candidates. Contrary to our expectations, the same XAI approach moderated these effects differently depending on the context.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s12525-022-00600-9.
Affiliation(s)
- Lennart Hofeditz
- Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
- Sünje Clausen
- Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
- Alexander Rieß
- Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
- Milad Mirbabaie
- Paderborn University, Warburger Str. 100, 33098 Paderborn, Germany
- Stefan Stieglitz
- Universität Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
11. Barati M, Ansari B. Effects of algorithmic control on power asymmetry and inequality within organizations. Journal of Management Control 2022. [DOI: 10.1007/s00187-022-00347-6]
12. Kuppler M, Kern C, Bach RL, Kreuter F. From fair predictions to just decisions? Conceptualizing algorithmic fairness and distributive justice in the context of data-driven decision-making. Frontiers in Sociology 2022; 7:883999. [PMID: 36299413] [PMCID: PMC9589041] [DOI: 10.3389/fsoc.2022.883999]
Abstract
Prediction algorithms are regularly used to support and automate high-stakes policy decisions about the allocation of scarce public resources. However, data-driven decision-making raises problems of algorithmic fairness and justice. So far, fairness and justice are frequently conflated, with the consequence that distributive justice concerns are not addressed explicitly. In this paper, we approach this issue by distinguishing (a) fairness as a property of the algorithm used for the prediction task from (b) justice as a property of the allocation principle used for the decision task in data-driven decision-making. The distinction highlights the different logics underlying concerns about fairness and justice and permits a more systematic investigation of the interrelations between the two concepts. We propose a new notion of algorithmic fairness called error fairness, which requires that prediction errors do not differ systematically across individuals. Drawing on sociological and philosophical discourse on local justice, we present a principled way to include distributive justice concerns in data-driven decision-making. We propose that allocation principles are just if they adhere to well-justified distributive justice principles. Moving beyond the one-sided focus on algorithmic fairness, we thereby take a first step toward the explicit implementation of distributive justice in data-driven decision-making.
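The proposed error fairness notion can be checked empirically by comparing prediction errors across groups; a minimal sketch with hypothetical predictions:

```python
def groupwise_mae(y_true, y_pred, groups):
    """Mean absolute prediction error per group.

    Under error fairness, these per-group errors should not differ
    systematically.
    """
    sums, counts = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        sums[group] = sums.get(group, 0.0) + abs(truth - pred)
        counts[group] = counts.get(group, 0) + 1
    return {group: sums[group] / counts[group] for group in sums}

# Hypothetical risk predictions for members of groups "a" and "b".
maes = groupwise_mae(
    y_true=[1.0, 0.0, 1.0, 0.0],
    y_pred=[0.9, 0.1, 0.4, 0.5],
    groups=["a", "a", "b", "b"],
)
gap = max(maes.values()) - min(maes.values())  # large gap suggests error unfairness
```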
Affiliation(s)
- Matthias Kuppler
- Department of Social Sciences, University of Siegen, Siegen, Germany
- Christoph Kern
- School of Social Sciences, University of Mannheim, Mannheim, Germany
- Joint Program in Survey Methodology, University of Maryland, College Park, MD, United States
- Ruben L. Bach
- School of Social Sciences, University of Mannheim, Mannheim, Germany
- Frauke Kreuter
- Joint Program in Survey Methodology, University of Maryland, College Park, MD, United States
- Department of Statistics, LMU Munich, Munich, Germany
13. Morik KJ, Kotthaus H, Fischer R, Mücke S, Jakobs M, Piatkowski N, Pauly A, Heppe L, Heinrich D. Yes we care!-Certification for machine learning methods through the care label framework. Front Artif Intell 2022; 5:975029. [PMID: 36213164] [PMCID: PMC9532619] [DOI: 10.3389/frai.2022.975029]
Abstract
Machine learning applications have become ubiquitous. They range from embedded control in production machines over process optimization in diverse areas (e.g., traffic, finance, sciences) to direct user interactions like advertising and recommendations. This has led to an increased effort to make machine learning trustworthy. Explainable and fair AI have already matured; they address the knowledgeable user and the application engineer. However, there are users who want to deploy a learned model in a similar way as their washing machine. These stakeholders do not want to spend time understanding the model, but want to rely on guaranteed properties. What are the relevant properties? How can they be expressed to the stakeholder without presupposing machine learning knowledge? How can they be guaranteed for a certain implementation of a machine learning model? These questions move far beyond the current state of the art, and we want to address them here. We propose a unified framework that certifies learning methods via care labels. They are easy to understand and draw inspiration from well-known certificates like textile labels or property cards of electronic devices. Our framework considers both the machine learning theory and a given implementation. We test the implementation's compliance with theoretical properties and bounds.
Affiliation(s)
- Katharina J. Morik
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
- Correspondence: Katharina J. Morik
- Helena Kotthaus
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
- Raphael Fischer
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
- Sascha Mücke
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
- Matthias Jakobs
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
- Nico Piatkowski
- Fraunhofer Institute for Intelligent Analysis and Information Systems, Sankt Augustin, Germany
- Andreas Pauly
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
- Lukas Heppe
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
- Danny Heinrich
- Faculty of Computer Science, TU Dortmund University, Dortmund, Germany
14. From algorithmic governance to govern algorithm. AI & Society 2022. [DOI: 10.1007/s00146-022-01554-4]
15. Hilliard A, Guenole N, Leutner F. Robots are judging me: Perceived fairness of algorithmic recruitment tools. Front Psychol 2022; 13:940456. [PMID: 35959005] [PMCID: PMC9358218] [DOI: 10.3389/fpsyg.2022.940456]
Abstract
Recent years have seen rapid advancements in selection assessments, shifting away from human and toward algorithmic judgments of candidates. Indeed, algorithmic recruitment tools have been created to screen candidates’ resumes, assess psychometric characteristics through game-based assessments, and judge asynchronous video interviews, among other applications. While research into candidate reactions to these technologies is still in its infancy, early research in this regard has explored user experiences and fairness perceptions. In this article, we review applicants’ perceptions of the procedural fairness of algorithmic recruitment tools based on key findings from seven key studies, sampling over 1,300 participants between them. We focus on the sub-facets of behavioral control, the extent to which individuals feel their behavior can influence an outcome, and social presence, whether there is the perceived opportunity for a social connection and empathy. While perceptions of overall procedural fairness are mixed, we find that fairness perceptions concerning behavioral control and social presence are mostly negative. Participants feel less confident that they are able to influence the outcome of algorithmic assessments compared to human assessments because they are more objective and less susceptible to manipulation. Participants also feel that the human element is lost when these tools are used since there is a lack of perceived empathy and interpersonal warmth. Since this field of research is relatively under-explored, we end by proposing a research agenda, recommending that future studies could examine the role of individual differences, demographics, and neurodiversity in influencing fairness perceptions of algorithmic recruitment.
Collapse
Affiliation(s)
- Airlie Hilliard
- Institute of Management Studies, Goldsmiths, University of London, London, United Kingdom
- Holistic AI, London, United Kingdom
| | - Nigel Guenole
- Institute of Management Studies, Goldsmiths, University of London, London, United Kingdom
| | - Franziska Leutner
- Institute of Management Studies, Goldsmiths, University of London, London, United Kingdom
- HireVue, Inc., London, United Kingdom
| |
Collapse
|
16
|
Shin D, Lim JS, Ahmad N, Ibahrine M. Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01525-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
17
|
Hameed H, Usman M, Khan MZ, Hussain A, Abbas H, Imran MA, Abbasi QH. Privacy-Preserving British Sign Language Recognition Using Deep Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:4316-4319. [PMID: 36086044 DOI: 10.1109/embc48229.2022.9871491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Sign language is a means of communication between the deaf community and hearing people, using hand gestures, facial expressions, and body language. It has the same level of complexity as spoken language, but it does not employ the same sentence structure as English. The motions in sign language comprise a range of distinct hand and finger articulations that are occasionally synchronized with the head, face, and body. Existing sign language recognition systems are mainly camera-based and have fundamental limitations: poor lighting conditions, potential training challenges with longer video sequence data, and serious privacy concerns. This study presents a first-of-its-kind, contactless, and privacy-preserving British Sign Language (BSL) recognition system using radar and deep learning algorithms. The six most common emotions are considered in this proof-of-concept study, namely confused, depressed, happy, hate, lonely, and sad. The collected data are represented in the form of spectrograms. Three state-of-the-art deep learning models, namely InceptionV3, VGG19, and VGG16, then extract spatiotemporal features from the spectrograms. Finally, BSL emotions are accurately identified by classifying the spectrograms into the considered emotion signs. Comparative simulation results demonstrate that a maximum classification accuracy of 93.33% is obtained across all classes using the VGG16 model.
Collapse
|
18
|
Chen CL, Golubchik L, Pal R. Achieving Transparency Report Privacy in Linear Time. ACM JOURNAL OF DATA AND INFORMATION QUALITY 2022. [DOI: 10.1145/3460001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
An accountable algorithmic transparency report (ATR) should ideally investigate (a) the transparency of the underlying algorithm and (b) the fairness of the algorithmic decisions, and at the same time preserve data subjects’ privacy. However, a provably formal study of the impact on data subjects’ privacy caused by the utility of releasing an ATR (one that investigates transparency and fairness) has yet to be addressed in the literature. The far-reaching benefit of such a study lies in the methodical characterization of privacy-utility trade-offs for the public release of ATRs, and their consequential application-specific impact on the dimensions of society, politics, and economics. In this paper, we first investigate and demonstrate potential privacy hazards brought on by the deployment of transparency and fairness measures in released ATRs. To preserve data subjects’ privacy, we then propose a linear-time optimal-privacy scheme, built upon standard linear fractional programming (LFP) theory, for announcing ATRs, subject to constraints controlling the tolerance of privacy perturbation on the utility of transparency schemes. Subsequently, we quantify the privacy-utility trade-offs induced by our scheme and analyze the impact of privacy perturbation on fairness measures in ATRs. To the best of our knowledge, this is the first analytical work that simultaneously addresses the trade-offs among the triad of privacy, utility, and fairness in algorithmic transparency reports.
Collapse
Affiliation(s)
| | | | - Ranjan Pal
- University of Michigan, Ann Arbor, MI, USA
| |
Collapse
|
19
|
Combining Telecom Data with Heterogeneous Data Sources for Traffic and Emission Assessments—An Agent-Based Approach. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 2022. [DOI: 10.3390/ijgi11070366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
To create quality decision-making tools that would contribute to transport sustainability, we need to build models relying on accurate, timely, and sufficiently disaggregated data. In spite of today’s ubiquity of big data, practical applications are still limited and have not reached technology readiness. Among them, passively generated telecom data are promising for studying travel-pattern generation. The objective of this study is twofold: first, to demonstrate how telecom data can be fused with other data sources and used to feed a traffic model; second, to simulate traffic using an agent-based approach and assess the emissions produced by the model’s scenario. Taking Novi Sad as a case study, we simulated the traffic composition at 1-s resolution using the GAMA platform and calculated its emissions at 1-h resolution. We used telecom data together with population and GIS data to calculate spatial-temporal movement and imported it into the ABM. Traffic flow was calibrated and validated with data from automatic vehicle counters, while air quality data were used to validate emissions. The results demonstrate the value of using diverse data sets for the creation of decision-making tools. We believe that this study is a positive endeavor toward combining big data and ABM in urban studies.
Collapse
|
20
|
Schmitz-Luhn B, Chandler J. Ethical and Legal Aspects of Technology-Assisted Care in Neurodegenerative Disease. J Pers Med 2022; 12:jpm12061011. [PMID: 35743795 PMCID: PMC9225587 DOI: 10.3390/jpm12061011] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/17/2022] [Accepted: 06/18/2022] [Indexed: 11/16/2022] Open
Abstract
Technological solutions are increasingly seen as a way to respond to the demands of managing complex chronic conditions, especially neurodegenerative diseases such as Parkinson’s Disease. All of these new possibilities provide a variety of chances to improve the lives of affected persons and their families, friends, and caregivers. However, there are also a number of challenges that should be considered in order to safeguard the interests of affected persons. In this article, we discuss the ethical and legal considerations associated with the use of technology-assisted care in the context of neurodegenerative conditions.
Collapse
Affiliation(s)
- Bjoern Schmitz-Luhn
- Center for Life Ethics, Bonn University, 53113 Bonn, Germany
- Correspondence: ; Tel.: +49-228-73-66100
| | - Jennifer Chandler
- Bertram Loeb Research Chair, Centre for Health Law, Policy and Ethics, University of Ottawa, Ottawa, ON K1N 6N5, Canada;
| | | |
Collapse
|
21
|
Park J, Arunachalam R, Silenzio V, Singh VK. Fairness in Mobile Phone–Based Mental Health Assessment Algorithms: Exploratory Study. JMIR Form Res 2022; 6:e34366. [PMID: 35699997 PMCID: PMC9240929 DOI: 10.2196/34366] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 03/27/2022] [Accepted: 04/10/2022] [Indexed: 11/13/2022] Open
Abstract
Background
Approximately 1 in 5 American adults experience mental illness every year. Thus, mobile phone–based mental health prediction apps that use phone data and artificial intelligence techniques for mental health assessment have become increasingly important and are being rapidly developed. At the same time, multiple artificial intelligence–related technologies (eg, face recognition and search results) have recently been reported to be biased regarding age, gender, and race. This study moves this discussion to a new domain: phone-based mental health assessment algorithms. It is important to ensure that such algorithms do not contribute to gender disparities through biased predictions across gender groups.
Objective
This research aimed to analyze the susceptibility of multiple commonly used machine learning approaches for gender bias in mobile mental health assessment and explore the use of an algorithmic disparate impact remover (DIR) approach to reduce bias levels while maintaining high accuracy.
Methods
First, we performed preprocessing and model training using the data set (N=55) obtained from a previous study. Accuracy levels and differences in accuracy across genders were computed using 5 different machine learning models. We selected the random forest model, which yielded the highest accuracy, for a more detailed audit and computed multiple metrics that are commonly used for fairness in the machine learning literature. Finally, we applied the DIR approach to reduce bias in the mental health assessment algorithm.
Results
The highest observed accuracy for the mental health assessment was 78.57%. Although this accuracy level raises optimism, the audit based on gender revealed that the performance of the algorithm was statistically significantly different between the male and female groups (eg, difference in accuracy across genders was 15.85%; P<.001). Similar trends were obtained for other fairness metrics. This disparity in performance was found to reduce significantly after the application of the DIR approach by adapting the data used for modeling (eg, the difference in accuracy across genders was 1.66%, and the reduction is statistically significant with P<.001).
Conclusions
This study grounds the need for algorithmic auditing in phone-based mental health assessment algorithms and the use of gender as a protected attribute to study fairness in such settings. Such audits and remedial steps are the building blocks for the widespread adoption of fair and accurate mental health assessment algorithms in the future.
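The audit described above compares model performance and fairness metrics across gender groups. As a minimal illustrative sketch (not the authors' code; the record format, group labels, and data below are hypothetical), two commonly used group-fairness quantities, the per-group accuracy gap and the disparate impact ratio, can be computed as follows:

```python
# Illustrative sketch of two group-fairness quantities used in audits
# like the one described above. Not the authors' implementation.
from collections import defaultdict


def group_accuracy(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns a dict mapping each group to its prediction accuracy."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}


def disparate_impact(records, positive=1):
    """Ratio of positive-prediction rates between the two groups
    (lower rate / higher rate); values near 1.0 indicate parity."""
    pos, total = defaultdict(int), defaultdict(int)
    for group, _y_true, y_pred in records:
        total[group] += 1
        pos[group] += int(y_pred == positive)
    rates = sorted(pos[g] / total[g] for g in total)
    return rates[0] / rates[-1]
```

A remediation step such as the disparate impact remover used in the study would then transform the training features until the disparate impact ratio approaches 1.0 while monitoring the accuracy gap.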
Collapse
Affiliation(s)
- Jinkyung Park
- School of Communication & Information, Rutgers University, New Brunswick, NJ, United States
| | | | - Vincent Silenzio
- School of Public Health, Rutgers University, Newark, NJ, United States
| | - Vivek K Singh
- School of Communication & Information, Rutgers University, New Brunswick, NJ, United States
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, United States
| |
Collapse
|
22
|
Cognitive architectures for artificial intelligence ethics. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01452-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics: sometimes we do not know what really goes on inside, or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their creators, beyond those commonly discussed (e.g., trolley problems and variants thereof), to which solutions cannot be hard-coded and are often still up for debate. Given the sensitivity of such social and ethical dilemmas and their implications for human society at large, when and if our AI make the “wrong” choice we need to understand how they got there in order to make corrections and prevent recurrences. This is particularly true in situations where human livelihoods are at stake (e.g., health, well-being, finance, law) or when major individual or household decisions are taken. Doing so requires opening up the “black box” of AI, especially as they act, interact, and adapt in a human world and interact with other AI in that world. In this article, we argue for the application of cognitive architectures for ethical AI, in particular for their potential contributions to AI transparency, explainability, and accountability. We need to understand how our AI reach the solutions they do, and we should seek to do this on a deeper level in terms of the machine equivalents of motivations, attitudes, values, and so on. The path to future AI is long and winding, but it could arrive faster than we think. In order to harness the positive potential outcomes of AI for humans and society (and avoid the negatives), we need to understand AI more fully in the first place, and we expect this will simultaneously contribute toward a greater understanding of their human counterparts.
Collapse
|
23
|
Public AI canvas for AI-enabled public value: A design science approach. GOVERNMENT INFORMATION QUARTERLY 2022. [DOI: 10.1016/j.giq.2022.101722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
24
|
Sharon I, Drach-Zahavy A, Srulovici E. The Effect of Outcome vs. Process Accountability-Focus on Performance: A Meta-Analysis. Front Psychol 2022; 13:795117. [PMID: 35572269 PMCID: PMC9094407 DOI: 10.3389/fpsyg.2022.795117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 04/06/2022] [Indexed: 11/23/2022] Open
Abstract
Background
The foundation of a safe practice is accountability, especially outcome- rather than process-focused accountability, particularly during pandemics such as COVID-19. Accountability is an essential behavior that promotes congruence between nursing actions and standards associated with quality of care. Moreover, the scant research examining whether one accountability focus is superior in motivating humans to better task performance yields inconclusive results.
Aims
To systematically examine the effect of an outcome- vs. process-accountability focus on performance and identify any moderating variables.
Design
Systematic review and meta-analysis.
Data sources
PsycINFO, Medline, PubMed, Scopus, and CINAHL databases, including all publications to November 2020.
Review methods
A systematic search following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was performed. Statistical analyses and forest plots were produced using MetaXL 5.3. Heterogeneity was presented using I2 statistics and Q tests, and possible publication bias was assessed with a Doi plot and the LFK index.
Results
Seven studies representing nine experiments involving 1,080 participants were included. The pooled effect of the nine experiments on task performance failed to show significant differences (mean = −0.09; 95% Confidence Interval [95%CI]: −0.21, 0.03), but a significant moderating effect of task complexity was demonstrated. Specifically, outcome accountability exerts a beneficial effect in complex tasks (mean = −0.48 [95%CI: −0.62, −0.33]), whereas process accountability improves performance in simpler tasks (mean = 0.96 [95%CI: 0.72, 1.20]).
Conclusion
These findings demonstrate that accountability focus by itself cannot serve as a sole motivator of better performance, because task complexity moderates the link between accountability focus and task performance. Outcome accountability exerts a beneficial effect for more-complex tasks, whereas process accountability improves the performance of simpler tasks. These findings are crucial in nursing, where it is typically assumed that a focus on outcomes is more important than a focus on processes.
Collapse
Affiliation(s)
- Ira Sharon
- Faculty of Social Welfare and Health Sciences, Department of Nursing, University of Haifa, Haifa, Israel
| | - Anat Drach-Zahavy
- Faculty of Social Welfare and Health Sciences, Department of Nursing, University of Haifa, Haifa, Israel
| | - Einav Srulovici
- Faculty of Social Welfare and Health Sciences, Department of Nursing, University of Haifa, Haifa, Israel
| |
Collapse
|
25
|
A Survey of Artificial Intelligence Challenges: Analyzing the Definitions, Relationships, and Evolutions. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12084054] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
In recent years, artificial intelligence has had a tremendous impact on every field, and several definitions of its different types have been provided. In the literature, most articles focus on the extraordinary capabilities of artificial intelligence. Recently, challenges such as security, safety, fairness, robustness, and energy consumption have been reported during the development of intelligent systems, and as the usage of intelligent systems increases, the number of new challenges grows. Naturally, during the evolution from artificial narrow intelligence to artificial superintelligence, the viewpoint on challenges such as security will change. In addition, the development of human-level intelligence cannot appropriately proceed without considering the full range of challenges in designing intelligent systems. Despite this situation, no study in the literature summarizes the challenges in designing artificial intelligence. In this paper, a review of these challenges is presented, and some important research questions about the future dynamism of the challenges and their relationships are answered.
Collapse
|
26
|
Basereh M, Caputo A, Brennan R. AccTEF: A Transparency and Accountability Evaluation Framework for Ontology-Based Systems. INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING 2022. [DOI: 10.1142/s1793351x22400013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This paper proposes a new accountability and transparency evaluation framework (AccTEF) for ontology-based systems (OSysts). AccTEF is based on an analysis of the relation between a set of widely accepted data governance principles, i.e. findable, accessible, interoperable, reusable (FAIR), and the concepts of accountability and transparency. The evaluation of the accountability and transparency of the input ontologies and vocabularies of OSysts is addressed by analyzing the relation between vocabulary and ontology quality evaluation metrics, FAIR, and the accountability and transparency concepts. An ontology-based knowledge extraction pipeline is used as a use case in this study. Discovering the relation between FAIR and accountability and transparency helps in identifying and mitigating risks associated with deploying OSysts, and allows design guidelines to be provided that help embed accountability and transparency in OSysts. We found that FAIR can be used as a transparency indicator. We also found that the studied vocabulary and ontology quality evaluation metrics do not cover FAIR, accountability, and transparency; accordingly, we suggest these concepts be considered as vocabulary and ontology quality evaluation aspects. To the best of our knowledge, this is the first time the relation between FAIR and the accountability and transparency concepts has been identified and used for evaluation.
Collapse
Affiliation(s)
- Maryam Basereh
- School of Computing, Dublin City University, Glasnevin Campus, Dublin, Dublin 9, Ireland
| | - Annalina Caputo
- ADAPT Centre, School of Computing, Dublin City University, Glasnevin Campus, Dublin, Ireland
| | - Rob Brennan
- ADAPT Centre, School of Computing, Dublin City University, Dublin, Dublin 9, Ireland
| |
Collapse
|
27
|
Predicting the future impact of Computer Science researchers: Is there a gender bias? Scientometrics 2022. [DOI: 10.1007/s11192-022-04337-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
The advent of large-scale bibliographic databases and powerful prediction algorithms has led to calls for data-driven approaches for targeting scarce funds at researchers with high predicted future scientific impact. The potential side-effects and fairness implications of such approaches are unknown, however. Using a large-scale bibliographic data set of N = 111,156 Computer Science researchers active from 1993 to 2016, I build and evaluate a realistic scientific impact prediction model. Given the persistent under-representation of women in Computer Science, the model is audited for disparate impact based on gender. Random forests and Gradient Boosting Machines are used to predict researchers’ h-index in 2010 from their bibliographic profiles in 2005. Based on model predictions, it is determined whether a researcher will become a high-performer with an h-index in the top 25% of the discipline-specific h-index distribution. The models predict the future h-index with an accuracy of $$R^2 = 0.875$$ and correctly classify 91.0% of researchers as high-performers and low-performers. Overall accuracy does not vary strongly across researcher gender. Nevertheless, there is indication of disparate impact against women. The models under-estimate the true h-index of female researchers more strongly than the h-index of male researchers. Further, women are 8.6% less likely to be predicted to become high-performers than men. In practice, hiring, tenure, and funding decisions based on model predictions risk perpetuating the under-representation of women in Computer Science.
Collapse
|
28
|
Malins S, Figueredo G, Jilani T, Long Y, Andrews J, Rawsthorne M, Manolescu C, Clos J, Higton F, Waldram D, Hunt D, Perez Vallejos E, Moghaddam N. Developing An Automated Assessment of In-Session Patient Activation for Psychological Therapy: A Co-Development Approach (Preprint). JMIR Med Inform 2022; 10:e38168. [DOI: 10.2196/38168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 06/07/2022] [Accepted: 06/27/2022] [Indexed: 11/13/2022] Open
|
29
|
König PD. Challenges in enabling user control over algorithm-based services. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01395-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how well they truly serve their users. A promising way to address possible undesired biases is to give users control by letting them configure a system and align its performance with their own preferences. However, as the present paper argues, this form of control over an algorithmic system demands an algorithmic literacy that also entails a certain way of making oneself knowable: users must interrogate their own dispositions and see how these can be formalized so that they can be translated into the algorithmic system. This may, however, extend already existing practices through which people are monitored and probed, and means that exerting such control requires users to direct a computational mode of thinking at themselves.
Collapse
|
30
|
Willems J, Schmidthuber L, Vogel D, Ebinger F, Vanderelst D. Ethics of robotized public services: The role of robot design and its actions. GOVERNMENT INFORMATION QUARTERLY 2022. [DOI: 10.1016/j.giq.2022.101683] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
|
31
|
Chacon A, Kausel EE, Reyes T. A longitudinal approach for understanding algorithm use. JOURNAL OF BEHAVIORAL DECISION MAKING 2022. [DOI: 10.1002/bdm.2275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Affiliation(s)
- Alvaro Chacon
- School of Engineering Pontificia Universidad Católica de Chile Santiago Chile
| | - Edgar E. Kausel
- School of Management Pontificia Universidad Católica de Chile Santiago Chile
| | - Tomas Reyes
- School of Engineering Pontificia Universidad Católica de Chile Santiago Chile
| |
Collapse
|
32
|
Kleanthous S, Kasinidou M, Barlas P, Otterbacher J. Perception of fairness in algorithmic decisions: Future developers' perspective. PATTERNS (NEW YORK, N.Y.) 2022; 3:100380. [PMID: 35079711 PMCID: PMC8767291 DOI: 10.1016/j.patter.2021.100380] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Revised: 09/13/2021] [Accepted: 10/05/2021] [Indexed: 12/04/2022]
Abstract
In this work, we investigate how students in fields adjacent to algorithms development perceive fairness, accountability, transparency, and ethics in algorithmic decision-making. Participants (N = 99) were asked to rate their agreement with statements regarding six constructs related to facets of fairness and justice in algorithmic decision-making using scenarios, in addition to defining algorithmic fairness and providing their views on possible causes of unfairness, transparency approaches, and accountability. The findings indicate that “agreeing” with a decision does not mean that the person “deserves the outcome,” that perceiving the factors used in the decision-making as “appropriate” does not make the decision of the system “fair,” and that perceiving a system’s decision as “not fair” affects participants’ “trust” in the system. Furthermore, fairness is most likely to be defined as the use of “objective factors,” and participants identify the use of “sensitive attributes” as the most likely cause of unfairness.
- Appropriate factors used in decision-making do not ensure perceived fairness
- Trust in the system is strongly affected by the user’s perception of fairness
- Algorithmic fairness is defined as the use of objective factors
- Sensitive attributes can be the most likely cause of unfairness
Fairness, accountability, transparency, and ethics (FATE) in algorithmic systems has been gaining considerable attention lately. With the continuous advancement of machine learning and artificial intelligence, researchers and tech companies are coming across incidents where algorithmic systems make non-objective decisions that may reproduce and/or amplify social stereotypes and inequalities. The research community is making a great effort to develop frameworks of fairness and algorithmic models to alleviate biases; however, we first need to understand how people perceive the complex construct of algorithmic fairness. In this work, we investigate how young and future developers perceive these concepts. Our results can inform future research on (1) understanding perceptions of algorithmic FATE, (2) highlighting the need for systematic training and education on FATE, and (3) raising awareness among young developers of the potential impact that the systems they are developing have on society.
Collapse
Affiliation(s)
- Styliani Kleanthous
- Cyprus Center for Algorithmic Transparency, Open University of Cyprus, Faculty of Pure & Applied Sciences, 33 Yiannou Kranidioti Avenue, 2220 Latsia, Nicosia, Cyprus.,Transparency in Algorithms Group, CYENS Centre of Excellence, 23 Dimarchias Square, 1016 Nicosia, Cyprus
| | - Maria Kasinidou
- Cyprus Center for Algorithmic Transparency, Open University of Cyprus, Faculty of Pure & Applied Sciences, 33 Yiannou Kranidioti Avenue, 2220 Latsia, Nicosia, Cyprus
| | - Pınar Barlas
- Transparency in Algorithms Group, CYENS Centre of Excellence, 23 Dimarchias Square, 1016 Nicosia, Cyprus
| | - Jahna Otterbacher
- Cyprus Center for Algorithmic Transparency, Open University of Cyprus, Faculty of Pure & Applied Sciences, 33 Yiannou Kranidioti Avenue, 2220 Latsia, Nicosia, Cyprus.,Transparency in Algorithms Group, CYENS Centre of Excellence, 23 Dimarchias Square, 1016 Nicosia, Cyprus
| |
Collapse
|
33
|
Sikstrom L, Maslej MM, Hui K, Findlay Z, Buchman DZ, Hill SL. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform 2022; 29:e100459. [PMID: 35012941 PMCID: PMC8753410 DOI: 10.1136/bmjhci-2021-100459] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 12/14/2021] [Indexed: 12/25/2022] Open
Abstract
OBJECTIVES Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic. Studies either measure (mathematically) competing definitions of fairness, and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature. METHODS We conducted an environmental scan of English language literature on fairness from 1960-July 31, 2021. Electronic databases Medline, PubMed and Google Scholar were searched, supplemented by additional hand searches. Data from 213 selected publications were analysed using rapid framework analysis. Search and analysis were completed in two rounds: to explore previously identified issues (a priori), as well as those emerging from the analysis (de novo). RESULTS Our synthesis identified 'Three Pillars for Fairness': transparency, impartiality and inclusion. We draw on these insights to propose a multidimensional conceptual framework to guide empirical research on the operationalisation of fairness in healthcare. DISCUSSION We apply the conceptual framework generated by our synthesis to risk assessment in psychiatry as a case study. We argue that any claim to fairness must reflect critical assessment and ongoing social and political deliberation around these three pillars with a range of stakeholders, including patients. CONCLUSION We conclude by outlining areas for further research that would bolster ongoing commitments to fairness and health equity in healthcare.
Collapse
Affiliation(s)
- Laura Sikstrom
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Anthropology, University of Toronto, Toronto, Ontario, Canada
| | - Marta M Maslej
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Katrina Hui
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| | - Zoe Findlay
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| | - Daniel Z Buchman
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
| | - Sean L Hill
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
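The "competing definitions of fairness" that the abstract above refers to can be made concrete. A minimal sketch in plain Python, using hypothetical toy data, compares two common group-fairness criteria: demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups). The same classifier can satisfy one criterion while violating the other, which is exactly the kind of tension the literature grapples with.

```python
# Toy illustration (hypothetical data): two group-fairness criteria can disagree.
# y: true labels, yhat: model predictions, g: group membership (0 or 1).

def rate(preds):
    """Fraction of positive predictions in a list; 0.0 if the list is empty."""
    return sum(preds) / len(preds) if preds else 0.0

def demographic_parity_diff(yhat, g):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|"""
    p0 = rate([p for p, a in zip(yhat, g) if a == 0])
    p1 = rate([p for p, a in zip(yhat, g) if a == 1])
    return abs(p0 - p1)

def equal_opportunity_diff(y, yhat, g):
    """|TPR in group 0 - TPR in group 1|, i.e. positive rates among y=1 cases."""
    t0 = rate([p for p, a, t in zip(yhat, g, y) if a == 0 and t == 1])
    t1 = rate([p for p, a, t in zip(yhat, g, y) if a == 1 and t == 1])
    return abs(t0 - t1)

y    = [1, 0, 1, 1, 0, 1, 0, 0]
g    = [0, 0, 0, 0, 1, 1, 1, 1]
yhat = [1, 0, 1, 1, 1, 1, 0, 0]

print(demographic_parity_diff(yhat, g))    # 0.25: parity is violated
print(equal_opportunity_diff(y, yhat, g))  # 0.0: equal opportunity holds
```

Here group 0 receives positive predictions at rate 0.75 versus 0.5 for group 1, yet every truly positive case in both groups is correctly predicted, so equal opportunity is satisfied while demographic parity is not.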
34
|
Hermann E. Leveraging Artificial Intelligence in Marketing for Social Good-An Ethical Perspective. JOURNAL OF BUSINESS ETHICS : JBE 2022; 179:43-61. [PMID: 34054170 PMCID: PMC8150633 DOI: 10.1007/s10551-021-04843-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 05/12/2021] [Indexed: 05/08/2023]
Abstract
Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities that AI systems and applications (will) provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors suggest how AI in marketing can be leveraged to promote societal and environmental well-being.
Collapse
Affiliation(s)
- Erik Hermann
- Wireless Systems, IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany
| |
Collapse
|
35
|
User Perception of Algorithmic Digital Marketing in Conditions of Scarcity. INFORM SYST 2022. [DOI: 10.1007/978-3-030-95947-0_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
36
|
Mökander J, Axente M, Casolari F, Floridi L. Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds Mach (Dordr) 2021; 32:241-268. [PMID: 34754142 PMCID: PMC8569069 DOI: 10.1007/s11023-021-09577-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Accepted: 10/14/2021] [Indexed: 11/04/2022]
Abstract
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
Collapse
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK
| | - Maria Axente
- UK All Party Parliamentary Group on AI (APPG AI), London, UK
| | - Federico Casolari
- Department of Legal Studies, University of Bologna, via Zamboni 27/29, 40126 Bologna, Italy
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK
- The Alan Turing Institute, The British Library, 96 Euston Rd, London, NW1 2DB UK
| |
Collapse
|
37
|
Langer M, König CJ. Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. HUMAN RESOURCE MANAGEMENT REVIEW 2021. [DOI: 10.1016/j.hrmr.2021.100881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
38
|
Ethics-based auditing of automated decision-making systems: intervention points and policy implications. AI & SOCIETY 2021. [DOI: 10.1007/s00146-021-01286-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations verify claims about their ADMS and (b) provide decision-subjects with justifications for the outputs produced by ADMS. In this article, we outline the conditions under which EBA procedures can be feasible and effective in practice. First, we argue that EBA is best understood as a ‘soft’ yet ‘formal’ governance mechanism. This implies that the main responsibility of auditors should be to spark ethical deliberation at key intervention points throughout the software development process and ensure that there is sufficient documentation to respond to potential inquiries. Second, we frame ADMS as parts of larger sociotechnical systems to demonstrate that to be feasible and effective, EBA procedures must link to intervention points that span all levels of organisational governance and all phases of the software lifecycle. The main function of EBA should, therefore, be to inform, formalise, assess, and interlink existing governance structures. Finally, we discuss the policy implications of our findings. To support the emergence of feasible and effective EBA procedures, policymakers and regulators could provide standardised reporting formats, facilitate knowledge exchange, provide guidance on how to resolve normative tensions, and create an independent body to oversee EBA of ADMS.
Collapse
|
39
|
Dolata M, Feuerriegel S, Schwabe G. A sociotechnical view of algorithmic fairness. INFORMATION SYSTEMS JOURNAL 2021. [DOI: 10.1111/isj.12370] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Mateusz Dolata
- Department of Informatics University of Zurich Zurich Switzerland
| | - Stefan Feuerriegel
- Department of Management, Technology, and Economics ETH Zurich Zurich Switzerland
- LMU Munich School of Management LMU Munich Munich Germany
| | - Gerhard Schwabe
- Department of Informatics University of Zurich Zurich Switzerland
| |
Collapse
|
40
|
Nordström M. AI under great uncertainty: implications and decision strategies for public policy. AI & SOCIETY 2021; 37:1703-1714. [PMID: 34511737 PMCID: PMC8421460 DOI: 10.1007/s00146-021-01263-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 08/18/2021] [Indexed: 11/25/2022]
Abstract
Decisions where there is not enough information for a well-informed decision due to unidentified consequences, options, or undetermined demarcation of the decision problem are called decisions under great uncertainty. This paper argues that public policy decisions on whether and how to implement decision-making processes based on machine learning and AI for public use are such decisions. Decisions on public policy on AI are uncertain due to three features specific to the current landscape of AI, namely (i) the vagueness of the definition of AI, (ii) uncertain outcomes of AI implementations and (iii) pacing problems. Given that many potential applications of AI in the public sector concern functions central to the public sphere, decisions on the implementation of such applications are particularly sensitive. Therefore, it is suggested that public policy-makers and decision-makers in the public sector can adopt strategies from the argumentative approach in decision theory to mitigate the established great uncertainty. In particular, the notions of framing and temporal strategies are considered.
Collapse
Affiliation(s)
- Maria Nordström
- Division of Philosophy, KTH Royal Institute of Technology, Stockholm, Sweden
| |
Collapse
|
41
|
Marabelli M, Newell S, Handunge V. The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges. JOURNAL OF STRATEGIC INFORMATION SYSTEMS 2021. [DOI: 10.1016/j.jsis.2021.101683] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
42
|
Digital Transformation and Artificial Intelligence Applied to Business: Legal Regulations, Economic Impact and Perspective. LAWS 2021. [DOI: 10.3390/laws10030070] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Digital transformation can be defined as the integration of new technologies into all areas of a company. This technological integration will ultimately imply a need to transform traditional business models. Similarly, artificial intelligence has been one of the most disruptive technologies of recent decades, with a high potential impact on business and people. Cognitive approaches that simulate both human behavior and thinking are leading to advanced analytical models that help companies to boost sales and customer engagement, improve their operational efficiency, improve their services and, in short, generate new relevant information from data. These decision-making models are based on descriptive, predictive and prescriptive analytics. This necessitates a legal framework that regulates digital change uniformly across countries and supports a proper digital transformation process under clear regulation. On the other hand, it is essential that this digital disruption is not slowed down by the regulatory framework. This work will demonstrate that AI and digital transformation will be an intrinsic part of many applications and will therefore be universally deployed. However, this implementation will have to be done under common regulations and in line with the new reality.
Collapse
|
43
|
Kwok C, Chan NK. Towards a political theory of data justice: a public good perspective. JOURNAL OF INFORMATION COMMUNICATION & ETHICS IN SOCIETY 2021. [DOI: 10.1108/jices-11-2020-0117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose
This study aims to develop an interdisciplinary political theory of data justice by connecting three major political theories of the public good with empirical studies about the functions of big data and offering normative principles for restricting and guiding the state’s data practices from a public good perspective.
Design/methodology/approach
Drawing on three major political theories of the public good – the market failure approach, the basic rights approach and the democratic approach – and critical data studies, this study synthesizes existing studies on the promises and perils of big data for public good purposes. The outcome is a conceptual paper that maps philosophical discussions about the conditions under which the state has a legitimate right to collect and use big data for public goods purposes.
Findings
This study argues that market failure, basic rights protection and deepening democracy can be normative grounds for justifying the state’s right to data collection and utilization, from the perspective of political theories of the public good. The state’s data practices, however, should be guided by three political principles, namely, the principle of transparency and accountability; the principle of fairness; and the principle of democratic legitimacy. The paper draws on empirical studies and practical examples to explicate these principles.
Originality/value
Bringing together normative political theory and critical data studies, this study contributes to a more philosophically rigorous understanding of how and why big data should be used for public good purposes while discussing the normative boundaries of such data practices.
Collapse
|
44
|
|
45
|
Hermann E, Hermann G, Tremblay JC. Ethical Artificial Intelligence in Chemical Research and Development: A Dual Advantage for Sustainability. SCIENCE AND ENGINEERING ETHICS 2021; 27:45. [PMID: 34231042 PMCID: PMC8260511 DOI: 10.1007/s11948-021-00325-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Accepted: 06/25/2021] [Indexed: 06/13/2023]
Abstract
Artificial intelligence can be a game changer to address the global challenge of humanity-threatening climate change by fostering sustainable development. Since chemical research and development lay the foundation for innovative products and solutions, this study presents a novel chemical research and development process backed with artificial intelligence and guiding ethical principles to account for both process- and outcome-related sustainability. Particularly in ethically salient contexts, ethical principles have to accompany research and development powered by artificial intelligence to promote social and environmental good and sustainability (beneficence) while preventing any harm (non-maleficence) for all stakeholders (i.e., companies, individuals, society at large) affected.
Collapse
Affiliation(s)
- Erik Hermann
- IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany.
| | | | | |
Collapse
|
46
|
Mökander J, Morley J, Taddeo M, Floridi L. Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations. SCIENCE AND ENGINEERING ETHICS 2021; 27:44. [PMID: 34231029 PMCID: PMC8260507 DOI: 10.1007/s11948-021-00319-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Accepted: 06/08/2021] [Indexed: 06/13/2023]
Abstract
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity's present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
Collapse
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
| | - Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
| | - Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
| |
Collapse
|
47
|
Kempeneer S. A big data state of mind: Epistemological challenges to accountability and transparency in data-driven regulation. GOVERNMENT INFORMATION QUARTERLY 2021. [DOI: 10.1016/j.giq.2021.101578] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
48
|
What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. ARTIF INTELL 2021. [DOI: 10.1016/j.artint.2021.103473] [Citation(s) in RCA: 65] [Impact Index Per Article: 21.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
49
|
Reimagining data responsibility: 10 new approaches toward a culture of trust in re-using data to address critical public needs. DATA & POLICY 2021. [DOI: 10.1017/dap.2021.4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
Abstract
Data and data science offer tremendous potential to address some of our most intractable public problems (including the Covid-19 pandemic). At the same time, recent years have shown some of the risks of existing and emerging technologies. An updated framework is required to balance potential and risk, and to ensure that data is used responsibly. Data responsibility is not itself a new concept. However, amid a rapidly changing technology landscape, it has become increasingly clear that the concept may need updating, in order to keep up with new trends such as big data, open data, the Internet of Things, artificial intelligence, and machine learning. This paper seeks to outline 10 approaches and innovations for data responsibility in the 21st century. The 10 emerging concepts we have identified include:
- End-to-end data responsibility
- Decision provenance
- Professionalizing data stewardship
- From data science to question science
- Contextual consent
- Responsibility by design
- Data asymmetries and data collaboratives
- Personally identifiable inference
- Group privacy
- Data assemblies
Each of these is described at greater length in the paper, and illustrated with examples from around the world. Put together, they add up to a framework or outline for policy makers, scholars, and activists who seek to harness the potential of data to solve complex social problems and advance the public good. Needless to say, the 10 approaches outlined here represent just a start. We envision this paper more as an exercise in agenda-setting than a comprehensive survey.
Collapse
|
50
|
Parent-Rocheleau X, Parker SK. Algorithms as work designers: How algorithmic management influences the design of jobs. HUMAN RESOURCE MANAGEMENT REVIEW 2021. [DOI: 10.1016/j.hrmr.2021.100838] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|