1
Zhang Z, Ling D, Tian W, Zhou C, Song M, Fang S. Public participation and outgoing audit of natural resources: Evidence from tripartite evolutionary game in China. Environmental Research 2023; 236:116734. PMID: 37500046. DOI: 10.1016/j.envres.2023.116734.
Abstract
Public participation is essential to the success of ecological civilization, yet whether it can play an effective role in the outgoing audit of natural resources (OANR) remains to be explored. This paper uses a tripartite evolutionary game to explore the interaction mechanism among the audit subjects, the leading cadres, and the public in the OANR. The research finds a two-way linkage between the audit subjects and the leading cadres. The audit subjects and the leading cadres affect the public's behavioral strategies indirectly and directly, respectively; the public, however, lacks a path to directly affect the other two parties. The ideal tripartite audit pattern, in which the audit subjects conduct due-diligence audits, the leading cadres perform their duties, and the public participates, therefore cannot be realized: the external effect of the public's strategic choice is not enough to structurally change the payoffs of the leading cadres and thereby alter their behavior. The paper traces the reasons why the public cannot yet effectively participate in the OANR to three aspects: the interpretation of the replicator dynamics equations, the particularity of the audit system, and the effectiveness of public participation. Three suggestions are put forward: encouraging citizens' indirect participation in the OANR, disclosing information about the OANR, and improving citizens' awareness of the OANR. The findings offer guidance for other developing countries seeking to promote public participation in natural resource auditing.
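To make the modelling approach concrete, the following is a minimal sketch of tripartite replicator dynamics of the kind the paper analyses. The payoff parameters and functional forms are hypothetical placeholders, not the authors' calibrated model.

```python
# Minimal sketch of tripartite replicator dynamics in the spirit of the
# paper's model. All payoff parameters are hypothetical placeholders,
# not the authors' calibrated values.
import numpy as np
from scipy.integrate import odeint

# x: share of audit subjects conducting due-diligence audits
# y: share of leading cadres performing their duties
# z: share of the public choosing to participate
def replicator(state, t, a=1.2, b=0.9, c=0.6):
    x, y, z = state
    # Hypothetical payoff gaps between each player's two strategies. Echoing
    # the paper's finding, z does not enter the other players' payoff gaps,
    # while both x and y shape the public's incentive to participate.
    dUx = a * y - 0.5            # audit subjects: audit diligently vs. not
    dUy = b * x - 0.4            # leading cadres: perform duties vs. not
    dUz = c * (x + y) / 2 - 0.3  # public: participate vs. not
    return [x * (1 - x) * dUx,
            y * (1 - y) * dUy,
            z * (1 - z) * dUz]

t = np.linspace(0, 50, 501)
trajectory = odeint(replicator, [0.5, 0.5, 0.5], t)
print(trajectory[-1])  # long-run strategy shares from an even starting point
```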
Affiliation(s)
- Zhenhua Zhang: Institute of Green Finance, Lanzhou University, Lanzhou 730000, Gansu, China
- Dan Ling: School of Management, Lanzhou University, Lanzhou 730000, Gansu, China
- Wenjia Tian: School of Economics, Lanzhou University, Lanzhou 730000, Gansu, China
- Cheng Zhou: School of Public Administration, Nanjing Normal University, Nanjing 210023, Jiangsu, China
- Malin Song: School of Statistics and Applied Mathematics, Anhui University of Finance and Economics, Bengbu 233030, China
- Shuai Fang: School of Economics and Management, Tsinghua University, Beijing 100084, China
2
Al Mamun A, Yang Q, Naznen F, Aziz NA, Masud MM. Modelling the mass adoption potential of food waste composting among rural Chinese farmers. Heliyon 2023; 9:e18998. PMID: 37609413. PMCID: PMC10440538. DOI: 10.1016/j.heliyon.2023.e18998.
Abstract
As a safe alternative to hazardous agrochemicals, food waste compost can prevent human health hazards and environmental degradation. Food waste composting has not garnered much popularity among farmers, given their sole dependence on synthetic fertilizers for high yields and commercial returns. This study therefore aimed to identify the factors influencing farmers' adoption of food waste composting for regular use. Empirical data were collected from 399 farmers residing in different second-tier cities in China through face-to-face interviews using a structured questionnaire, and partial least squares structural equation modelling (PLS-SEM) was used to estimate the model and the relationships among constructs. The perceived usefulness of food waste compost, awareness of consequences, social influence, anticipated guilt, and attitude towards food waste composting substantially affected food waste composting intention. Intriguingly, the perceived value of sustainability and ascription of responsibility had no significant effect on intention, whereas intention substantially influenced food waste composting behavior. A multi-group analysis revealed differences in the relationship between awareness of consequences and composting intention across genders and educational levels, which opens new avenues for future research and offers novel insights into the practical promotion of food waste composting. The findings can inform efforts to foster eco-friendly farming practices, innovate food waste management strategies, and mitigate the environmental deterioration caused by hazardous agrochemicals. The study also expands the literature by offering government regulators and social enterprises guidance on laws, policies, and strategies for adopting natural composting at scale, enhancing the nutritional value of food, and preventing health risks caused by toxic chemicals.
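For readers unfamiliar with the method, the sketch below illustrates the structural idea behind the analysis in a deliberately simplified form: survey items are averaged into composite construct scores, and the paths into intention are estimated by least squares. It runs on invented data with hypothetical item names; it is an intuition aid, not the authors' PLS-SEM estimator or their measurement model.

```python
# Simplified illustration of the structural idea behind PLS-SEM: average
# each construct's survey items into a standardized composite score, then
# estimate structural paths with ordinary least squares. Approximation for
# intuition only, not the iterative PLS-SEM algorithm. Data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 399  # sample size reported in the abstract
# Hypothetical 5-point Likert items for three of the constructs.
df = pd.DataFrame({
    "pu1": rng.integers(1, 6, n), "pu2": rng.integers(1, 6, n),   # perceived usefulness
    "si1": rng.integers(1, 6, n), "si2": rng.integers(1, 6, n),   # social influence
    "int1": rng.integers(1, 6, n), "int2": rng.integers(1, 6, n), # composting intention
})

def composite(items: pd.DataFrame) -> pd.Series:
    """Standardized mean of a construct's items."""
    score = items.mean(axis=1)
    return (score - score.mean()) / score.std()

X = np.column_stack([composite(df[["pu1", "pu2"]]),
                     composite(df[["si1", "si2"]])])
y = composite(df[["int1", "int2"]]).to_numpy()
# Path coefficients: usefulness -> intention, social influence -> intention.
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
print(dict(zip(["intercept", "usefulness", "social_influence"], beta.round(3))))
```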
Affiliation(s)
- Abdullah Al Mamun: UKM-Graduate School of Business, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor Darul Ehsan, Malaysia
- Qing Yang: UKM-Graduate School of Business, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor Darul Ehsan, Malaysia
- Farzana Naznen: UCSI Graduate Business School, UCSI University, Cheras 56000, Kuala Lumpur, Malaysia
- Norzalita Abd Aziz: UKM-Graduate School of Business, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor Darul Ehsan, Malaysia
- Muhammad Mehedi Masud: Faculty of Business and Economics, University of Malaya, 50603 Kuala Lumpur, Malaysia
3
Brereton TA, Malik MM, Lifson M, Greenwood JD, Peterson KJ, Overgaard SM. The Role of Artificial Intelligence Model Documentation in Translational Science: Scoping Review. Interact J Med Res 2023; 12:e45903. PMID: 37450330. PMCID: PMC10382950. DOI: 10.2196/45903.
Abstract
BACKGROUND Despite the touted potential of artificial intelligence (AI) and machine learning (ML) to revolutionize health care, clinical decision support tools, herein referred to as medical modeling software (MMS), have yet to realize the anticipated benefits. One proposed obstacle is the set of acknowledged gaps in AI translation, which stem partly from the fragmentation of the processes and resources needed to support transparent MMS documentation. The resulting absence of transparent reporting hinders the provision of evidence to support the implementation of MMS in clinical practice, and is thereby a substantial barrier to the successful translation of software from research settings to clinical practice. OBJECTIVE This study aimed to scope the current landscape of AI- and ML-based MMS documentation practices and elucidate the function of documentation in facilitating the translation of ethical and explainable MMS into clinical workflows. METHODS A scoping review was conducted in accordance with PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. PubMed was searched using Medical Subject Headings key concepts of AI, ML, ethical considerations, and explainability to identify publications detailing AI- and ML-based MMS documentation, supplemented by snowball sampling of selected reference lists. To capture implicit documentation practices not explicitly labeled as such, documentation was used as an inclusion criterion rather than a key concept. A 2-stage screening process (title and abstract screening, then full-text review) was conducted by 1 author. A data extraction template was used to record publication-related information; barriers to developing ethical and explainable MMS; available standards, regulations, frameworks, or governance strategies related to documentation; and recommendations for documentation for papers that met the inclusion criteria. RESULTS Of the 115 papers retrieved, 21 (18.3%) met the inclusion criteria. Ethics and explainability were investigated in the context of AI- and ML-based MMS documentation and translation, and data detailing the current state, challenges, and recommendations for future studies were synthesized. Notable themes defining the current state and its challenges included bias, accountability, governance, and explainability. Recommendations identified in the literature to address present barriers call for proactive evaluation of MMS, multidisciplinary collaboration, adherence to investigation and validation protocols, transparency and traceability requirements, and guiding standards and frameworks that enhance documentation efforts and support the translation of AI- and ML-based MMS. CONCLUSIONS Resolving barriers to translation, including those identified in this scoping review related to bias, accountability, governance, and explainability, is critical for MMS to deliver on expectations. Our findings suggest that transparent strategic documentation, aligning translational science and regulatory science, will support the translation of MMS by coordinating communication and reporting, reducing translational barriers, and thereby furthering the adoption of MMS.
Affiliation(s)
- Tracey A Brereton: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
- Momin M Malik: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
- Mark Lifson: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
- Jason D Greenwood: Department of Family Medicine, Mayo Clinic, Rochester, MN, United States
- Kevin J Peterson: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
4
Affiliation(s)
- Jessica Morley: Oxford Internet Institute, University of Oxford, Oxford, UK
5
Mökander J, Sheth M, Watson DS, Floridi L. The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Minds Mach (Dordr) 2023. DOI: 10.1007/s11023-022-09620-y.
Abstract
Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level ethical principles. However, a gap remains between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope; put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there is no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things be sorted so that their grouping promotes successful action towards some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that the classifications proposed in previous literature use one of three mental models: the Switch, a binary approach according to which systems either are or are not considered AI systems depending on their characteristics; the Ladder, a risk-based approach that classifies systems according to the ethical risks they pose; and the Matrix, a multi-dimensional classification that takes various aspects into account, such as context, input data, and decision model. Each of these models comes with its own set of strengths and weaknesses. By conceptualising the different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the vocabulary needed to demarcate the material scope of their AI governance frameworks.
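The three mental models can be made concrete with a short, purely hypothetical sketch: the Switch as a binary predicate, the Ladder as a mapping to risk tiers, and the Matrix as a multi-dimensional profile. The criteria, tiers, and dimensions below are invented examples, not taken from the article.

```python
# Hypothetical illustration of the article's three mental models for
# classifying AI systems; the criteria and tiers are invented examples.
from dataclasses import dataclass

@dataclass
class System:
    learns_from_data: bool
    affects_legal_rights: bool
    context: str          # e.g. "healthcare", "marketing"
    input_data: str       # e.g. "personal", "non-personal"
    autonomy: int         # 0 (human in the loop) .. 2 (fully autonomous)

def switch(s: System) -> bool:
    """The Switch: a system either is or is not 'AI'."""
    return s.learns_from_data

def ladder(s: System) -> str:
    """The Ladder: classify by the ethical risk the system poses."""
    if s.affects_legal_rights:
        return "high-risk"
    return "limited-risk" if s.learns_from_data else "minimal-risk"

def matrix(s: System) -> dict:
    """The Matrix: a multi-dimensional profile (context, input, autonomy)."""
    return {"context": s.context, "input_data": s.input_data,
            "autonomy": s.autonomy}

triage = System(True, True, "healthcare", "personal", 1)
print(switch(triage), ladder(triage), matrix(triage))
```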
6
Roberts H, Zhang J, Bariach B, Cowls J, Gilburt B, Juneja P, Tsamados A, Ziosi M, Taddeo M, Floridi L. Artificial intelligence in support of the circular economy: ethical considerations and a path forward. AI & Society 2022. DOI: 10.1007/s00146-022-01596-8.
Abstract
The world's current model for economic development is unsustainable: it encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative, and artificial intelligence (AI) is a crucial enabler of this transition. AI can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. To date, however, consideration of the ethical implications of using AI to achieve a transition to CE has been limited. This article addresses that gap. It outlines how AI is and can be used to transition towards CE, analyses the ethical risks associated with using AI for this purpose, and offers recommendations to policymakers and industry on how to minimise these risks.
7
An external stability audit framework to test the validity of personality prediction in AI hiring. Data Min Knowl Discov 2022; 36:2153-2193. PMID: 36161238. PMCID: PMC9483468. DOI: 10.1007/s10618-022-00861-0.
Abstract
Automated hiring systems are among the fastest-developing of all high-stakes AI systems. Among these are algorithmic personality tests that use insights from psychometric testing and promise to surface personality traits indicative of future success based on job seekers' resumes or social media profiles. We interrogate the validity of such systems using the stability of the outputs they produce, noting that reliability is a necessary, but not a sufficient, condition for validity. Crucially, rather than challenging or affirming the assumptions made in psychometric testing (that personality is a meaningful and measurable construct, and that personality traits are indicative of future success on the job), we frame our audit methodology around testing the assumptions made by the vendors of the algorithmic personality tests themselves. Our main contribution is a socio-technical framework for auditing the stability of algorithmic systems, supplemented with an open-source software library that implements the technical components of the audit and can be used to conduct similar stability audits of other algorithmic systems. We instantiate our framework by auditing two real-world personality prediction systems, Humantic AI and Crystal. The audit demonstrates that both systems show substantial instability with respect to key facets of measurement and hence cannot be considered valid testing instruments.
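As an illustration of what a stability check can look like, the sketch below computes rank-order agreement between trait scores produced before and after a change in one facet of measurement. The predictor and the perturbation are hypothetical stand-ins; the paper's own open-source audit library is not reproduced here.

```python
# Generic illustration of a rank-order stability check for a black-box
# personality predictor. `predict_traits` stands in for any vendor system;
# it and the perturbation are hypothetical, not the paper's audit library.
from scipy.stats import spearmanr

def predict_traits(resume_text: str) -> list[float]:
    # Placeholder scorer: a real audit would call the system under test.
    return [round(len(w) % 5 / 4, 2) for w in resume_text.split()[:5]]

def stability(resume: str, perturb) -> float:
    """Spearman correlation between trait scores before and after a
    facet-of-measurement change (e.g., file format, re-submission)."""
    original = predict_traits(resume)
    perturbed = predict_traits(perturb(resume))
    rho, _ = spearmanr(original, perturbed)
    return rho

resume = "Experienced data analyst skilled in statistics and reporting"
print(stability(resume, perturb=str.lower))  # 1.0 would mean perfectly stable
```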
8
Zicari RV. Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients. IEEE Transactions on Technology and Society 2022; 3:272-289. PMID: 36573115. PMCID: PMC9762021. DOI: 10.1109/tts.2022.3195114.
Abstract
This article's main contributions are twofold: 1) to demonstrate how to apply the European Union High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI in practice in the healthcare domain, and 2) to investigate what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified during the pandemic by an interdisciplinary team with members from academia, public hospitals, and industry. The AI system aims to help radiologists estimate and communicate the severity of damage to a patient's lungs from chest X-rays, and has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020. The methodology applied for the post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.
Affiliation(s)
- Roberto V. Zicari: Department of Business Management and Analytics, Arcada University of Applied Sciences, Helsinki, Finland
9
Mökander J, Floridi L. Operationalising AI governance through ethics-based auditing: an industry case study. AI and Ethics 2022; 3:451-468. PMID: 35669570. PMCID: PMC9152664. DOI: 10.1007/s43681-022-00171-7.
Abstract
Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA-such as the feasibility and effectiveness of different auditing procedures-have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
Affiliation(s)
- Jakob Mökander: Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford OX1 3JS, UK
- Luciano Floridi: Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford OX1 3JS, UK; Department of Legal Studies, University of Bologna, Via Zamboni 33, 40126 Bologna, Italy
10
Trust and ethics in AI. AI & Society 2022. DOI: 10.1007/s00146-022-01473-4.
11
Abstract
Artificial intelligence (AI) governance and auditing promise to bridge the gap between AI ethics principles and the responsible use of AI systems, but they require assessment mechanisms and metrics. Effective AI governance is not only about legal compliance; organizations can strive to go beyond legal requirements by proactively considering the risks inherent in their AI systems. In the past decade, investors have become increasingly active in advancing corporate social responsibility and sustainability practices. Including nonfinancial information related to environmental, social, and governance (ESG) issues in investment analyses has become mainstream practice among investors. However, the AI auditing literature is mostly silent on the role of investors. The current study addresses two research questions: (1) how companies’ responsible use of AI is included in ESG investment analyses and (2) what connections can be found between principles of responsible AI and ESG ranking criteria. We conducted a series of expert interviews and analyzed the data using thematic analysis. Awareness of AI issues, measuring AI impacts, and governing AI processes emerged as the three main themes in the analysis. The findings indicate that AI is still a relatively unknown topic for investors, and taking the responsible use of AI into account in ESG analyses is not an established practice. However, AI is recognized as a potentially material issue for various industries and companies, indicating that its incorporation into ESG evaluations may be justified. There is a need for standardized metrics for AI responsibility, while critical bottlenecks and asymmetrical knowledge relations must be tackled.
12
A Code of Digital Ethics: laying the foundation for digital ethics in a science and technology company. AI & Society 2022. DOI: 10.1007/s00146-021-01376-w.
Abstract
The rapid and dynamic nature of digital transformation challenges companies that wish to develop and deploy novel digital technologies. Like other actors faced with this transformation, companies need robust ways to guide their innovations and business decisions ethically. Digital ethics has recently featured in a plethora of practical corporate guidelines and compilations of high-level principles, but a gap remains concerning the development of sound ethical guidance in specific business contexts. As a multinational science and technology company faced with a broad range of digital ventures and associated ethical challenges, Merck KGaA has laid the foundations for bridging this gap by developing a Code of Digital Ethics (CoDE) tailored to this context. Following a comprehensive analysis of existing digital ethics guidelines, we used a reconstructive social research approach to identify 20 relevant principles and derive a code designed as a multi-purpose tool. Versatility was prioritised by defining non-prescriptive guidelines that are open to different perspectives and thus well suited to operationalisation for varied business purposes. We also chose a clear nested structure that highlights the relationships between the five core and fifteen subsidiary principles, as well as the different levels of reference (data and algorithmic systems) to which they apply. The CoDE will serve Merck KGaA and its new Digital Ethics Advisory Panel to guide ethical reflection, evaluation, and decision-making across the full spectrum of digital developments encountered and undertaken by the company, while also offering an opportunity to increase transparency for external partners, and thus trust.
13
Mökander J, Axente M, Casolari F, Floridi L. Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds Mach (Dordr) 2021; 32:241-268. PMID: 34754142. PMCID: PMC8569069. DOI: 10.1007/s11023-021-09577-4.
Abstract
The proposed European Artificial Intelligence Act (AIA) is the first attempt by a major global economy to elaborate a general legal framework for AI. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit not in those terms. Our analysis offers two main contributions. First, by describing the AIA's enforcement mechanisms in terminology borrowed from the existing literature on AI auditing, we help providers of AI systems understand how, in practice, they can demonstrate adherence to the requirements set out in the AIA. Second, by examining the AIA from an auditing perspective, we draw transferable lessons from previous research about how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful, above all the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
Affiliation(s)
- Jakob Mökander: Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford OX1 3JS, UK
- Maria Axente: UK All Party Parliamentary Group on AI (APPG AI), London, UK
- Federico Casolari: Department of Legal Studies, University of Bologna, Via Zamboni 27/29, 40126 Bologna, Italy
- Luciano Floridi: Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford OX1 3JS, UK; The Alan Turing Institute, The British Library, 96 Euston Road, London NW1 2DB, UK
14
Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds Mach (Dordr) 2021; 31:239-256. PMID: 34720418. PMCID: PMC8550007. DOI: 10.1007/s11023-021-09563-w.
Abstract
As the range of potential uses for artificial intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms, and it has prompted a proliferation of principle-based ethics codes, guidelines, and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective, as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raises a question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, how can AI ethics be made useful for AI practitioners? This is the question we address here by exploring why principles and technical translational tools are still needed, even if they are limited, and how their limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed ‘Ethics as a Service’.
15
Ethics-based auditing of automated decision-making systems: intervention points and policy implications. AI & Society 2021. DOI: 10.1007/s00146-021-01286-x.
Abstract
Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges, and the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA), that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms, can (a) help organisations verify claims about their ADMS and (b) provide decision-subjects with justifications for the outputs produced by ADMS. In this article, we outline the conditions under which EBA procedures can be feasible and effective in practice. First, we argue that EBA is best understood as a ‘soft’ yet ‘formal’ governance mechanism. This implies that the main responsibility of auditors should be to spark ethical deliberation at key intervention points throughout the software development process and to ensure that there is sufficient documentation to respond to potential inquiries. Second, we frame ADMS as parts of larger sociotechnical systems to demonstrate that, to be feasible and effective, EBA procedures must link to intervention points that span all levels of organisational governance and all phases of the software lifecycle. The main function of EBA should therefore be to inform, formalise, assess, and interlink existing governance structures. Finally, we discuss the policy implications of our findings. To support the emergence of feasible and effective EBA procedures, policymakers and regulators could provide standardised reporting formats, facilitate knowledge exchange, offer guidance on how to resolve normative tensions, and create an independent body to oversee EBA of ADMS.