1
Abubakar AM, Seymour RG, Gardner A, Lambert I, Fyson R, Wright N. Cognitive impairment and exploitation: connecting fragments of a bigger picture through data. J Public Health (Oxf) 2024; 46:498-505. [PMID: 39358202; PMCID: PMC11638330; DOI: 10.1093/pubmed/fdae266]
Abstract
BACKGROUND Exploitation poses a significant public health concern. This paper highlights 'jigsaw pieces' of statistical evidence, indicating cognitive impairment as a pre- or co-existing factor in exploitation. METHODS We reviewed English Safeguarding Adults Collection (SAC) data and Safeguarding Adults Reviews (SARs) from 2017 to 2022. Data relevant to exploitation and cognitive impairment were analysed using summary statistics and analysis of variance. RESULTS Despite estimates suggesting cognitive impairments may be prevalent among people experiencing exploitation in England, national datasets miss opportunities to illuminate this issue. Although SAC data include statistics on support needs and various forms of abuse and exploitation, they lack intersectional data. Significant regional variations in recorded safeguarding investigations and potential conflation between abuse and exploitation also suggest data inconsistencies. Increased safeguarding investigations for people who were not previously in contact with services indicate that adults may be 'slipping through the net'. SARs, although representing serious cases, provide stronger evidence linking cognitive impairment with risks of exploitation. CONCLUSIONS This study identifies opportunities to collect detailed information on cognitive impairment and exploitation. The extremely limited quantitative evidence base could be enhanced using existing data channels to build a more robust picture, as well as to improve prevention, identification and response efforts for 'at-risk' adults.
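The methods described above amount to descriptive summary statistics plus a one-way analysis of variance across groups (for instance, English regions). As a minimal sketch of that kind of test, using invented enquiry rates rather than anything from the SAC data or the authors' analysis, a one-way ANOVA can be run in Python as follows:

# Hypothetical one-way ANOVA sketch; the regional rates below are invented.
from scipy.stats import f_oneway

# Illustrative safeguarding-enquiry rates per 100,000 adults, by region and year.
north = [55.2, 60.1, 58.7, 62.3, 64.0]
midlands = [41.5, 43.2, 45.8, 44.1, 46.9]
south = [30.4, 29.8, 33.1, 32.7, 35.0]

f_stat, p_value = f_oneway(north, midlands, south)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p-value indicates regional variation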
Affiliation(s)
- Aisha M Abubakar
- Rights Lab, University of Nottingham, University Park, Nottingham NG7 2RD, UK
- Rowland G Seymour
- School of Mathematics, University of Birmingham, Birmingham B15 2TT, UK
- Alison Gardner
- Rights Lab, University of Nottingham, University Park, Nottingham NG7 2RD, UK
- School of Sociology and Social Policy, University of Nottingham, University Park, Nottingham NG7 2RD, UK
- Imogen Lambert
- Rights Lab, University of Nottingham, University Park, Nottingham NG7 2RD, UK
- Rachel Fyson
- School of Sociology and Social Policy, University of Nottingham, University Park, Nottingham NG7 2RD, UK
- Nicola Wright
- Rights Lab, University of Nottingham, University Park, Nottingham NG7 2RD, UK
- School of Health Sciences, Queen's Medical Centre, University of Nottingham, Nottingham NG7 2UH, UK
2
Olvera RG, Plagens C, Ellison S, Klingler K, Kuntz AK, Chase RP. Community-Engaged Data Science (CEDS): A Case Study of Working with Communities to Use Data to Inform Change. J Community Health 2024; 49:1062-1072. [PMID: 38958892; DOI: 10.1007/s10900-024-01377-y]
Abstract
Data-informed decision making is a critical goal for many community-based public health research initiatives. However, community partners often encounter challenges when interacting with data. The Community-Engaged Data Science (CEDS) model offers a goal-oriented, iterative guide for communities to collaborate with research data scientists through data ambassadors. This study presents a case study of CEDS applied to research on the opioid epidemic in 18 counties in Ohio as part of the HEALing Communities Study (HCS). Data ambassadors played a pivotal role in empowering community coalitions to translate data into action using key steps of CEDS, which included: data landscapes identifying available data in the community; data action plans from logic models based on community data needs and data gaps; data collection/sharing agreements; and data systems including portals and dashboards. Throughout the CEDS process, data ambassadors emphasized sustainable data workflows, supporting continued data engagement beyond the HCS. The implementation of CEDS in Ohio underscored the importance of relationship building, timing of implementation, understanding communities' data preferences, and flexibility when working with communities. Researchers should consider implementing CEDS and integrating a data ambassador in community-based research to enhance community data engagement and drive data-informed interventions to improve public health outcomes.
Affiliation(s)
- Ramona G Olvera
- The Center for the Advancement of Team Science, Analytics, and Systems Thinking (CATALYST), College of Medicine, The Ohio State University, 700 Ackerman Road, Suite 4000, Columbus, OH, 43202, USA.
- Courtney Plagens
- College of Medicine, HEALing Communities Study, The Ohio State University, Columbus, OH, USA
- Sylvia Ellison
- College of Medicine, HEALing Communities Study, The Ohio State University, Columbus, OH, USA
- Kesla Klingler
- College of Medicine, HEALing Communities Study, The Ohio State University, Columbus, OH, USA
- Amy K Kuntz
- College of Medicine, HEALing Communities Study, The Ohio State University, Columbus, OH, USA
- Rachel P Chase
- Research Information Technology, College of Medicine, The Ohio State University, Columbus, OH, USA
3
Ryan M. We're only human after all: a critique of human-centred AI. AI & Society 2024; 40:1303-1319. [PMID: 40225308; PMCID: PMC11985563; DOI: 10.1007/s00146-024-01976-2]
Abstract
The use of a 'human-centred' artificial intelligence approach (HCAI) has substantially increased over the past few years in academic texts (over 1,600); in institutions (27 universities have HCAI labs, such as Stanford, Sydney, Berkeley, and Chicago); in tech companies (e.g., Microsoft, IBM, and Google); in politics (e.g., G7, G20, UN, EU, and EC); and in major institutional bodies (e.g., World Bank, World Economic Forum, UNESCO, and OECD). Intuitively, it sounds very appealing: placing human concerns at the centre of AI development and use. However, this paper will use insights from the works of Michel Foucault (mostly The Order of Things) to argue that the HCAI approach is deeply problematic in its assumptions. In particular, this paper will criticise five main assumptions commonly found within HCAI: human-AI hybridisation is desirable and unproblematic; humans are not currently at the centre of the AI universe; we should use humans as a way to guide AI development; AI is the next step in a continuous path of human progress; and increasing human control over AI will reduce harmful bias. This paper will contribute to the field of philosophy of technology by using Foucault's analysis to examine assumptions found in HCAI (it provides a Foucauldian conceptual analysis of a current approach, human-centredness, that aims to influence the design and development of a transformative technology, AI); it will contribute to AI ethics debates by offering a critique of human-centredness in AI (by choosing Foucault, it provides a bridge between older ideas and contemporary issues); and it will also contribute to Foucault studies (by using his work to engage in contemporary debates, such as AI).
Affiliation(s)
- Mark Ryan
- Wageningen Economic Research, Wageningen University and Research, Droevendaalsesteeg 4, 6708 PB Wageningen, The Netherlands
4
Wang X, Wu YC, Ji X, Fu H. Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices. Front Artif Intell 2024; 7:1320277. [PMID: 38836021; PMCID: PMC11148221; DOI: 10.3389/frai.2024.1320277]
Abstract
Introduction Algorithmic decision-making systems are widely used in various sectors, including criminal justice, employment, and education. While these systems are celebrated for their potential to enhance efficiency and objectivity, they also pose risks of perpetuating and amplifying societal biases and discrimination. This paper aims to provide an in-depth analysis of the types of algorithmic discrimination, exploring both the challenges and potential solutions. Methods The methodology includes a systematic literature review, analysis of legal documents, and comparative case studies across different geographic regions and sectors. This multifaceted approach allows for a thorough exploration of the complexity of algorithmic bias and its regulation. Results We identify five primary types of algorithmic bias: bias by algorithmic agents, discrimination based on feature selection, proxy discrimination, disparate impact, and targeted advertising. The analysis of the U.S. legal and regulatory framework reveals a landscape of principled regulations, preventive controls, consequential liability, self-regulation, and heteronomy regulation. A comparative perspective is also provided by examining the status of algorithmic fairness in the EU, Canada, Australia, and Asia. Conclusion Real-world impacts are demonstrated through case studies focusing on criminal risk assessments and hiring algorithms, illustrating the tangible effects of algorithmic discrimination. The paper concludes with recommendations for interdisciplinary research, proactive policy development, public awareness, and ongoing monitoring to promote fairness and accountability in algorithmic decision-making. As the use of AI and automated systems expands globally, this work highlights the importance of developing comprehensive, adaptive approaches to combat algorithmic discrimination and ensure the socially responsible deployment of these powerful technologies.
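Of the five bias types listed above, 'disparate impact' is the one most commonly screened with a simple numeric check. The sketch below illustrates the widely used four-fifths (80%) rule on invented selection counts; the group names, numbers, and threshold are assumptions for the example, not figures from the paper:

# Hedged illustration of a disparate-impact screen (four-fifths rule) on made-up data.
hired = {"group_a": 45, "group_b": 18}        # applicants selected by the algorithm
applicants = {"group_a": 100, "group_b": 80}  # applicants considered, per group

rates = {g: hired[g] / applicants[g] for g in hired}
reference = max(rates.values())  # selection rate of the most-favoured group

for group, rate in rates.items():
    ratio = rate / reference
    flag = "potential disparate impact" if ratio < 0.8 else "within the 80% threshold"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")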
Affiliation(s)
- Ying Cheng Wu
- School of Law, University of Washington, Seattle, WA, United States
- Xueliang Ji
- Faculty of Law, The Chinese University of Hong Kong, Sha Tin, Hong Kong SAR, China
- Hongpeng Fu
- Khoury College of Computer Sciences, Northeastern University, Seattle, WA, United States
5
Govers M, van Amelsvoort P. A theoretical essay on socio-technical systems design thinking in the era of digital transformation. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2023. [DOI: 10.1007/s11612-023-00675-8]
Abstract
Digital technology is here to stay. Currently, digital technologies are unleashing the fourth industrial revolution. This so-called digital transformation is about the integration of digital technology into all areas of society. Within organisations, work is fundamentally changing, which affects how organisations will operate and deliver value to customers. Furthermore, but often forgotten, it is also about a cultural change that requires organisations to continually challenge their status quo, experiment, and get comfortable with failure. Digital possibilities are emerging which cannot be viewed separately from social effects in organised (eco-)systems and for the people in those systems. The challenge is to jointly optimise technical and social aspects, creating added value in a sustainable manner while improving the quality of working life. As we have an 'organisational choice', technical possibilities can be aligned with social needs and requirements, and vice versa. This alignment forms the basis of socio-technical systems (STS) thinking, which is necessary for developing sustainable organisational solutions. Socio-technical theory and practice originally focus on optimising social and technical aspects in organisations. Therefore, in this essay we choose an STS perspective, especially the STS Design (STS-D) approach elaborated by the Lowlands STS school of thought. As digital technologies offer new affordances and constraints for organisational design, we aim, with this essay, to merge STS-D with digital thinking. We start with a brief sketch of current digital technologies. After this, we discuss organisational design in terms of the division of labour and the penetration of digital technology into the nature of work. Then, the STS-D core design principles and design sequence, specifically from the Lowlands school of thought, are introduced and adapted for digital thinking. This is followed by a section on design routines for unlocking the potential of designing future, digital-receptive workplaces and organisations. We end the essay with some closing remarks and reflections.
6
From algorithmic governance to govern algorithm. AI & Society 2022. [DOI: 10.1007/s00146-022-01554-4]
7
Timotijevic L, Carr I, De La Cueva J, Eftimov T, Hodgkins CE, Koroušić Seljak B, Mikkelsen BE, Selnes T, Van't Veer P, Zimmermann K. Responsible Governance for a Food and Nutrition E-Infrastructure: Case Study of the Determinants and Intake Data Platform. Front Nutr 2022; 8:795802. [PMID: 35402471; PMCID: PMC8984108; DOI: 10.3389/fnut.2021.795802]
Abstract
The focus of the current paper is on the design of responsible governance of a food consumer science e-infrastructure, using the case study of the Determinants and Intake Data Platform (DI Data Platform). One of the key challenges for implementation of the DI Data Platform is how to develop responsible governance that observes the ethical and legal frameworks of big data research and innovation, whilst simultaneously capitalizing on the huge opportunities offered by open science and the use of big data in food consumer science research. We address this challenge with a specific focus on four key governance considerations: data type and technology; data ownership and intellectual property; data privacy and security; and institutional arrangements for ethical governance. The paper concludes with a set of responsible research governance principles that can inform the implementation of the DI Data Platform, and in particular: consider both individual and group privacy; monitor the power and control (e.g., between the scientist and the research participant) in the process of research; question the veracity of new knowledge based on big data analytics; and understand the diverse interpretations of scientists' responsibility across different jurisdictions.
Affiliation(s)
- Lada Timotijevic
- School of Psychology, University of Surrey, Guildford, United Kingdom
- Correspondence: Lada Timotijevic
- Indira Carr
- School of Law, University of Surrey, Guildford, United Kingdom
- Charo E. Hodgkins
- School of Psychology, University of Surrey, Guildford, United Kingdom
- Bent E. Mikkelsen
- Department of Geosciences and Natural Resource Management, University of Copenhagen, Copenhagen, Denmark
- Trond Selnes
- Wageningen Economic Research, Wageningen University and Research Centre, Wageningen, Netherlands
- Pieter Van't Veer
- Wageningen Economic Research, Wageningen University and Research Centre, Wageningen, Netherlands
- Karin Zimmermann
- Wageningen Economic Research, Wageningen University and Research Centre, Wageningen, Netherlands
8
Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals. AI & Society 2021. [DOI: 10.1007/s00146-021-01170-8]
9
Ryan M, Antoniou J, Brooks L, Jiya T, Macnish K, Stahl B. Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality. Science and Engineering Ethics 2021; 27:16. [PMID: 33686527; PMCID: PMC7977017; DOI: 10.1007/s11948-021-00293-x]
Abstract
This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD + AI) using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues, using qualitative tools to analyse findings from ten targeted case studies from a range of domains. The analysis coalesces the singular ethical issues identified in the literature into clusters, to offer a comparison with the proposed classification in the literature. The results show that, despite the variety of different social domains, fields, and applications of AI, there is overlap and correlation between the organisations' ethical concerns. This more detailed understanding of ethics in BD + AI is required to ensure that the multitude of suggested ways of addressing them can be targeted and succeed in mitigating the pertinent ethical issues that are often discussed in the literature.
Affiliation(s)
- Mark Ryan
- Wageningen Economic Research, Wageningen University and Research, Wageningen, The Netherlands
10
Measuring objective and subjective well-being: dimensions and data sources. International Journal of Data Science and Analytics 2020. [DOI: 10.1007/s41060-020-00224-2]
Abstract
Well-being is an important value in people's lives, and it can be considered an index of societal progress. Researchers have suggested two main approaches for the overall measurement of well-being: objective and subjective well-being. Both approaches, as well as their relevant dimensions, have traditionally been captured with surveys. During the last decades, new data sources have been suggested as an alternative or complement to traditional data. This paper aims to present the theoretical background of well-being by distinguishing between objective and subjective approaches, their relevant dimensions, the new data sources used for their measurement, and relevant studies. We also intend to shed light on dimensions and data sources that remain largely unexplored and that could potentially serve as a key input for public policy and social development.
11
Jose JM, Yilmaz E, Magalhães J, Castells P, Ferro N, Silva MJ, Martins F. bias goggles: Graph-Based Computation of the Bias of Web Domains Through the Eyes of Users. Lecture Notes in Computer Science 2020. [PMCID: PMC7148229; DOI: 10.1007/978-3-030-45439-5_52]
Abstract
Ethical issues, along with transparency, disinformation, and bias, are central concerns of our information society. In this work, we propose the bias goggles model for computing the bias characteristics of web domains with respect to user-defined concepts, based on the structure of the web graph. To support the model, we exploit well-known propagation models and the newly introduced Biased-PR PageRank algorithm, which models various behaviours of biased surfers. An implementation discussion, along with a preliminary evaluation over a subset of the Greek web graph, shows the applicability of the model even in real time for small graphs, and showcases rather promising and interesting results. Finally, we pinpoint important directions for future work. A constantly evolving prototype of the bias goggles system is readily available.
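The abstract names two ingredients: score propagation over the web graph and a PageRank variant for biased surfers. As a rough analogue of that idea, and not the authors' Biased-PR algorithm, the sketch below runs a personalised PageRank in which the random surfer teleports only to seed pages assumed to represent the user-defined concept; the graph, domain names, and seed choice are invented for illustration.

# Personalised-PageRank sketch on a tiny hypothetical web graph of hyperlinks.
import networkx as nx

G = nx.DiGraph([
    ("news.example", "partyA.example"),
    ("news.example", "partyB.example"),
    ("blog.example", "partyA.example"),
    ("partyA.example", "blog.example"),
    ("forum.example", "partyB.example"),
])

# Seed page(s) taken to represent the user-defined concept (here, "party A").
seeds = {"partyA.example": 1.0}

# Teleportation is restricted to the seeds, so high scores mean a domain is
# strongly connected to the concept through the link structure.
scores = nx.pagerank(G, alpha=0.85, personalization=seeds)
for domain, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{domain:20s} {score:.3f}")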
12
Gasparri G. Risks and Opportunities of RegTech and SupTech Developments. Front Artif Intell 2019; 2:14. [PMID: 33733103; PMCID: PMC7861216; DOI: 10.3389/frai.2019.00014]
Affiliation(s)
- Giorgio Gasparri
- Commissione Nazionale per le Società e la Borsa, Rome, Italy
- Financial Markets Law, University of Naples Federico II, Naples, Italy
13
Fothergill BT, Knight W, Stahl BC, Ulnicane I. Responsible Data Governance of Neuroscience Big Data. Front Neuroinform 2019; 13:28. [PMID: 31110477; PMCID: PMC6499198; DOI: 10.3389/fninf.2019.00028]
Abstract
Current discussions of the ethical aspects of big data are shaped by concerns regarding the social consequences of both the widespread adoption of machine learning and the ways in which biases in data can be replicated and perpetuated. We instead focus here on the ethical issues arising from the use of big data in international neuroscience collaborations. Neuroscience innovation relies upon neuroinformatics, large-scale data collection and analysis enabled by novel and emergent technologies. Each step of this work involves aspects of ethics, ranging from concerns for adherence to informed consent or animal protection principles and issues of data re-use at the stage of data collection, to data protection and privacy during data processing and analysis, and issues of attribution and intellectual property at the data-sharing and publication stages. Significant dilemmas and challenges with far-reaching implications are also inherent, including reconciling the ethical imperative for openness and validation with data protection compliance and considering future innovation trajectories or the potential for misuse of research results. Furthermore, these issues are subject to local interpretations within different ethical cultures that apply diverse legal systems and emphasise different aspects. Neuroscience big data require a concerted approach to research across boundaries, wherein ethical aspects are integrated within a transparent, dialogical data governance process. We address this by developing the concept of "responsible data governance," applying the principles of Responsible Research and Innovation (RRI) to the challenges presented by the governance of neuroscience big data in the Human Brain Project (HBP).
Affiliation(s)
- B. Tyr Fothergill
- Centre for Computing and Social Responsibility, School of Computer Science and Informatics, Faculty of Computing, Engineering and Media, De Montfort University, Leicester, United Kingdom