1. Sovrano F, Hine E, Anzolut S, Bacchelli A. Simplifying software compliance: AI technologies in drafting technical documentation for the AI Act. Empirical Software Engineering 2025; 30:91. [PMID: 40191404; PMCID: PMC11965209; DOI: 10.1007/s10664-025-10645-x]
Abstract
The European AI Act has introduced specific technical documentation requirements for AI systems. Complying with them is challenging because it demands advanced knowledge of both legal and technical matters, a combination that is rare among software developers and legal professionals alike. Consequently, small and medium-sized enterprises may face high compliance costs. In this study, we explore how contemporary AI technologies, including ChatGPT and an existing compliance tool (DoXpert), can aid software developers in creating technical documentation that complies with the AI Act. In particular, we demonstrate how these tools can identify gaps in existing documentation relative to the provisions of the AI Act. Using open-source high-risk AI systems as case studies, we collaborated with legal experts to evaluate how closely tool-generated assessments align with expert opinions. Findings show partial alignment overall, notable shortcomings in ChatGPT (3.5 and 4), and a moderate (and statistically significant) rank-biserial correlation between DoXpert assessments and expert judgments. Nonetheless, these findings underscore the potential of AI, combined with human analysis, to alleviate the compliance burden, supporting the broader goal of fostering responsible and transparent AI development under emerging regulatory frameworks.
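
The reported statistic is a standard nonparametric effect size. As a minimal illustration (not the authors' code), the following Python sketch derives a rank-biserial correlation from a Mann-Whitney U test, using hypothetical tool scores for documentation items that experts did and did not flag as non-compliant:

```python
# Minimal sketch, assuming hypothetical data: tool-assigned gap-severity
# scores for documentation items experts flagged vs. did not flag.
import numpy as np
from scipy.stats import mannwhitneyu

scores_flagged = np.array([0.90, 0.80, 0.75, 0.60, 0.85])
scores_unflagged = np.array([0.40, 0.55, 0.30, 0.65, 0.20])

u, p = mannwhitneyu(scores_flagged, scores_unflagged, alternative="two-sided")
n1, n2 = len(scores_flagged), len(scores_unflagged)
f = u / (n1 * n2)   # common-language effect size: P(flagged score > unflagged score)
r_rb = 2 * f - 1    # rank-biserial correlation, in [-1, 1]
print(f"U = {u:.1f}, p = {p:.3f}, rank-biserial r = {r_rb:.2f}")
```

Values of r near +1 mean the tool ranks expert-flagged items almost uniformly higher than unflagged ones; values near 0 indicate chance-level agreement, which is why a moderate, statistically significant correlation is a meaningful result here.
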
Affiliation(s)
- Francesco Sovrano
  - ETH Zurich, Collegium Helveticum, Zurich, Switzerland
  - University of Zurich, Zurich, Switzerland
- Emmie Hine
  - Yale Digital Ethics Center, New Haven, CT, USA
  - University of Bologna, Bologna, Italy
  - KU Leuven, Leuven, Belgium

2. Goktas P, Grzybowski A. Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI. J Clin Med 2025; 14:1605. [PMID: 40095575; PMCID: PMC11900311; DOI: 10.3390/jcm14051605]
Abstract
Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts.
Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted.
Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome, an adaptive AI oversight framework aligned with global policy trends and the Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. The study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs.
Conclusions: Trustworthy AI in healthcare requires more than technical advancement; it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.
Affiliation(s)
- Polat Goktas
  - UCD School of Computer Science, University College Dublin, D04 V1W8 Dublin, Ireland
- Andrzej Grzybowski
  - Department of Ophthalmology, University of Warmia and Mazury, 10-719 Olsztyn, Poland
  - Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 61-553 Poznan, Poland

3. Wang X, Zhang T, Gong H, Li J, Wu B, Chen B, Zhao S. Game-theoretic analysis of governance and corruption in China's pharmaceutical industry. Front Med (Lausanne) 2024; 11:1439864. [PMID: 39206179; PMCID: PMC11349649; DOI: 10.3389/fmed.2024.1439864]
Abstract
Introduction: With the rapid development of China's pharmaceutical industry, issues of corruption and regulatory effectiveness have become increasingly prominent, posing critical challenges to public health safety and the industry's sustainable development.
Methods: This paper adopts a bounded-rationality perspective and employs an evolutionary game-theoretic approach to establish a tripartite evolutionary game model involving pharmaceutical companies, third-party auditing organizations, and health insurance regulatory agencies. It analyzes the stable strategies of the parties involved and the sensitivity of key parameters within this tripartite game system.
Results: The study reveals that adherence to health insurance regulations by pharmaceutical companies, refusal of bribes by third-party auditing organizations, and the implementation of lenient regulation by health insurance agencies can form an effective governance equilibrium. This equilibrium contributes to reducing corruption in the pharmaceutical industry, balancing the interests of all parties, and promoting healthy industry development.
Discussion: Pharmaceutical companies must weigh the costs of compliance against the benefits and risks of non-compliance while maximizing profits; third-party auditing organizations must choose between fulfilling their duties and accepting bribes, weighing their economic benefits against their professional reputation; and health insurance regulatory agencies adjust their strategies between strict and lenient regulation to maximize social welfare. The paper suggests enhancing policy support, strengthening compliance supervision, improving audit independence, and adjusting regulatory strategies to optimize governance in the pharmaceutical industry. The research also highlights the role of collaborative efforts among the three parties in achieving sustainable governance, and a numerical simulation analysis demonstrates the impact of various parameters on the evolutionary stability of the system, offering practical insights into the implementation of regulatory policies. This work provides new insights for policy formulation and governance in China's pharmaceutical sector, with significant reference value for guiding the industry's sustainable development.
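
As a rough illustration of the modeling machinery involved (not the paper's actual model or parameters), the following Python sketch integrates three-population replicator dynamics for the comply/violate, refuse/accept-bribes, and strict/lenient strategy pairs; all payoff parameters are hypothetical stand-ins:

```python
# Minimal sketch of tripartite replicator dynamics; all payoffs hypothetical.
# x: share of pharmaceutical companies complying with insurance regulations
# y: share of third-party auditors refusing bribes
# z: share of regulators applying strict (vs. lenient) regulation
import numpy as np
from scipy.integrate import solve_ivp

def replicator(t, s, c=1.0, fine=3.0, rep=0.5, bribe=2.0, k=0.8):
    x, y, z = s
    # Compliance avoids the expected fine for being caught (by an honest
    # audit or a strict regulator) at compliance cost c.
    dx = x * (1 - x) * (fine * (y + z - y * z) - c)
    # Refusing bribes preserves reputation rep but forgoes bribes offered
    # by the remaining violators (encountered with probability 1 - x).
    dy = y * (1 - y) * (rep - bribe * (1 - x))
    # Strict regulation costs k and pays off only while violations persist.
    dz = z * (1 - z) * (fine * (1 - x) - k)
    return [dx, dy, dz]

sol = solve_ivp(replicator, (0, 50), [0.5, 0.5, 0.5])
print("final shares (comply, refuse, strict):", np.round(sol.y[:, -1], 3))
```

With these toy payoffs the trajectory tends toward (1, 1, 0), i.e., the comply/refuse-bribes/lenient-regulation equilibrium the abstract describes; sweeping the parameters reproduces the kind of sensitivity analysis the paper's numerical simulations perform.
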
Affiliation(s)
- Xi Wang
  - Faculty of Humanities and Social Sciences, Macau Polytechnic University, Macau, Macau SAR, China
- Tao Zhang
  - Faculty of Humanities and Social Sciences, Macau Polytechnic University, Macau, Macau SAR, China
- Hanxiang Gong
  - The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, Guangdong, China
- Jinghua Li
  - School of Public Health, Guangzhou Medical University, Guangzhou, Guangdong, China
- Baoling Wu
  - Faculty of Humanities and Social Sciences, Macau Polytechnic University, Macau, Macau SAR, China
- Baoxin Chen
  - Pingshan Hospital, Southern Medical University, Shenzhen, Guangdong, China
- Shufang Zhao
  - Faculty of Humanities and Social Sciences, Macau Polytechnic University, Macau, Macau SAR, China

4. Liefgreen A, Weinstein N, Wachter S, Mittelstadt B. Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. AI & Society 2023; 39:2183-2199. [PMID: 39309255; PMCID: PMC11415467; DOI: 10.1007/s00146-023-01684-3]
Abstract
Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values.
Affiliation(s)
- Alice Liefgreen
  - Hillary Rodham Clinton School of Law, University of Swansea, Swansea, SA2 8PP, UK
  - School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights Road, Reading, RG6 6AL, UK
- Netta Weinstein
  - School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights Road, Reading, RG6 6AL, UK
- Sandra Wachter
  - Oxford Internet Institute, University of Oxford, 1 St. Giles, Oxford, OX1 3JS, UK
- Brent Mittelstadt
  - Oxford Internet Institute, University of Oxford, 1 St. Giles, Oxford, OX1 3JS, UK

5. Mökander J, Sheth M, Watson DS, Floridi L. The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Minds Mach (Dordr) 2023. [DOI: 10.1007/s11023-022-09620-y]
Abstract
Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level ethical principles. However, a gap remains between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things be sorted so that their grouping promotes successful action towards some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that the classifications proposed in previous literature use one of three mental models: the Switch, i.e., a binary approach according to which systems either are or are not considered AI systems depending on their characteristics; the Ladder, i.e., a risk-based approach that classifies systems according to the ethical risks they pose; and the Matrix, i.e., a multi-dimensional classification that takes various aspects into account, such as context, input data, and decision model. Each of these models for classifying AI systems comes with its own strengths and weaknesses. By conceptualising the different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the vocabulary needed to demarcate the material scope of their AI governance frameworks.
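
To make the three mental models concrete, here is a small illustrative Python sketch; the interfaces, thresholds, and example dimensions are invented for this summary (the Ladder's tiers merely echo the EU AI Act's risk categories) and are not taken from the paper:

```python
# Illustrative sketch only: three ways to demarcate AI governance scope.
from dataclasses import dataclass
from enum import Enum

def switch(uses_machine_learning: bool) -> bool:
    """The Switch: a binary test of whether a system counts as AI at all."""
    return uses_machine_learning

class RiskTier(Enum):
    """The Ladder: an ordered, risk-based classification."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

def ladder(risk_score: float) -> RiskTier:
    # Hypothetical thresholds on a [0, 1] risk score.
    if risk_score < 0.25:
        return RiskTier.MINIMAL
    if risk_score < 0.50:
        return RiskTier.LIMITED
    if risk_score < 0.90:
        return RiskTier.HIGH
    return RiskTier.UNACCEPTABLE

@dataclass
class MatrixProfile:
    """The Matrix: a multi-dimensional profile instead of a single label."""
    context: str          # e.g., "clinical decision support"
    input_data: str       # e.g., "patient records"
    decision_model: str   # e.g., "gradient-boosted trees"
    human_oversight: bool

profile = MatrixProfile("clinical decision support", "patient records",
                        "gradient-boosted trees", human_oversight=True)
print(switch(True), ladder(0.7), profile)
```

The trade-off the abstract names is visible even at this scale: the Switch is easy to apply but coarse, the Ladder orders systems by risk but requires a defensible risk score, and the Matrix is expressive but does not reduce to a single governance decision.
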

6. Mökander J, Sheth M, Gersbro-Sundler M, Blomgren P, Floridi L. Challenges and best practices in corporate AI governance: Lessons from the biopharmaceutical industry. Frontiers in Computer Science 2022. [DOI: 10.3389/fcomp.2022.1068361]
Abstract
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.