1. Kemper EHM, Erenstein H, Boverhof BJ, Redekop K, Andreychenko AE, Dietzel M, Groot Lipman KBW, Huisman M, Klontzas ME, Vos F, IJzerman M, Starmans MPA, Visser JJ. ESR Essentials: how to get to valuable radiology AI: the role of early health technology assessment - practice recommendations by the European Society of Medical Imaging Informatics. Eur Radiol 2025; 35:3432-3441. PMID: 39636421; PMCID: PMC12081502; DOI: 10.1007/s00330-024-11188-3.
Abstract
AI tools in radiology are revolutionising the diagnosis, evaluation, and management of patients. However, there is a major gap between the large number of developed AI tools and those translated into daily clinical practice, which can be primarily attributed to the limited usefulness of, and trust in, current AI tools. Development to date has been largely technically driven, and little effort has been put into value-based development to ensure that AI tools will have a clinically relevant impact on patient care. An iterative, comprehensive value evaluation process covering the complete AI tool lifecycle should be part of radiology AI development. Health technology assessment (HTA) is an extensively used and comprehensive method for the value assessment of health technologies. While most aspects of value covered by HTA apply to radiology AI, additional aspects, including transparency, explainability, and robustness, are unique to radiology AI and crucial in its value assessment. Additionally, value assessment should already be included early in the design stage to determine the potential impact and subsequent requirements of the AI tool. Such early assessment should be systematic, transparent, and practical to ensure that all stakeholders and value aspects are considered. Hence, early value-based development, incorporating early HTA, will lead to more valuable AI tools and thus facilitate translation to clinical practice. CLINICAL RELEVANCE STATEMENT: This paper advocates the use of early value-based assessments. These assessments promote a comprehensive evaluation of how an AI tool in development can provide value in clinical practice and thus help improve the quality of these tools and the clinical processes they support. KEY POINTS: Value in radiology AI should be understood as a comprehensive term that includes health technology assessment domains and AI-specific domains. Incorporating an early health technology assessment for radiology AI during development will lead to more valuable radiology AI tools. Comprehensive and transparent value assessment of radiology AI tools is essential for their widespread adoption.
Affiliation(s)
- Erik H M Kemper
  - Department of Radiology and Nuclear Medicine, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Hendrik Erenstein
  - Department of Medical Imaging and Radiation Therapy, The Hanze University of Applied Sciences, Groningen, The Netherlands
  - Department of Radiotherapy, University of Groningen, University Medical Centre Groningen, Groningen, The Netherlands
  - Research Group Healthy Ageing, Allied Health Care and Nursing, The Hanze University of Applied Sciences, Groningen, The Netherlands
- Bart-Jan Boverhof
  - Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Ken Redekop
  - Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Matthias Dietzel
  - Department of Radiology, University Hospital Erlangen, Erlangen, Germany
- Kevin B W Groot Lipman
  - Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
  - Department of Thoracic Oncology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Merel Huisman
  - Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen, The Netherlands
- Michail E Klontzas
  - Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
  - Division of Radiology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
  - Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
- Frans Vos
  - Department of Radiology and Nuclear Medicine, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
  - Department of Imaging Physics, Delft University of Technology, Delft, The Netherlands
- Maarten IJzerman
  - Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Martijn P A Starmans
  - Department of Radiology and Nuclear Medicine, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
  - Department of Pathology, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Jacob J Visser
  - Department of Radiology and Nuclear Medicine, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
2. Nittas V, Ormond KE, Vayena E, Blasimme A. Realizing the promise of machine learning in precision oncology: expert perspectives on opportunities and challenges. BMC Cancer 2025; 25:276. PMID: 39962436; PMCID: PMC11834663; DOI: 10.1186/s12885-025-13621-2.
Abstract
BACKGROUND The ability of machine learning (ML) to process and learn from large quantities of heterogeneous patient data is gaining attention in the precision oncology community. Some remarkable developments have taken place in the domain of image classification tasks in areas such as digital pathology and diagnostic radiology. The application of ML approaches to the analysis of DNA data, including tumor-derived genomic profiles, microRNAs, and cancer epigenetic signatures, while relatively more recent, has demonstrated some utility in identifying driver variants and molecular signatures with possible prognostic and therapeutic applications. METHODS We conducted semi-structured interviews with academic and clinical experts to capture the status quo, challenges, opportunities, ethical implications, and future directions. RESULTS Our participants agreed that machine learning in precision oncology is in its infancy, with clinical integration still rare. Overall, participants equated ongoing developments with better clinical workflows and improved treatment decisions for more cancer patients. They underscored the ability of machine learning to tackle the dynamic nature of cancer, break down the complexity of molecular data, and support decision-making. Our participants emphasized obstacles related to molecular data access, clinical utility, and guidelines. The availability of reliable and well-curated data to train and validate machine learning algorithms, together with the integration of multiple data sources, was described as a constraint yet a necessity for future clinical implementation. Frequently mentioned ethical challenges included privacy risks, equity, explainability, trust, and incidental findings, with privacy being the most polarizing. While participants recognized the issue of hype surrounding machine learning in precision oncology, they agreed that, in an assistive role, it represents the future of precision oncology. CONCLUSIONS Given the unique nature of medical AI, our findings highlight the field's potential and remaining challenges. ML will continue to advance cancer research and provide opportunities for patient-centric, personalized, and efficient precision oncology. Yet, the field must move beyond hype and toward concrete efforts to overcome key obstacles, such as ensuring access to molecular data, establishing clinical utility, developing guidelines and regulations, and meaningfully addressing ethical challenges.
Affiliation(s)
- Vasileios Nittas
  - Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Hirschengraben 84, Zurich, 8001, Switzerland
  - Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Hottingerstrasse 10, Zurich, 8092, Switzerland
- Kelly E Ormond
  - Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Hottingerstrasse 10, Zurich, 8092, Switzerland
- Effy Vayena
  - Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Hottingerstrasse 10, Zurich, 8092, Switzerland
- Alessandro Blasimme
  - Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Hottingerstrasse 10, Zurich, 8092, Switzerland
3. Bélisle-Pipon JC. Commentary: Implications of causality in artificial intelligence. Why Causal AI is easier said than done. Front Artif Intell 2025; 7:1488359. PMID: 39911917; PMCID: PMC11794494; DOI: 10.3389/frai.2024.1488359.
4. Tsekea S, Mandoga E. The ethics of artificial intelligence use in university libraries in Zimbabwe. Front Res Metr Anal 2025; 9:1522423. PMID: 39897937; PMCID: PMC11782261; DOI: 10.3389/frma.2024.1522423.
Abstract
Introduction The emergence of artificial intelligence (AI) has revolutionised higher education teaching and learning. AI has the power to analyse large amounts of data and make intelligent predictions, thus changing entire teaching and learning processes. However, this rise has led institutions to question the morality of these applications. The changes have left librarians and educators worried about major ethical questions surrounding privacy, equality of information, protection of intellectual property, cheating, misinformation, and job security. Libraries have always been concerned about ethics, and many go out of their way to make sure communities are educated about these ethical questions. However, the emergence of artificial intelligence has caught them unaware. Methods This research investigates the preparedness of higher education librarians to support the ethical use of information within the higher and tertiary education fraternity. A qualitative approach was used for this study. Interviews were conducted with thirty purposively selected librarians and academics from universities in Zimbabwe. Results Findings indicated that many university libraries in Zimbabwe are still at the adoption stage of artificial intelligence. It was also found that institutions and libraries are not yet prepared for AI use and are still crafting policies on its use. Discussion Libraries seem prepared to adopt AI. They are also prepared to offer training on how to protect intellectual property, but they face serious challenges in issues of transparency, data security, plagiarism detection, and concerns about job losses. However, with no major ethical policies having been crafted on AI use, it remains challenging for libraries to fully adopt it.
Affiliation(s)
- Stephen Tsekea
  - Department of Information Science and Records Management, Zimbabwe Open University, Harare, Zimbabwe
- Edward Mandoga
  - Department of Teacher Development, Zimbabwe Open University, Harare, Zimbabwe
5. Uwimana A, Gnecco G, Riccaboni M. Artificial intelligence for breast cancer detection and its health technology assessment: A scoping review. Comput Biol Med 2025; 184:109391. PMID: 39579663; DOI: 10.1016/j.compbiomed.2024.109391.
Abstract
BACKGROUND Recent healthcare advancements highlight the potential of Artificial Intelligence (AI) - and especially, among its subfields, Machine Learning (ML) - in enhancing Breast Cancer (BC) clinical care, leading to improved patient outcomes and increased radiologists' efficiency. While medical imaging techniques have significantly contributed to BC detection and diagnosis, their synergy with AI algorithms has consistently demonstrated superior diagnostic accuracy, reduced False Positives (FPs), and enabled personalized treatment strategies. Despite the burgeoning enthusiasm for leveraging AI for early and effective BC clinical care, its widespread integration into clinical practice is yet to be realized, and the evaluation of AI-based health technologies in terms of health and economic outcomes remains an ongoing endeavor. OBJECTIVES This scoping review aims to investigate AI (and especially ML) applications that have been implemented and evaluated across diverse clinical tasks or decisions in breast imaging and to explore the current state of evidence concerning the assessment of AI-based technologies for BC clinical care within the context of Health Technology Assessment (HTA). METHODS We conducted a systematic literature search following the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) checklist in PubMed and Scopus to identify relevant studies on AI (and particularly ML) applications in BC detection and diagnosis. We limited our search to studies published from January 2015 to October 2023. The Minimum Information about CLinical Artificial Intelligence Modeling (MI-CLAIM) checklist was used to assess the quality of AI algorithm development, evaluation, and reporting in the reviewed articles. The HTA Core Model® was also used to analyze the comprehensiveness, robustness, and reliability of the reported results and evidence in AI-system evaluations, to ensure rigorous assessment of AI systems' utility and cost-effectiveness in clinical practice. RESULTS Of the 1652 initially identified articles, 104 were deemed eligible for inclusion in the review. Most studies examined the clinical effectiveness of AI-based systems (78.84%, n = 82), with one study focusing on safety in clinical settings and 13.46% (n = 14) focusing on patients' benefits. Of the studies, 31.73% (n = 33) were ethically approved to be carried out in clinical practice, whereas 25% (n = 26) evaluated AI systems legally approved for clinical use. Notably, none of the studies addressed the organizational implications of AI systems in clinical practice. Of the 104 studies, only two focused on cost-effectiveness analysis, and these were analyzed separately. The average percentage scores for the first 102 AI-based studies' quality assessment based on the MI-CLAIM checklist criteria were 84.12%, 83.92%, 83.98%, 74.51%, and 14.7% for study design, data and optimization, model performance, model examination, and reproducibility, respectively. Notably, 20.59% (n = 21) of these studies relied on large-scale representative real-world breast screening datasets, with only 10.78% (n = 11) demonstrating the robustness and generalizability of the evaluated AI systems. CONCLUSION In bridging the gap between cutting-edge developments and seamless integration of AI systems into clinical workflows, persistent challenges encompass data quality and availability, ethical and legal considerations, robustness and trustworthiness, scalability, and alignment with existing radiologists' workflows. These hurdles impede the synthesis of comprehensive, robust, and reliable evidence to substantiate these systems' clinical utility, relevance, and cost-effectiveness in real-world clinical workflows. Consequently, evaluating AI-based health technologies through established HTA methodologies becomes complicated. We also highlight the potentially significant influence on AI systems' effectiveness of various factors, such as operational dynamics, organizational structure, the application context of AI systems, and practices in breast screening or examination reading with AI support tools in radiology. Furthermore, we emphasize substantial reciprocal influences on decision-making processes between AI systems and radiologists. Thus, we advocate for an adapted assessment framework specifically designed to address these potential influences on AI systems' effectiveness, mainly addressing system-level transformative implications of AI systems rather than focusing solely on technical performance and task-level evaluations.
Affiliation(s)
- Massimo Riccaboni
  - IMT School for Advanced Studies, Lucca, Italy
  - IUSS University School for Advanced Studies, Pavia, Italy
6. Bélisle-Pipon JC. Why we need to be careful with LLMs in medicine. Front Med (Lausanne) 2024; 11:1495582. PMID: 39697212; PMCID: PMC11652181; DOI: 10.3389/fmed.2024.1495582.
7. Di Bidino R, Daugbjerg S, Papavero SC, Haraldsen IH, Cicchetti A, Sacchini D. Health technology assessment framework for artificial intelligence-based technologies. Int J Technol Assess Health Care 2024; 40:e61. PMID: 39568412; DOI: 10.1017/s0266462324000308.
Abstract
OBJECTIVES Artificial intelligence (AI)-based health technologies (AIHTs) have already been applied in clinical practice. However, there is currently no standardized framework for evaluating them based on the principles of health technology assessment (HTA). METHODS A two-round Delphi survey was distributed to a panel of experts to determine the significance of incorporating topics outlined in the EUnetHTA Core Model and twenty additional ones identified through literature reviews. Each panelist assigned scores to each topic. Topics were categorized as critical to include (scores 7-9), important but not critical (scores 4-6), and not important (scores 1-3). A 70 percent cutoff was used to determine high agreement. RESULTS Our panel of 46 experts indicated that 48 out of the 65 proposed topics are critical and should be included in an HTA framework for AIHTs. Among the ten most crucial topics, the following emerged: accuracy of the AI model (97.78 percent), patient safety (95.65 percent), benefit-harm balance evaluated from an ethical standpoint (95.56 percent), and bias in data (91.30 percent). Importantly, our findings highlight that the Core Model is insufficient in capturing all relevant topics for AI-based technologies, as 14 out of the additional 20 topics were identified as crucial. CONCLUSION It is imperative to determine the level of agreement on AI-relevant HTA topics to establish a robust assessment framework. This framework will play a foundational role in evaluating AI tools for the early diagnosis of dementia, which is the focus of the European project AI-Mind currently being developed.
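To make the categorisation rule described in this abstract concrete, the short sketch below applies it to hypothetical panel scores: for each topic it computes the share of panellists scoring 7-9 and flags topics reaching the 70 percent agreement cutoff as critical. The topic names and scores are invented for illustration, and the original study's exact aggregation rules may differ.

```python
# Minimal sketch of the Delphi categorisation rule described above (hypothetical data).
# A topic counts as "critical" for a panellist if scored 7-9; topics where at least
# 70% of panellists rate it critical are flagged for inclusion in the HTA framework.

CRITICAL_RANGE = range(7, 10)   # scores 7-9
CUTOFF = 0.70                   # 70% agreement threshold

# Invented example scores from a small panel (1-9 scale per topic).
panel_scores = {
    "Accuracy of the AI model": [9, 8, 9, 7, 9, 8, 9, 6, 9, 8],
    "Patient safety":           [9, 9, 8, 7, 8, 9, 7, 9, 8, 9],
    "Bias in data":             [8, 7, 9, 5, 8, 9, 7, 6, 9, 8],
    "Vendor business model":    [4, 5, 3, 6, 5, 4, 7, 3, 5, 4],
}

for topic, scores in panel_scores.items():
    share_critical = sum(s in CRITICAL_RANGE for s in scores) / len(scores)
    verdict = "include (critical)" if share_critical >= CUTOFF else "do not include"
    print(f"{topic:28s} {share_critical:5.0%}  -> {verdict}")
```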
Affiliation(s)
- Rossella Di Bidino
  - Graduate School of Health Economics and Management, Universita Cattolica del Sacro Cuore (ALTEMS), 00168 Rome, Italy
  - Department of Health Technologies and Innovation, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
- Signe Daugbjerg
  - Graduate School of Health Economics and Management, Universita Cattolica del Sacro Cuore (ALTEMS), 00168 Rome, Italy
- Sara C Papavero
  - Graduate School of Health Economics and Management, Universita Cattolica del Sacro Cuore (ALTEMS), 00168 Rome, Italy
- Ira H Haraldsen
  - Department of Neurology, Division of Clinical Neuroscience, Oslo University Hospital, Norway
- Americo Cicchetti
  - Directorate-General for Health Programming, Ministry of Health, Italy
- Dario Sacchini
  - Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
  - Department of Healthcare Surveillance and Bioethics, Universita Cattolica del Sacro Cuore, 00168 Rome, Italy
8. Goodman C, Treloar E. Is health technology assessment ready for generative pretrained transformer large language models? Report of a fishbowl inquiry. Int J Technol Assess Health Care 2024; 40:e48. PMID: 39498482; PMCID: PMC11569908; DOI: 10.1017/s0266462324000382.
Abstract
OBJECTIVES The Health Technology Assessment International (HTAi) 2023 Annual Meeting included a novel "fishbowl" session intended to 1) probe the role of HTA in the emergence of generative pretrained transformer (GPT) large language models (LLMs) into health care and 2) demonstrate the semistructured, interactive fishbowl process applied to an emerging "hot topic" by diverse international participants. METHODS The fishbowl process is a format for conducting medium-to-large group discussions. Participants are separated into an inner group and an outer group on the periphery. The inner group responds to a set of questions, whereas the outer group listens actively. During the session, participants voluntarily enter and leave the inner group. The questions for this fishbowl were: What are current and potential future applications of GPT LLMs in health care? How can HTA assess intended and unintended impacts of GPT LLM applications in health care? How might GPT be used to improve HTA methodology? RESULTS Participants offered approximately sixty responses across the three questions. Among the prominent themes were: improving operational efficiency, terminology and language, training and education, evidence synthesis, detecting and minimizing biases, stakeholder engagement, and recognizing and accounting for ethical, legal, and social implications. CONCLUSIONS The interactive fishbowl format enabled the sharing of real-time input on how GPT LLMs and related disruptive technologies will influence what technologies will be assessed, how they will be assessed, and how they might be used to improve HTA. It offers novel perspectives from the HTA community and aligns with certain aspects of ongoing HTA and evidence framework development.
Affiliation(s)
- Clifford Goodman
  - Independent Consultant, Health Care Technology & Policy, Bethesda, MD, USA
- Ellie Treloar
  - Discipline of Surgery, University of Adelaide, Adelaide, SA, Australia
9. Rhim J, Gallois H, Ravitsky V, Bélisle-Pipon JC. Beyond Consent: The MAMLS in the Room. Am J Bioeth 2024; 24:85-88. PMID: 39283388; DOI: 10.1080/15265161.2024.2388737.
10. Gurnani B, Kaur K, Lalgudi VG, Kundu G, Mimouni M, Liu H, Jhanji V, Prakash G, Roy AS, Shetty R, Gurav JS. Role of artificial intelligence, machine learning and deep learning models in corneal disorders - A narrative review. J Fr Ophtalmol 2024; 47:104242. PMID: 39013268; DOI: 10.1016/j.jfo.2024.104242.
Abstract
In the last decade, artificial intelligence (AI) has significantly impacted ophthalmology, particularly in managing corneal diseases, a major reversible cause of blindness. This review explores AI's transformative role in the corneal subspecialty, which has adopted advanced technology for superior clinical judgment, early diagnosis, and personalized therapy. While AI's role in anterior segment diseases is less documented compared to glaucoma and retinal pathologies, this review highlights its integration into corneal diagnostics through imaging techniques like slit-lamp biomicroscopy, anterior segment optical coherence tomography (AS-OCT), and in vivo confocal biomicroscopy. AI has been pivotal in refining decision-making and prognosis for conditions such as keratoconus, infectious keratitis, and dystrophies. Multi-disease deep learning neural networks (MDDNs) have shown diagnostic ability in classifying corneal diseases using AS-OCT images, achieving notable metrics like an AUC of 0.910. AI's progress over two decades has significantly improved the accuracy of diagnosing conditions like keratoconus and microbial keratitis. For instance, AI has achieved a 90.7% accuracy rate in classifying bacterial and fungal keratitis and an AUC of 0.910 in differentiating various corneal diseases. Convolutional neural networks (CNNs) have enhanced the analysis of color-coded corneal maps, yielding up to 99.3% diagnostic accuracy for keratoconus. Deep learning algorithms have also shown robust performance in detecting fungal hyphae on in vivo confocal microscopy, with precise quantification of hyphal density. AI models combining tomography scans and visual acuity have demonstrated up to 97% accuracy in keratoconus staging according to the Amsler-Krumeich classification. However, the review acknowledges the limitations of current AI models, including their reliance on binary classification, which may not capture the complexity of real-world clinical presentations with multiple coexisting disorders. Challenges also include dependency on data quality, diverse imaging protocols, and integrating multimodal images for a generalized AI diagnosis. The need for interpretability in AI models is emphasized to foster trust and applicability in clinical settings. Looking ahead, AI has the potential to unravel the intricate mechanisms behind corneal pathologies, reduce healthcare's carbon footprint, and revolutionize diagnostic and management paradigms. Ethical and regulatory considerations will accompany AI's clinical adoption, marking an era where AI not only assists but augments ophthalmic care.
Affiliation(s)
- B Gurnani
  - Department of Cataract, Cornea, External Disease, Trauma, Ocular Surface and Refractive Surgery, ASG Eye Hospital, Jodhpur, Rajasthan, India
- K Kaur
  - Department of Cataract, Pediatric Ophthalmology and Strabismus, ASG Eye Hospital, Jodhpur, Rajasthan, India
- V G Lalgudi
  - Department of Cornea, Refractive Surgery, Ira G Ross Eye Institute, Jacobs School of Medicine and Biomedical Sciences, State University of New York (SUNY), Buffalo, USA
- G Kundu
  - Department of Cornea and Refractive Surgery, Narayana Nethralaya, Bangalore, India
- M Mimouni
  - Department of Ophthalmology, Rambam Health Care Campus affiliated with the Bruce and Ruth Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel
- H Liu
  - Department of Ophthalmology, University of Ottawa Eye Institute, Ottawa, Canada
- V Jhanji
  - UPMC Eye Center, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- G Prakash
  - Department of Ophthalmology, School of Medicine, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- A S Roy
  - Narayana Nethralaya Foundation, Bangalore, India
- R Shetty
  - Department of Cornea and Refractive Surgery, Narayana Nethralaya, Bangalore, India
- J S Gurav
  - Department of Ophthalmology, Armed Forces Medical College, Pune, India
11. Misra R, Keane PA, Hogg HDJ. How should we train clinicians for artificial intelligence in healthcare? Future Healthc J 2024; 11:100162. PMID: 39371537; PMCID: PMC11452832; DOI: 10.1016/j.fhj.2024.100162.
Affiliation(s)
- Rohan Misra
  - West Hertfordshire Teaching Hospitals NHS Trust, Watford, UK
- Pearse A. Keane
  - Institute of Ophthalmology, University College London, London, UK
  - Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Henry David Jeffry Hogg
  - Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
  - University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
12. Farah L, Borget I, Martelli N, Vallee A. Suitability of the Current Health Technology Assessment of Innovative Artificial Intelligence-Based Medical Devices: Scoping Literature Review. J Med Internet Res 2024; 26:e51514. PMID: 38739911; PMCID: PMC11130781; DOI: 10.2196/51514.
Abstract
BACKGROUND Artificial intelligence (AI)-based medical devices have garnered attention due to their ability to revolutionize medicine, but a health technology assessment framework suited to these devices is lacking. OBJECTIVE This study aims to analyze the suitability of each health technology assessment (HTA) domain for the assessment of AI-based medical devices. METHODS We conducted a scoping literature review following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology. We searched databases (PubMed, Embase, and Cochrane Library), gray literature, and HTA agency websites. RESULTS A total of 10.1% (78/775) of the references were included. Data quality and integration are vital aspects to consider when describing and assessing the technical characteristics of AI-based medical devices during an HTA process. When it comes to implementing specialized HTA for AI-based medical devices, several practical challenges and potential barriers should be taken into account (AI technological evolution timeline, data requirements, complexity and transparency, clinical validation and safety requirements, regulatory and ethical considerations, and economic evaluation). CONCLUSIONS Adapting the HTA process through a methodological framework for AI-based medical devices enhances the comparability of results across different evaluations and jurisdictions. By defining the necessary expertise, the framework supports the development of a skilled workforce capable of conducting robust and reliable HTAs of AI-based medical devices. A comprehensive, adapted HTA framework can provide valuable insights into the effectiveness, cost-effectiveness, and societal impact of AI-based medical devices, guiding their responsible implementation and maximizing their benefits for patients and health care systems.
Affiliation(s)
- Line Farah
  - Innovation Center for Medical Devices Department, Foch Hospital, Suresnes, France
  - Groupe de Recherche et d'accueil en Droit et Economie de la Santé Department, University Paris-Saclay, Orsay, France
- Isabelle Borget
  - Groupe de Recherche et d'accueil en Droit et Economie de la Santé Department, University Paris-Saclay, Orsay, France
  - Department of Biostatistics and Epidemiology, Gustave Roussy, University Paris-Saclay, Villejuif, France
  - Oncostat U1018, Inserm, Équipe Labellisée Ligue Contre le Cancer, University Paris-Saclay, Villejuif, France
- Nicolas Martelli
  - Groupe de Recherche et d'accueil en Droit et Economie de la Santé Department, University Paris-Saclay, Orsay, France
  - Pharmacy Department, Georges Pompidou European Hospital, Paris, France
- Alexandre Vallee
  - Department of Epidemiology and Public Health, Foch Hospital, Suresnes, France
13. Cecchi R, Haja TM, Calabrò F, Fasterholdt I, Rasmussen BSB. Artificial intelligence in healthcare: why not apply the medico-legal method starting with the Collingridge dilemma? Int J Legal Med 2024; 138:1173-1178. PMID: 38172326; DOI: 10.1007/s00414-023-03152-5.
Abstract
Technology has greatly influenced and radically changed human life, from communication to creativity and from productivity to entertainment. The authors, starting from considerations concerning the implementation of new technologies with a strong impact on people's everyday lives, take up Collingridge's dilemma and relate it to the application of AI in healthcare. Collingridge's dilemma is an ethical and epistemological problem concerning the relationship between technology and society, which involves two approaches. The proactive approach and socio-technological experimentation considered in the dilemma are discussed, the former taking health technology assessment (HTA) processes as a reference and the latter the AI studies conducted so far. As a possible way of preventing the critical issues raised, the use of the medico-legal method is proposed, which classically lies between the prevention of possible adverse events and the reconstruction of how these occurred. The authors believe that this methodology, adopted as a European guideline in the medico-legal field for the assessment of medical liability, can be adapted to AI applied to the healthcare scenario and used for the assessment of liability issues. The topic deserves further investigation and will certainly be taken into consideration as a possible key to future scenarios.
Affiliation(s)
- Rossana Cecchi
  - Laboratory of Forensic Medicine, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Tudor Mihai Haja
  - Laboratory of Forensic Medicine, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Francesco Calabrò
  - Laboratory of Forensic Medicine, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Iben Fasterholdt
  - CIMT - Centre for Innovative Medical Technology, Odense University Hospital, Odense, Denmark
  - Program for Health System and Technology Evaluation, Toronto General Hospital Research Institute, University Health Network, Toronto, Canada
- Benjamin S B Rasmussen
  - Department of Radiology & CAI-X - Centre for Clinical Artificial Intelligence, Odense University Hospital, Odense, Denmark
14. Reason T, Rawlinson W, Langham J, Gimblett A, Malcolm B, Klijn S. Artificial Intelligence to Automate Health Economic Modelling: A Case Study to Evaluate the Potential Application of Large Language Models. Pharmacoecon Open 2024; 8:191-203. PMID: 38340276; PMCID: PMC10884386; DOI: 10.1007/s41669-024-00477-8.
Abstract
BACKGROUND Current generation large language models (LLMs) such as Generative Pre-Trained Transformer 4 (GPT-4) have achieved human-level performance on many tasks including the generation of computer code based on textual input. This study aimed to assess whether GPT-4 could be used to automatically programme two published health economic analyses. METHODS The two analyses were partitioned survival models evaluating interventions in non-small cell lung cancer (NSCLC) and renal cell carcinoma (RCC). We developed prompts which instructed GPT-4 to programme the NSCLC and RCC models in R, and which provided descriptions of each model's methods, assumptions and parameter values. The results of the generated scripts were compared to the published values from the original, human-programmed models. The models were replicated 15 times to capture variability in GPT-4's output. RESULTS GPT-4 fully replicated the NSCLC model with high accuracy: 100% (15/15) of the artificial intelligence (AI)-generated NSCLC models were error-free or contained a single minor error, and 93% (14/15) were completely error-free. GPT-4 closely replicated the RCC model, although human intervention was required to simplify an element of the model design (one of the model's fifteen input calculations) because it used too many sequential steps to be implemented in a single prompt. With this simplification, 87% (13/15) of the AI-generated RCC models were error-free or contained a single minor error, and 60% (9/15) were completely error-free. Error-free model scripts replicated the published incremental cost-effectiveness ratios to within 1%. CONCLUSION This study provides a promising indication that GPT-4 can have practical applications in the automation of health economic model construction. Potential benefits include accelerated model development timelines and reduced costs of development. Further research is necessary to explore the generalisability of LLM-based automation across a larger sample of models.
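As a rough illustration of the verification step described in this abstract, the sketch below compares incremental cost-effectiveness ratios (ICERs) from several replications of a generated model against a published reference value and counts how many fall within a 1% relative tolerance. All numbers are invented; the original study compared full partitioned survival model outputs rather than a single summary figure.

```python
# Illustrative check of replicated model outputs against a published value (invented numbers).
# ICER = (incremental cost) / (incremental QALYs); a replication "matches" if its ICER is
# within 1% of the published reference, loosely mirroring the comparison described above.

def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost-effectiveness ratio in cost per QALY gained."""
    return delta_cost / delta_qaly

published_icer = icer(delta_cost=42_000.0, delta_qaly=0.85)  # hypothetical reference model

# Hypothetical (cost, QALY) increments from 15 AI-generated replications of the same model.
replications = [
    (42_010.0, 0.85), (41_950.0, 0.85), (42_000.0, 0.85), (42_300.0, 0.85),
    (42_000.0, 0.84), (41_990.0, 0.85), (42_020.0, 0.85), (43_500.0, 0.85),
    (42_000.0, 0.85), (41_980.0, 0.85), (42_050.0, 0.85), (42_000.0, 0.86),
    (42_005.0, 0.85), (41_995.0, 0.85), (42_100.0, 0.85),
]

TOLERANCE = 0.01  # 1% relative deviation
matches = 0
for delta_cost, delta_qaly in replications:
    rel_dev = abs(icer(delta_cost, delta_qaly) - published_icer) / published_icer
    matches += rel_dev <= TOLERANCE

print(f"{matches}/{len(replications)} replications reproduce the published ICER within 1%")
```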
Affiliation(s)
- Tim Reason
  - Estima Scientific, Mediaworks, 191 Wood Ln, London, W12 7FP, UK
- Julia Langham
  - Estima Scientific, Mediaworks, 191 Wood Ln, London, W12 7FP, UK
- Andy Gimblett
  - Estima Scientific, Mediaworks, 191 Wood Ln, London, W12 7FP, UK
- Sven Klijn
  - Bristol Myers Squibb, Princeton, NJ, USA
15. Carapinha JL, Botes D, Carapinha R. Balancing innovation and ethics in AI governance for health technology assessment. J Med Econ 2024; 27:754-757. PMID: 38711204; DOI: 10.1080/13696998.2024.2352821.
Affiliation(s)
- João L Carapinha
  - Syenza, Anaheim, CA, USA
  - Northeastern University School of Pharmacy, Boston, MA, USA
- Danélia Botes
  - Health Economics and Outcomes Research Division, Syenza, Pretoria, South Africa
- René Carapinha
  - Dynamic Intelligence Division, Syenza, Andorra la Vella, Andorra
16. Bélisle-Pipon JC, Powell M, English R, Malo MF, Ravitsky V, Bridge2AI-Voice Consortium, Bensoussan Y. Stakeholder perspectives on ethical and trustworthy voice AI in health care. Digit Health 2024; 10:20552076241260407. PMID: 39055787; PMCID: PMC11271113; DOI: 10.1177/20552076241260407.
Abstract
Objective Voice as a health biomarker using artificial intelligence (AI) is gaining momentum in research. The noninvasiveness of voice data collection through accessible technology (such as smartphones, telehealth, and ambient recordings) or within clinical contexts means voice AI may help address health disparities and promote the inclusion of marginalized communities. However, the development of AI-ready voice datasets free from bias and discrimination is a complex task. The objective of this study is to better understand the perspectives of engaged and interested stakeholders regarding ethical and trustworthy voice AI, to inform both further ethical inquiry and technology innovation. Methods A questionnaire was administered to voice AI experts, clinicians, scholars, patients, trainees, and policy-makers who participated at the 2023 Voice AI Symposium organized by the Bridge2AI-Voice AI Consortium. The survey used a mix of Likert scale, ranking and open-ended questions. A total of 27 stakeholders participated in the study. Results The main results of the study are the identification of priorities in terms of ethical issues, an initial definition of ethically sourced data for voice AI, insights into the use of synthetic voice data, and proposals for acting on the trustworthiness of voice AI. The study shows a diversity of perspectives and adds nuance to the planning and development of ethical and trustworthy voice AI. Conclusions This study represents the first stakeholder survey related to voice as a biomarker of health published to date. This study sheds light on the critical importance of ethics and trustworthiness in the development of voice AI technologies for health applications.
Affiliation(s)
- Maria Powell
  - Vanderbilt University Medical Center, Department of Otolaryngology-Head & Neck Surgery, Nashville, TN, USA
- Renee English
  - Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada
- Vardit Ravitsky
  - Hastings Center, Garrison, NY, USA
  - Department of Global Health and Social Medicine, Harvard University, Cambridge, MA, USA
- Yael Bensoussan
  - Department of Otolaryngology-Head & Neck Surgery, University of South Florida, Tampa, FL, USA
17. Miró Catalina Q, Femenia J, Fuster-Casanovas A, Marin-Gomez FX, Escalé-Besa A, Solé-Casals J, Vidal-Alaball J. Knowledge and Perception of the Use of AI and its Implementation in the Field of Radiology: Cross-Sectional Study. J Med Internet Res 2023; 25:e50728. PMID: 37831495; PMCID: PMC10612005; DOI: 10.2196/50728.
Abstract
BACKGROUND Artificial Intelligence (AI) has been developing for decades, but in recent years its use in the field of health care has experienced an exponential increase. Currently, there is little doubt that these tools have transformed clinical practice. Therefore, it is important to know how the population perceives its implementation to be able to propose strategies for acceptance and implementation and to improve or prevent problems arising from future applications. OBJECTIVE This study aims to describe the population's perception and knowledge of the use of AI as a health support tool and its application to radiology through a validated questionnaire, in order to develop strategies aimed at increasing acceptance of AI use, reducing possible resistance to change and identifying possible sociodemographic factors related to perception and knowledge. METHODS A cross-sectional observational study was conducted using an anonymous and voluntarily validated questionnaire aimed at the entire population of Catalonia aged 18 years or older. The survey addresses 4 dimensions defined to describe users' perception of the use of AI in radiology, (1) "distrust and accountability," (2) "personal interaction," (3) "efficiency," and (4) "being informed," all with questions in a Likert scale format. Results closer to 5 refer to a negative perception of the use of AI, while results closer to 1 express a positive perception. Univariate and bivariate analyses were performed to assess possible associations between the 4 dimensions and sociodemographic characteristics. RESULTS A total of 379 users responded to the survey, with an average age of 43.9 (SD 17.52) years and 59.8% (n=226) of them identified as female. In addition, 89.8% (n=335) of respondents indicated that they understood the concept of AI. Of the 4 dimensions analyzed, "distrust and accountability" obtained a mean score of 3.37 (SD 0.53), "personal interaction" obtained a mean score of 4.37 (SD 0.60), "efficiency" obtained a mean score of 3.06 (SD 0.73) and "being informed" obtained a mean score of 3.67 (SD 0.57). In relation to the "distrust and accountability" dimension, women, people older than 65 years, the group with university studies, and the population that indicated not understanding the AI concept had significantly more distrust in the use of AI. On the dimension of "being informed," it was observed that the group with university studies rated access to information more positively and those who indicated not understanding the concept of AI rated it more negatively. CONCLUSIONS The majority of the sample investigated reported being familiar with the concept of AI, with varying degrees of acceptance of its implementation in radiology. It is clear that the most conflictive dimension is "personal interaction," whereas "efficiency" is where there is the greatest acceptance, being the dimension in which there are the best expectations for the implementation of AI in radiology.
Affiliation(s)
- Queralt Miró Catalina
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
- Joaquim Femenia
  - Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
- Aïna Fuster-Casanovas
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
- Francesc X Marin-Gomez
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
- Anna Escalé-Besa
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
  - Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
- Jordi Solé-Casals
  - Data and Signal Processing Group, Faculty of Science, Technology and Engineering, University of Vic-Central University of Catalonia, Vic, Spain
  - Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
- Josep Vidal-Alaball
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
  - Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
18. Cresswell K, Rigby M, Magrabi F, Scott P, Brender J, Craven CK, Wong ZSY, Kukhareva P, Ammenwerth E, Georgiou A, Medlock S, De Keizer NF, Nykänen P, Prgomet M, Williams R. The need to strengthen the evaluation of the impact of Artificial Intelligence-based decision support systems on healthcare provision. Health Policy 2023; 136:104889. PMID: 37579545; DOI: 10.1016/j.healthpol.2023.104889.
Abstract
Despite the renewed interest in Artificial Intelligence-based clinical decision support systems (AI-CDS), there is still a lack of empirical evidence supporting their effectiveness. This underscores the need for rigorous and continuous evaluation and monitoring of processes and outcomes associated with the introduction of health information technology. We illustrate how the emergence of AI-CDS has helped to bring to the fore the critical importance of evaluation principles and action regarding all health information technology applications, as these hitherto have received limited attention. Key aspects include assessment of design, implementation and adoption contexts; ensuring systems support and optimise human performance (which in turn requires understanding clinical and system logics); and ensuring that design of systems prioritises ethics, equity, effectiveness, and outcomes. Going forward, information technology strategy, implementation and assessment need to actively incorporate these dimensions. International policy makers, regulators and strategic decision makers in implementing organisations therefore need to be cognisant of these aspects and incorporate them in decision-making and in prioritising investment. In particular, the emphasis needs to be on stronger and more evidence-based evaluation surrounding system limitations and risks as well as optimisation of outcomes, whilst ensuring learning and contextual review. Otherwise, there is a risk that applications will be sub-optimally embodied in health systems with unintended consequences and without yielding intended benefits.
Affiliation(s)
- Kathrin Cresswell
  - The University of Edinburgh, Usher Institute, Edinburgh, United Kingdom
- Michael Rigby
  - Keele University, School of Social, Political and Global Studies and School of Primary, Community and Social Care, Keele, United Kingdom
- Farah Magrabi
  - Macquarie University, Australian Institute of Health Innovation, Sydney, Australia
- Philip Scott
  - University of Wales Trinity Saint David, Swansea, United Kingdom
- Jytte Brender
  - Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Catherine K Craven
  - University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
- Zoie Shui-Yee Wong
  - St. Luke's International University, Graduate School of Public Health, Tokyo, Japan
- Polina Kukhareva
  - Department of Biomedical Informatics, University of Utah, United States of America
- Elske Ammenwerth
  - UMIT TIROL, Private University for Health Sciences and Health Informatics, Institute of Medical Informatics, Hall in Tirol, Austria
- Andrew Georgiou
  - Macquarie University, Australian Institute of Health Innovation, Sydney, Australia
- Stephanie Medlock
  - Amsterdam UMC location University of Amsterdam, Department of Medical Informatics, Meibergdreef 9, Amsterdam, the Netherlands
  - Amsterdam Public Health Research Institute, Digital Health and Quality of Care, Amsterdam, the Netherlands
- Nicolette F De Keizer
  - Amsterdam UMC location University of Amsterdam, Department of Medical Informatics, Meibergdreef 9, Amsterdam, the Netherlands
  - Amsterdam Public Health Research Institute, Digital Health and Quality of Care, Amsterdam, the Netherlands
- Pirkko Nykänen
  - Tampere University, Faculty for Information Technology and Communication Sciences, Finland
- Mirela Prgomet
  - Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Robin Williams
  - The University of Edinburgh, Institute for the Study of Science, Technology and Innovation, Edinburgh, United Kingdom
19. Victor G, Bélisle-Pipon JC, Ravitsky V. Generative AI, Specific Moral Values: A Closer Look at ChatGPT's New Ethical Implications for Medical AI. Am J Bioeth 2023; 23:65-68. PMID: 37812098; PMCID: PMC10575680; DOI: 10.1080/15265161.2023.2250311.
20. Bélisle-Pipon JC, Ravitsky V, Bensoussan Y. Individuals and (Synthetic) Data Points: Using Value-Sensitive Design to Foster Ethical Deliberations on Epistemic Transitions. Am J Bioeth 2023; 23:69-72. PMID: 37647464; PMCID: PMC11278678; DOI: 10.1080/15265161.2023.2237436.
21. Bouhouita-Guermech S, Gogognon P, Bélisle-Pipon JC. Specific challenges posed by artificial intelligence in research ethics. Front Artif Intell 2023; 6:1149082. PMID: 37483869; PMCID: PMC10358356; DOI: 10.3389/frai.2023.1149082.
Abstract
Background The twenty-first century is often described as the era of Artificial Intelligence (AI), which raises many questions regarding its impact on society. AI is already significantly changing many practices in different fields. Research ethics (RE) is no exception. Many challenges, including responsibility, privacy, and transparency, are encountered. Research ethics boards (REBs) have been established to ensure that ethical practices are adequately followed during research projects. This scoping review aims to bring out the challenges of AI in research ethics and to investigate whether REBs are equipped to evaluate them. Methods Three electronic databases were selected to collect peer-reviewed articles that fit the inclusion criteria (English or French, published between 2016 and 2021, containing AI, RE, and REB). Two investigators independently reviewed each article, screening with Covidence and then coding with NVivo. Results Of the 657 articles initially retrieved, 28 relevant papers formed the final sample for our scoping review. The selected literature described AI in research ethics (i.e., views on current guidelines, key ethical concepts and approaches, key issues of the current state of AI-specific RE guidelines) and REBs regarding AI (i.e., their roles, scope and approaches, key practices and processes, limitations and challenges, stakeholder perceptions). However, the literature often described REBs' ethical assessment practices for AI research projects as lacking knowledge and tools. Conclusion Ethical reflection is taking a step forward, while the adaptation of normative guidelines to the reality of AI is still lagging behind. This impacts REBs and most stakeholders involved with AI. Indeed, REBs are not sufficiently equipped to adequately evaluate AI research ethics and require standard guidelines to help them do so.
Affiliation(s)
- Jean-Christophe Bélisle-Pipon
  - School of Public Health, Université de Montréal, Montréal, QC, Canada
  - Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada
22. Walter W, Pohlkamp C, Meggendorfer M, Nadarajah N, Kern W, Haferlach C, Haferlach T. Artificial intelligence in hematological diagnostics: Game changer or gadget? Blood Rev 2023; 58:101019. PMID: 36241586; DOI: 10.1016/j.blre.2022.101019.
Abstract
The future of clinical diagnosis and treatment of hematologic diseases will inevitably involve the integration of artificial intelligence (AI)-based systems into routine practice to support the hematologists' decision making. Several studies have shown that AI-based models can already be used to automatically differentiate cells, reliably detect malignant cell populations, support chromosome banding analysis, and interpret clinical variants, contributing to early disease detection and prognosis. However, even the best tool can become useless if it is misapplied or the results are misinterpreted. Therefore, in order to comprehensively judge and correctly apply newly developed AI-based systems, the hematologist must have a basic understanding of the general concepts of machine learning. In this review, we provide the hematologist with a comprehensive overview of various machine learning techniques, their current implementations and approaches in different diagnostic subfields (e.g., cytogenetics, molecular genetics), and the limitations and unresolved challenges of the systems.
Affiliation(s)
- Wencke Walter, Christian Pohlkamp, Manja Meggendorfer, Niroshan Nadarajah, Wolfgang Kern, Claudia Haferlach, Torsten Haferlach
- MLL Munich Leukemia Laboratory, Max-Lebsche-Platz 31, 81377 München, Germany
23
Bélisle-Pipon JC, David PM. Digital Therapies (DTx) as New Tools within Physicians' Therapeutic Arsenal: Key Observations to Support their Effective and Responsible Development and Use. Pharmaceut Med 2023; 37:121-127. [PMID: 36653600 DOI: 10.1007/s40290-022-00459-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/21/2022] [Indexed: 01/20/2023]
Abstract
In recent years, there has been a swift rise in the development of digital therapies (DTx). As a result of various technological advances and increased accessibility for patients, it is now possible to develop and offer therapeutic interventions in digital form. DTx are evidence-based interventions administered digitally to prevent, manage, or treat a medical condition. What makes DTx significantly different from other types of digital applications or services (e.g., wellness applications) is that they are interventions authorised by regulatory agencies, like drugs, for the treatment of a health condition. Because DTx yield actual therapeutic benefits and sit at the crossroads of the health and digital sectors, they are subject to the upsides and downsides of both. It is therefore of particular interest to examine the facilitators and barriers to be foreseen in the development, assessment, and implementation of DTx. In this article, we present key observations and outline the main challenges that may be faced in developing DTx and integrating them into practice. DTx represent a promising avenue for physicians to bring their prescribing role into the 21st century. We conclude with broad lessons that the emerging field of DTx can learn from decades of drug industry practice, to avoid history repeating itself and to fast-track the development and the ethical and optimal use of DTx.
Affiliation(s)
- Jean-Christophe Bélisle-Pipon
- Faculty of Health Sciences, Simon Fraser University, 8888 University Drive, Burnaby, British Columbia, V5A 1S6, Canada
- Pierre-Marie David
- Faculty of Pharmacy, Université de Montréal, 2900 Blvd Edouard Montpetit, Montréal, Québec, H3T 1J4, Canada
24
Zemplényi A, Tachkov K, Balkanyi L, Németh B, Petykó ZI, Petrova G, Czech M, Dawoud D, Goettsch W, Gutierrez Ibarluzea I, Hren R, Knies S, Lorenzovici L, Maravic Z, Piniazhko O, Savova A, Manova M, Tesar T, Zerovnik S, Kaló Z. Recommendations to overcome barriers to the use of artificial intelligence-driven evidence in health technology assessment. Front Public Health 2023; 11:1088121. [PMID: 37181704 PMCID: PMC10171457 DOI: 10.3389/fpubh.2023.1088121] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 04/03/2023] [Indexed: 05/16/2023] Open
Abstract
Background Artificial intelligence (AI) has attracted much attention because of its enormous potential in healthcare, but uptake has been slow. Substantial barriers challenge health technology assessment (HTA) professionals in using AI-generated evidence from large real-world databases (e.g., claims data) for decision-making. As part of the European Commission-funded HTx H2020 (Next Generation Health Technology Assessment) project, we aimed to put forward recommendations to support healthcare decision-makers in integrating AI into HTA processes. The barriers addressed in this paper focus particularly on Central and Eastern European (CEE) countries, where the implementation of HTA and access to health databases lag behind Western Europe. Methods We constructed a survey to rank the barriers to using AI for HTA purposes, completed by respondents from CEE jurisdictions with expertise in HTA. Using the results, two members of the HTx consortium from CEE developed recommendations on the most critical barriers. These recommendations were then discussed in a workshop by a wider group of experts, including HTA and reimbursement decision-makers from both CEE and Western European countries, and summarized in a consensus report. Results Recommendations were developed to address the top 15 barriers in the areas of (1) human factor-related barriers, focusing on educating HTA doers and users, establishing collaborations, and sharing best practices; (2) regulatory and policy-related barriers, proposing increased awareness and political commitment and improved management of sensitive information for AI use; (3) data-related barriers, suggesting enhanced standardization and collaboration with data networks, management of missing and unstructured data, analytical and statistical approaches to address bias, use of quality assessment tools and quality standards, improved reporting, and better conditions for data use; and (4) technological barriers, suggesting sustainable development of AI infrastructure. Conclusion In the field of HTA, the great potential of AI to support evidence generation and evaluation has not yet been sufficiently explored and realized. Raising awareness of the intended and unintended consequences of AI-based methods and encouraging political commitment from policymakers are necessary to upgrade the regulatory and infrastructural environment and the knowledge base required to better integrate AI into HTA-based decision-making processes.
Affiliation(s)
- Antal Zemplényi
- Center for Health Technology Assessment and Pharmacoeconomics Research, Faculty of Pharmacy, University of Pécs, Pécs, Hungary
- Syreon Research Institute, Budapest, Hungary
- *Correspondence: Antal Zemplényi
- Konstantin Tachkov
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- Laszlo Balkanyi
- Medical Informatics R&D Center, Pannon University, Veszprém, Hungary
- Guenka Petrova
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- Marcin Czech
- Department of Pharmacoeconomics, Institute of Mother and Child, Warsaw, Poland
- Dalia Dawoud
- Science Policy and Research Programme, Science Evidence and Analytics Directorate, National Institute for Health and Care Excellence (NICE), London, United Kingdom
- Cairo University, Faculty of Pharmacy, Cairo, Egypt
- Wim Goettsch
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht University, Utrecht, Netherlands
- National Health Care Institute, Diemen, Netherlands
- Rok Hren
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
- Saskia Knies
- National Health Care Institute, Diemen, Netherlands
- László Lorenzovici
- Syreon Research Romania, Tirgu Mures, Romania
- G. E. Palade University of Medicine, Pharmacy, Science and Technology, Tirgu Mures, Romania
- Oresta Piniazhko
- HTA Department of State Expert Centre of the Ministry of Health of Ukraine, Kyiv, Ukraine
- Alexandra Savova
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- National Council of Prices and Reimbursement of Medicinal Products, Sofia, Bulgaria
- Manoela Manova
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- National Council of Prices and Reimbursement of Medicinal Products, Sofia, Bulgaria
- Tomas Tesar
- Department of Organisation and Management of Pharmacy, Faculty of Pharmacy, Comenius University in Bratislava, Bratislava, Slovakia
- Zoltán Kaló
- Syreon Research Institute, Budapest, Hungary
- Centre for Health Technology Assessment, Semmelweis University, Budapest, Hungary
25
Couture V, Roy MC, Dez E, Laperle S, Bélisle-Pipon JC. Ethical Implications of Artificial Intelligence in Population Health and the Public’s Role in its Governance: Perspectives from a Citizen and Expert Panel. J Med Internet Res 2023; 25:e44357. [PMID: 37104026 PMCID: PMC10176139 DOI: 10.2196/44357] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 02/14/2023] [Accepted: 03/10/2023] [Indexed: 03/12/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) systems are widely used in the health care sector. Mainly applied to individualized care, AI is increasingly aimed at population health. This raises important ethical considerations and calls for responsible governance, given that it will affect entire populations. However, the literature points to a lack of citizen participation in the governance of AI in health. It is therefore necessary to investigate the governance of the ethical and societal implications of AI in population health. OBJECTIVE This study aimed to explore the perspectives and attitudes of citizens and experts regarding the ethics of AI in population health, the engagement of citizens in AI governance, and the potential of a digital app to foster citizen engagement. METHODS We recruited a panel of 21 citizens and experts. Using a web-based survey, we explored their perspectives and attitudes on the ethical issues of AI in population health, the relative roles of citizens and other actors in AI governance, and the ways in which citizens can be supported to participate in AI governance through a digital app. The responses were analyzed quantitatively and qualitatively. RESULTS Participants perceived AI as already present in population health and regarded its benefits positively, but there was consensus that AI has substantial societal implications. Participants also showed a high level of agreement toward involving citizens in AI governance and highlighted the aspects to be considered in creating a digital app to foster this involvement, recognizing the importance of making the app both accessible and transparent. CONCLUSIONS These results offer avenues for the development of a digital app to raise awareness, to survey, and to support citizens' decision-making regarding the ethical, legal, and social issues of AI in population health.
Affiliation(s)
- Emma Dez
- School of Research, Sciences Po Paris, Paris, France
- Samuel Laperle
- Department of Linguistics, Université du Québec à Montréal, Montréal, QC, Canada
26
Chen Y, Moreira P, Liu WW, Monachino M, Nguyen TLH, Wang A. Is there a gap between artificial intelligence applications and priorities in health care and nursing management? J Nurs Manag 2022; 30:3736-3742. [PMID: 36216773 PMCID: PMC10092524 DOI: 10.1111/jonm.13851] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 09/02/2022] [Accepted: 10/02/2022] [Indexed: 12/30/2022]
Abstract
AIM The article contrasts three priorities for nursing management proposed a decade ago with key features of the subsequent 10 years of development in artificial intelligence for health care and nursing management. This analysis intends to contribute to updating the international debate on bridging the essence of health care and nursing management priorities with the focus of artificial intelligence developers. BACKGROUND Artificial intelligence research promises innovative approaches to supporting nurses' clinical decision-making and to conducting tasks not related to patient interaction, including administrative activities and patient records. Yet, even though international research and development of artificial intelligence applications for nursing care have increased during the past 10 years, it is unclear to what extent the priorities of nursing management have been embedded in the artificial intelligence solutions devised. EVALUATION Starting from three priorities for nursing management identified in 2011 in a special issue of the Journal of Nursing Management, we identified recent evidence concerning 10 years of artificial intelligence applications developed to support health care management and nursing activities since then. KEY ISSUE The article discusses to what extent priorities in health care and nursing management may have to be revised when adopting artificial intelligence applications or, alternatively, to what extent the direction of artificial intelligence development may need to be revised to contribute to long-acknowledged priorities of nursing management. CONCLUSION We identified a conceptual gap between the two sets of ideas and discuss the need to bridge it, while acknowledging that there may be recent field developments not yet reported in the scientific literature. IMPLICATIONS FOR NURSING MANAGEMENT Artificial intelligence developers and health care nursing managers need to be more engaged in coordinating the future development of artificial intelligence applications with a renewed set of nursing management priorities.
Affiliation(s)
- Yanjiao Chen
- Research Center on Social Work and Social Governance in Henan Province, Henan Normal University, Sociology Department, Xinxiang, China
- Paulo Moreira
- Shandong Provincial Qianfoshan Hospital, Jinan, Shandong, China
- Departamento de Ciencias da Gestao (Gestao em Saude), Atlantica Instituto Universitario, Oeiras, Portugal
- Wei-Wei Liu
- School of Social Work, Henan Normal University, Xinxiang, China
- Thi Le Ha Nguyen
- VNU University of Medicine and Pharmacy, Vietnam National University, Hanoi, Vietnam
- Aihua Wang
- Obstetrics Department, Kunming Maternal and Child Hospital, Kunming, China
27
Monteith S, Glenn T, Geddes J, Whybrow PC, Achtyes E, Bauer M. Expectations for Artificial Intelligence (AI) in Psychiatry. Curr Psychiatry Rep 2022; 24:709-721. [PMID: 36214931 PMCID: PMC9549456 DOI: 10.1007/s11920-022-01378-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/15/2022] [Indexed: 01/29/2023]
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is often presented as a transformative technology for clinical medicine even though the current technology maturity of AI is low. The purpose of this narrative review is to describe the complex reasons for the low technology maturity and set realistic expectations for the safe, routine use of AI in clinical medicine. RECENT FINDINGS For AI to be productive in clinical medicine, many diverse factors that contribute to the low maturity level need to be addressed. These include technical problems such as data quality, dataset shift, black-box opacity, validation and regulatory challenges, and human factors such as a lack of education in AI, workflow changes, automation bias, and deskilling. There will also be new and unanticipated safety risks with the introduction of AI. The solutions to these issues are complex and will take time to discover, develop, validate, and implement. However, addressing the many problems in a methodical manner will expedite the safe and beneficial use of AI to augment medical decision making in psychiatry.
Affiliation(s)
- Scott Monteith
- Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, MI, 49684, USA
- Tasha Glenn
- ChronoRecord Association, Fullerton, CA, USA
- John Geddes
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Peter C Whybrow
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Eric Achtyes
- Michigan State University College of Human Medicine, Grand Rapids, MI, USA
- Network180, Grand Rapids, MI, USA
- Michael Bauer
- Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus, Medical Faculty, Technische Universität Dresden, Dresden, Germany
28
Luo X, Wu Y, Niu L, Huang L. Bibliometric Analysis of Health Technology Research: 1990~2020. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:9044. [PMID: 35897415 PMCID: PMC9330553 DOI: 10.3390/ijerph19159044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 07/19/2022] [Accepted: 07/19/2022] [Indexed: 12/10/2022]
Abstract
This paper summarizes the publishing trends, current status, research topics, and frontier evolution trends of health technology between 1990 and 2020 through various bibliometric analysis methods. In total, 6663 articles retrieved from the Web of Science core database were analyzed with VOSviewer and CiteSpace. This paper found that: (1) the number of publications in the field of health technology increased exponentially; (2) there is no stable core group of authors in this research field, and the influence of publishing institutions and journals in China is insufficient compared with those in Europe and the United States; (3) there are 21 core research topics in the field of health technology research, which can be divided into four classes: hot spots, potential hot spots, margin topics, and mature topics, with C21 (COVID-19 prevention) and C10 (digital health technology) currently the two emerging research topics; and (4) the number of research frontiers has increased in the past five years (2016-2020), and the research directions have become more diverse; rehabilitation, pregnancy, e-health, m-health, machine learning, and patient engagement are the six latest research frontiers.
Affiliation(s)
- Lucheng Huang
- College of Economics and Management, Beijing University of Technology, Beijing 100124, China
29
Tachkov K, Zemplenyi A, Kamusheva M, Dimitrova M, Siirtola P, Pontén J, Nemeth B, Kalo Z, Petrova G. Barriers to Use Artificial Intelligence Methodologies in Health Technology Assessment in Central and East European Countries. Front Public Health 2022; 10:921226. [PMID: 35910914 PMCID: PMC9330148 DOI: 10.3389/fpubh.2022.921226] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 06/20/2022] [Indexed: 12/05/2022] Open
Abstract
The aim of this paper is to identify the barriers that are specifically relevant to the use of Artificial Intelligence (AI)-based evidence in Central and Eastern European (CEE) Health Technology Assessment (HTA) systems. The study relied on two main parallel sources to identify barriers to using AI methodologies in HTA in CEE: a scoping literature review and iterative focus group meetings with HTx team members. Most of the selected articles discussed AI from a clinical perspective (n = 25), with the rest taking a regulatory perspective (n = 13) or a knowledge-transfer point of view (n = 3). The clinical areas studied are quite diverse, ranging from pediatrics, diabetes, diagnostic radiology, gynecology, surgery, psychiatry, and cardiology to infectious diseases and oncology. Of the 38 articles, 25 (66%) describe the AI method, and the rest focus on barriers to the utilization of different health care services and programs. The potential barriers can be classified as data-related, methodological, technological, regulatory and policy-related, and human-factor-related. Some of the barriers are quite similar, especially those concerning the technologies. Studies focusing on the use of AI for HTA decision-making are scarce. AI and augmented decision-making tools are a young field, and we are still in the process of adapting them to existing needs. HTA as a process requires multiple steps and multiple evaluations that rely on heterogeneous data. The observed range of barriers therefore comes as no surprise, and experts in the field need to weigh in on the most important barriers in order to develop recommendations to overcome them and to disseminate the practical application of these tools.
Affiliation(s)
- Antal Zemplenyi
- Syreon Research Institute, Budapest, Hungary
- Center for Health Technology Assessment and Pharmacoeconomic Research, University of Pecs, Pecs, Hungary
- Maria Kamusheva
- Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- Maria Dimitrova
- Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- Pekka Siirtola
- Biomimetics and Intelligent Systems Group, University of Oulu, Oulu, Finland
- Johan Pontén
- Dental and Pharmaceutical Benefits Agency, Stockholm, Sweden
- Zoltan Kalo
- Syreon Research Institute, Budapest, Hungary
- Centre for Health Technology Assessment, Semmelweis University, Budapest, Hungary
- Guenka Petrova
- Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- *Correspondence: Guenka Petrova
30
Artificial intelligence ethics has a black box problem. AI & SOCIETY 2022. [DOI: 10.1007/s00146-021-01380-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]