1. Nor MI, Mohamed AA. Investigating the dynamics of tax evasion and revenue leakage in Somali customs. PLoS One 2024;19:e0303622. PMID: 38843130; PMCID: PMC11156314; DOI: 10.1371/journal.pone.0303622.
Abstract
This study aims to investigate the dynamics of tax evasion and revenue leakage in the Somali customs framework, providing insights into the systemic opportunity structures, tax governance deficiencies, and personal incentive structures that facilitate these practices. By applying agency theory and rent-seeking theory, this research seeks to deepen the understanding of the complex relationship between individual motivations and systemic vulnerabilities in exacerbating corruption and tax evasion in a post-conflict governance context. Employing structural equation modeling (SEM) within the ADANCO-SEM analysis framework, this study analyzes primary survey data. This approach allows a comprehensive examination of the relationships between the systemic, governance, and personal factors contributing to corruption and tax evasion. The findings reveal a significant positive relationship between systemic opportunity structures, tax governance deficiencies, and personal incentive structures on the one hand and the prevalence of tax evasion and corruption on the other. Specifically, systemic opportunity structures were found to significantly influence both tax governance deficiencies and personal incentive structures, highlighting the intertwined nature of these factors in facilitating corrupt practices and tax evasion in Somali customs. This study underscores the urgent need for comprehensive reforms targeting systemic vulnerabilities, enhancing tax governance frameworks, and aligning personal incentives with the public interest. Practical applications include the adoption of advanced technological solutions for improved monitoring and transparency, as well as the development of targeted training programs for customs officials to foster ethical standards and compliance. This research contributes to the existing body of knowledge by providing a unique empirical examination of corruption and tax evasion in the context of Somali customs, a largely underexplored area in the literature. By integrating agency theory and rent-seeking theory, this study offers novel insights into the mechanisms of corruption and tax evasion, highlighting the importance of addressing both systemic and individual factors in combating these issues.
2. Haber Y, Levkovich I, Hadar-Shoval D, Elyoseph Z. The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy. JMIR Ment Health 2024;11:e54781. PMID: 38787297; PMCID: PMC11137430; DOI: 10.2196/54781.
Abstract
This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? and (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.
Affiliation(s)
- Yuval Haber
- The PhD Program of Hermeneutics and Cultural Studies, Interdisciplinary Studies Unit, Bar-Ilan University, Ramat Gan, Israel
- Dorit Hadar-Shoval
- Department of Psychology and Educational Counseling, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Zohar Elyoseph
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- The Center for Psychobiological Research, Department of Psychology and Educational Counseling, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
3. Barwise AK, Curtis S, Diedrich DA, Pickering BW. Using artificial intelligence to promote equitable care for inpatients with language barriers and complex medical needs: clinical stakeholder perspectives. J Am Med Inform Assoc 2024;31:611-621. PMID: 38099504; PMCID: PMC10873784; DOI: 10.1093/jamia/ocad224.
Abstract
OBJECTIVES: Inpatients with language barriers and complex medical needs suffer disparities in quality of care, safety, and health outcomes. Although in-person interpreters are particularly beneficial for these patients, they are underused. We plan to use machine learning predictive analytics to reliably identify patients with language barriers and complex medical needs and prioritize them for in-person interpreters.
MATERIALS AND METHODS: This qualitative study used stakeholder engagement through semi-structured interviews to understand the perceived risks and benefits of artificial intelligence (AI) in this domain. Stakeholders included clinicians, interpreters, and personnel involved in caring for these patients or in organizing interpreters. Data were coded and analyzed using NVivo software.
RESULTS: We completed 49 interviews. Key perceived risks included concerns about transparency, accuracy, redundancy, privacy, perceived stigmatization among patients, alert fatigue, and supply-demand issues. Key perceived benefits included increased awareness of in-person interpreters, an improved standard of care, better prioritization of interpreter utilization, a streamlined process for accessing interpreters, empowered clinicians, and the potential to overcome clinician bias.
DISCUSSION: This is the first study to elicit stakeholder perspectives on the use of AI with the goal of improving clinical care for patients with language barriers. The perceived benefits and risks overlapped with known hazards and values of AI, but some benefits were unique to the challenge of providing interpreter services to patients with language barriers.
CONCLUSION: Artificial intelligence to identify and prioritize patients for interpreter services has the potential to improve the standard of care and address healthcare disparities among patients with language barriers.
Affiliation(s)
- Amelia K Barwise
- Biomedical Ethics Research Program, Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55902, United States
- Susan Curtis
- Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN 55902, United States
- Daniel A Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55902, United States
- Brian W Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55902, United States
4. Semujanga B, Parent-Rocheleau X. Time-Based Stress and Procedural Justice: Can Transparency Mitigate the Effects of Algorithmic Compensation in Gig Work? Int J Environ Res Public Health 2024;21:86. PMID: 38248549; PMCID: PMC10815495; DOI: 10.3390/ijerph21010086.
Abstract
The gig economy has led to a new management style that uses algorithms to automate managerial decisions. Algorithmic management has aroused the interest of researchers, particularly regarding the prevalence of precarious working conditions and the health issues related to gig work. Despite the influence of algorithmically driven remuneration mechanisms on working conditions, few studies have focused on the compensation dimension of algorithmic management. We investigate the effects of algorithmic compensation on gig workers in relation to perceptions of procedural justice and time-based stress, two important predictors of work-related health problems. This study also examines the moderating effect of algorithmic transparency on these relationships. Survey data were collected from 962 gig workers via a research panel. The results of hierarchical multiple regression analysis show that the degree of exposure to algorithmic compensation is positively related to time-based stress. However, contrary to our expectations, algorithmic compensation is also positively associated with procedural justice perceptions, and our results indicate that this relation is enhanced at higher levels of perceived algorithmic transparency. Furthermore, transparency does not play a role in the relationship between algorithmic compensation and time-based stress. These findings suggest that perceived algorithmic transparency makes algorithmic compensation seem even fairer but does not appear to make it less stressful.
Affiliation(s)
- Benjamin Semujanga
- Department of Human Resources Management, HEC Montréal, 3000 Côte Ste-Catherine, Montréal, QC H3T 2A7, Canada
5. Baumgartner R, Arora P, Bath C, Burljaev D, Ciereszko K, Custers B, Ding J, Ernst W, Fosch-Villaronga E, Galanos V, Gremsl T, Hendl T, Kropp C, Lenk C, Martin P, Mbelu S, Morais Dos Santos Bruss S, Napiwodzka K, Nowak E, Roxanne T, Samerski S, Schneeberger D, Tampe-Mai K, Vlantoni K, Wiggert K, Williams R. Fair and equitable AI in biomedical research and healthcare: Social science perspectives. Artif Intell Med 2023;144:102658. PMID: 37783540; DOI: 10.1016/j.artmed.2023.102658.
Abstract
Artificial intelligence (AI) offers opportunities but also challenges for biomedical research and healthcare. This position paper shares the results of the international conference "Fair medicine and AI" (online, 3-5 March 2021), at which scholars from science and technology studies (STS), gender studies, and ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare. AI systems and solutions, which are being rapidly developed and applied, may have undesirable and unintended consequences, including the risk of perpetuating health inequalities for marginalized groups. Socially robust development and implications of AI in healthcare require urgent investigation. There is a particular dearth of studies of human-AI interaction and how it may best be configured to dependably deliver safe, effective, and equitable healthcare. To address these challenges, we need to establish diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable, and transparent manner. We underline the importance of including social science perspectives in the development of intersectionally beneficent and equitable AI for biomedical research and healthcare, in part by strengthening AI health evaluation.
Affiliation(s)
- Renate Baumgartner
- Center of Gender- and Diversity Research, University of Tübingen, Wilhelmstrasse 56, 72074 Tübingen, Germany; Athena Institute, Vrije Universiteit Amsterdam, De Boelelaan 1085, 1081 HV Amsterdam, The Netherlands
- Payal Arora
- Erasmus School of Philosophy, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA Rotterdam, The Netherlands
- Corinna Bath
- Gender, Technology and Mobility, Institute for Flight Guidance, TU Braunschweig, Hermann-Blenk-Str. 27, 38108 Braunschweig, Germany
- Darja Burljaev
- Center of Gender- and Diversity Research, University of Tübingen, Wilhelmstrasse 56, 72074 Tübingen, Germany
- Kinga Ciereszko
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
- Bart Custers
- eLaw - Center for Law and Digital Technologies, Leiden University, Steenschuur 25, 2311 ES Leiden, The Netherlands
- Jin Ding
- iHuman and Department of Sociological Studies, University of Sheffield, ICOSS, 219 Portobello, Sheffield S1 4DP, United Kingdom
- Waltraud Ernst
- Institute for Women's and Gender Studies, Johannes Kepler University Linz, Altenberger Strasse 69, 4040 Linz, Austria
- Eduard Fosch-Villaronga
- eLaw - Center for Law and Digital Technologies, Leiden University, Steenschuur 25, 2311 ES Leiden, The Netherlands
- Vassilis Galanos
- Science, Technology and Innovation Studies, School of Social and Political Science, University of Edinburgh, Old Surgeons' Hall, High School Yards, Edinburgh EH1 1LZ, United Kingdom
- Thomas Gremsl
- Institute of Ethics and Social Teaching, Faculty of Catholic Theology, University of Graz, Heinrichstraße 78b/2, 8010 Graz, Austria
- Tereza Hendl
- Professorship for Ethics of Medicine, University of Augsburg, Stenglinstraße 2, 86156 Augsburg, Germany; Institute of Ethics, History and Theory of Medicine, Ludwig-Maximilians-University in Munich, Lessingstr. 2, 80336 Munich, Germany
- Cordula Kropp
- Center for Interdisciplinary Risk and Innovation Studies (ZIRIUS), University of Stuttgart, Seidenstraße 36, 70174 Stuttgart, Germany
- Christian Lenk
- Institute of the History, Philosophy and Ethics of Medicine, Ulm University, Parkstraße 11, 89073 Ulm, Germany
- Paul Martin
- iHuman and Department of Sociological Studies, University of Sheffield, ICOSS, 219 Portobello, Sheffield S1 4DP, United Kingdom
- Somto Mbelu
- Erasmus School of Philosophy, Erasmus University Rotterdam, 10A Ademola Close off Remi Fani Kayode Street, GRA Ikeja, Lagos, Nigeria
- Karolina Napiwodzka
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
- Ewa Nowak
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
- Tiara Roxanne
- Data & Society Institute, 228 Park Ave S PMB 83075, New York, NY 10003-1502, United States of America
- Silja Samerski
- Fachbereich Soziale Arbeit und Gesundheit, Hochschule Emden/Leer, Constantiaplatz 4, 26723 Emden, Germany
- David Schneeberger
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Auenbruggerplatz 2, 8036 Graz, Austria
- Karolin Tampe-Mai
- Center for Interdisciplinary Risk and Innovation Studies (ZIRIUS), University of Stuttgart, Seidenstraße 36, 70174 Stuttgart, Germany
- Katerina Vlantoni
- Department of History and Philosophy of Science, School of Science, National and Kapodistrian University of Athens, Panepistimioupoli, Ilisia, Athens 15771, Greece
- Kevin Wiggert
- Institute of Sociology, Department Sociology of Technology and Innovation, Technical University of Berlin, Fraunhoferstraße 33-36, 10623 Berlin, Germany
- Robin Williams
- Science, Technology and Innovation Studies, School of Social and Political Science, University of Edinburgh, Old Surgeons' Hall, High School Yards, Edinburgh EH1 1LZ, United Kingdom
6. Sloane M, Solano-Kamaiko IR, Yuan J, Dasgupta A, Stoyanovich J. Introducing contextual transparency for automated decision systems. Nat Mach Intell 2023. DOI: 10.1038/s42256-023-00623-7.
7. Vodanović M, Subašić M, Milošević D, Savić Pavičin I. Artificial Intelligence in Medicine and Dentistry. Acta Stomatol Croat 2023;57:70-84. PMID: 37288152; PMCID: PMC10243707; DOI: 10.15644/asc57/1/8.
Abstract
INTRODUCTION: Artificial intelligence has been applied in various fields throughout history, but its integration into daily life is more recent. The first applications of AI were primarily in academia and government research institutions; as the technology has advanced, AI has also been applied in industry, commerce, medicine, and dentistry.
OBJECTIVE: Given that the possibilities of applying artificial intelligence are developing rapidly and that this field shows one of the greatest increases in the number of newly published articles, the aim of this paper was to provide an overview of the literature and insight into the possibilities of applying artificial intelligence in medicine and dentistry, and to discuss its advantages and disadvantages.
CONCLUSION: The possibilities of applying artificial intelligence to medicine and dentistry are just being discovered. Artificial intelligence will contribute greatly to developments in medicine and dentistry, as it is a tool that enables development and progress, especially in personalized healthcare, which will lead to much better treatment outcomes.
Affiliation(s)
- Marin Vodanović
- Department of Dental Anthropology, School of Dental Medicine, University of Zagreb, Croatia
- University Hospital Centre Zagreb, Croatia
- Marko Subašić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
- Denis Milošević
- Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
- Ivana Savić Pavičin
- Department of Dental Anthropology, School of Dental Medicine, University of Zagreb, Croatia
- University Hospital Centre Zagreb, Croatia
8. Simeoni C, Furlan E, Pham HV, Critto A, de Juan S, Trégarot E, Cornet CC, Meesters E, Fonseca C, Botelho AZ, Krause T, N'Guetta A, Cordova FE, Failler P, Marcomini A. Evaluating the combined effect of climate and anthropogenic stressors on marine coastal ecosystems: Insights from a systematic review of cumulative impact assessment approaches. Sci Total Environ 2023;861:160687. PMID: 36473660; DOI: 10.1016/j.scitotenv.2022.160687.
Abstract
Cumulative impacts increasingly threaten marine and coastal ecosystems. To address this issue, the research community has invested effort in designing and testing methodological approaches and tools that apply cumulative impact appraisal schemes for a sound evaluation of the complex interactions and dynamics among the multiple pressures affecting marine and coastal ecosystems. Through an iterative scientometric and systematic literature review, this paper provides the state of the art of cumulative impact assessment approaches and applications. It gives specific attention to cutting-edge approaches that explore and model inter-relations among climatic and anthropogenic pressures, the vulnerability and resilience of marine and coastal ecosystems to these pressures, and the resulting changes in ecosystem services flow. Despite recent advances in computer science and the rising availability of big data for environmental monitoring and management, this literature review shows that the implementation of advanced complex-system methods for cumulative risk assessment remains limited. Moreover, experts have only recently started integrating ecosystem services flow into cumulative impact appraisal frameworks, and mostly as a general assessment endpoint within the overall evaluation process (e.g., changes in the bundle of ecosystem services under cumulative impacts). The review also highlights a lack of integrated approaches and complex tools able to frame, explain, and model the spatio-temporal dynamics of marine and coastal ecosystems' responses to multiple pressures, as required under relevant EU legislation (e.g., the Water Framework and Marine Strategy Framework Directives). Progress in understanding cumulative impacts, exploiting the functionalities of more sophisticated machine learning-based approaches (e.g., big data integration), will support decision-makers in achieving environmental and sustainability objectives.
Affiliation(s)
- Christian Simeoni
- Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, I-30170 Venice, Italy; Centro Euro-Mediterraneo sui Cambiamenti Climatici and Università Ca' Foscari Venezia, CMCC@Ca'Foscari - Edificio Porta dell'Innovazione, 2nd floor - Via della Libertà, 12 - 30175 Venice, Italy
- Elisa Furlan
- Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, I-30170 Venice, Italy; Centro Euro-Mediterraneo sui Cambiamenti Climatici and Università Ca' Foscari Venezia, CMCC@Ca'Foscari - Edificio Porta dell'Innovazione, 2nd floor - Via della Libertà, 12 - 30175 Venice, Italy
- Hung Vuong Pham
- Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, I-30170 Venice, Italy; Centro Euro-Mediterraneo sui Cambiamenti Climatici and Università Ca' Foscari Venezia, CMCC@Ca'Foscari - Edificio Porta dell'Innovazione, 2nd floor - Via della Libertà, 12 - 30175 Venice, Italy
- Andrea Critto
- Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, I-30170 Venice, Italy; Centro Euro-Mediterraneo sui Cambiamenti Climatici and Università Ca' Foscari Venezia, CMCC@Ca'Foscari - Edificio Porta dell'Innovazione, 2nd floor - Via della Libertà, 12 - 30175 Venice, Italy
- Silvia de Juan
- Instituto Mediterraneo de Estudios Avanzados, IMEDEA (CSIC-UIB), Miquel Marques 21, Esporles, Islas Baleares, Spain
- Ewan Trégarot
- Centre for Blue Governance, Portsmouth Business School, University of Portsmouth, Richmond Building, Portland Street, Portsmouth PO1 3DE, UK
- Cindy C Cornet
- Centre for Blue Governance, Portsmouth Business School, University of Portsmouth, Richmond Building, Portland Street, Portsmouth PO1 3DE, UK
- Erik Meesters
- Wageningen Marine Research, Wageningen University and Research, 1781 AG Den Helder, the Netherlands; Aquatic Ecology and Water Quality Management, Wageningen University and Research, 6700 AA Wageningen, the Netherlands
- Catarina Fonseca
- cE3c - Centre for Ecology, Evolution and Environmental Changes, Azorean Biodiversity Group, CHANGE - Global Change and Sustainability Institute, Faculty of Sciences and Technology, University of the Azores, Rua da Mãe de Deus, 9500-321 Ponta Delgada, Portugal; CICS.NOVA - Interdisciplinary Centre of Social Sciences, Faculty of Social Sciences and Humanities (FCSH/NOVA), Avenida de Berna 26-C, 1069-061 Lisboa, Portugal
- Andrea Zita Botelho
- Faculty of Sciences and Technology, University of the Azores, Ponta Delgada, Portugal; CIBIO - Research Centre in Biodiversity and Genetic Resources, InBio Associate Laboratory, Ponta Delgada, Portugal
- Torsten Krause
- Lund University Centre for Sustainability Studies, P.O. Box 170, 221-00 Lund, Sweden
- Alicia N'Guetta
- Lund University Centre for Sustainability Studies, P.O. Box 170, 221-00 Lund, Sweden
- Pierre Failler
- Centre for Blue Governance, Portsmouth Business School, University of Portsmouth, Richmond Building, Portland Street, Portsmouth PO1 3DE, UK
- Antonio Marcomini
- Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, I-30170 Venice, Italy; Centro Euro-Mediterraneo sui Cambiamenti Climatici and Università Ca' Foscari Venezia, CMCC@Ca'Foscari - Edificio Porta dell'Innovazione, 2nd floor - Via della Libertà, 12 - 30175 Venice, Italy
9. Tornero-Costa R, Martinez-Millana A, Azzopardi-Muscat N, Lazeri L, Traver V, Novillo-Ortiz D. Methodological and Quality Flaws in the Use of Artificial Intelligence in Mental Health Research: Systematic Review. JMIR Ment Health 2023;10:e42045. PMID: 36729567; PMCID: PMC9936371; DOI: 10.2196/42045.
Abstract
BACKGROUND: Artificial intelligence (AI) is giving rise to a revolution in medicine and health care. Mental health conditions are highly prevalent in many countries, and the COVID-19 pandemic has increased the risk of further erosion of mental well-being in the population. It is therefore relevant to assess the current status of AI applications in mental health research to identify trends, gaps, opportunities, and challenges.
OBJECTIVE: This study aims to perform a systematic overview of AI applications in mental health in terms of methodologies, data, outcomes, performance, and quality.
METHODS: A systematic search of the PubMed, Scopus, IEEE Xplore, and Cochrane databases was conducted to collect records of use cases of AI for mental health disorder studies from January 2016 to November 2021. Records were screened for eligibility if they were practical implementations of AI in clinical trials involving mental health conditions. AI study cases were evaluated and categorized by the International Classification of Diseases 11th Revision (ICD-11). Data related to trial settings, collection methodology, features, outcomes, and model development and evaluation were extracted following the CHARMS (Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies) guideline, and risk of bias was evaluated.
RESULTS: A total of 429 nonduplicated records were retrieved from the databases, and 129 were included for full assessment, 18 of which were added manually. The distribution of AI applications in mental health was unbalanced across ICD-11 mental health categories; the predominant categories were depressive disorders (n=70) and schizophrenia or other primary psychotic disorders (n=26). Most interventions were based on randomized controlled trials (n=62), followed by prospective cohorts (n=24) among observational studies. AI was typically applied to evaluate the quality of treatments (n=44) or to stratify patients into subgroups and clusters (n=31). Models usually applied a combination of questionnaires and scales to assess symptom severity, using electronic health records (n=49) as well as medical images (n=33). Quality assessment revealed important flaws in AI application and data preprocessing pipelines: one-third of the studies (n=56) did not report any preprocessing or data preparation, one-fifth of the models were developed by comparing several methods (n=35) without assessing their suitability in advance, and only a small proportion reported external validation (n=21). Only one paper reported a second assessment of a previous AI model. Risk-of-bias and transparent-reporting scores were low owing to poor reporting of the strategy for adjusting hyperparameters and coefficients and of the explainability of the models. International collaboration was anecdotal (n=17), and data and developed models mostly remained private (n=126).
CONCLUSIONS: These significant shortcomings, alongside the lack of information needed to ensure reproducibility and transparency, are indicative of the challenges that AI in mental health must face before it can contribute to a solid base for knowledge generation and serve as a support tool in mental health management.
Affiliation(s)
- Roberto Tornero-Costa
- Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas, Universitat Politècnica de València, Valencia, Spain
- Antonio Martinez-Millana
- Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas, Universitat Politècnica de València, Valencia, Spain
- Natasha Azzopardi-Muscat
- Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
- Ledia Lazeri
- Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
- Vicente Traver
- Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas, Universitat Politècnica de València, Valencia, Spain
- David Novillo-Ortiz
- Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
10. Revisiting the bullwhip effect: how can AI smoothen the bullwhip phenomenon? Int J Logist Manag 2023. DOI: 10.1108/ijlm-02-2022-0078.
Abstract
Purpose: Although scholars argue that artificial intelligence (AI) represents a tool to potentially smoothen the bullwhip effect in the supply chain, little research has examined this phenomenon. In this article, the authors conceptualize a framework that allows for a more structured management approach to examining the bullwhip effect using AI. In addition, the authors conduct a systematic literature review of the current status of how management can use AI to reduce the bullwhip effect, and locate opportunities for future research.
Design/methodology/approach: Guided by the systematic literature review approach of Durach et al. (2017), the authors review and analyze key attributes and characteristics of both AI and the bullwhip effect from a management perspective.
Findings: The findings reveal that literature examining how management can use AI to smoothen the bullwhip effect is a rather under-researched area that offers an abundance of research avenues. Based on identified AI capabilities, the authors propose three key management pillars that form the basis of their Bullwhip-Smoothing-Framework (BSF): (1) digital skills, (2) leadership, and (3) collaboration. The authors also critically assess current research efforts and offer suggestions for future research.
Originality/value: By providing a structured management approach to examining the link between AI and the bullwhip phenomenon, this study offers scholars and managers a foundation for advancing theory on how to smoothen the bullwhip effect along the supply chain.
11
Fosch-Villaronga E, van der Hof S, Lutz C, Tamò-Larrieux A. Toy story or children story? Putting children and their rights at the forefront of the artificial intelligence revolution. AI & Society 2023; 38:133-152. PMID: 34642550. PMCID: PMC8494166. DOI: 10.1007/s00146-021-01295-w
Abstract
Policymakers need to start considering the impact smart connected toys (SCTs) have on children. Equipped with sensors, data-processing capacities, and connectivity, SCTs targeting children increasingly and pervasively penetrate personal environments. The network of SCTs forms the Internet of Toys (IoToys) and often increases children's engagement and playtime experience. Unfortunately, children and, in most cases, their parents are unaware of SCTs' far-reaching capacities and limitations. These capabilities and constraints create severe side effects at the technical, individual, and societal levels. Such side effects are often unforeseeable and unexpected, arising from the technology's use and the interconnected nature of the IoToys without necessarily involving malevolence from their creators. Although existing regulations and new ethical guidelines for artificial intelligence provide remedies for some of these side effects, policymakers did not develop these redress mechanisms with children and SCTs in mind. This article analyzes the side effects of SCTs and contrasts them with current regulatory redress mechanisms, thereby highlighting misfits and the need for further policymaking efforts.
Affiliation(s)
- E. Fosch-Villaronga
- eLaw Center for Law and Digital Technologies, Leiden University, Leiden, The Netherlands
- S. van der Hof
- eLaw Center for Law and Digital Technologies, Leiden University, Leiden, The Netherlands
- C. Lutz
- Nordic Centre for Internet and Society, Department of Communication and Culture, BI Norwegian Business School, Oslo, Norway
- A. Tamò-Larrieux
- FAA-Institute for Work and Employment Research, University of St. Gallen, St. Gallen, Switzerland
12
Farina M, Zhdanov P, Karimov A, Lavazza A. AI and society: a virtue ethics approach. AI & Society 2022. DOI: 10.1007/s00146-022-01545-5
13
Kiseleva A, Kotzinos D, De Hert P. Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations. Front Artif Intell 2022; 5:879603. PMID: 35707765. PMCID: PMC9189302. DOI: 10.3389/frai.2022.879603
Abstract
The lack of transparency is one of artificial intelligence (AI)'s fundamental challenges, but the concept of transparency might be even more opaque than AI itself. Researchers in different fields who attempt to provide solutions to improve AI's transparency articulate different but neighboring concepts that include, besides transparency, explainability and interpretability. Yet there is no common taxonomy either within one field (such as data science) or between different fields (law and data science). In certain areas like healthcare, transparency requirements are crucial since the decisions directly affect people's lives. In this paper, we suggest an interdisciplinary vision of how to tackle the issue of AI's transparency in healthcare, and we propose a single point of reference for both legal scholars and data scientists on transparency and related concepts. Based on the analysis of European Union (EU) legislation and the computer science literature, we submit that transparency shall be considered the “way of thinking” and umbrella concept characterizing the process of AI's development and use. Transparency shall be achieved through a set of measures such as interpretability and explainability, communication, auditability, traceability, information provision, record-keeping, data governance and management, and documentation. This approach to transparency is of a general nature, but transparency measures shall always be contextualized. By analyzing transparency in the healthcare context, we submit that it shall be viewed as a system of accountabilities of the involved subjects (AI developers, healthcare professionals, and patients) distributed at different layers (the insider, internal, and external layers, respectively). The transparency-related accountabilities shall be built into the existing accountability picture, which justifies the need to investigate the relevant legal frameworks. These frameworks correspond to different layers of the transparency system: the requirement of informed medical consent correlates to the external layer, and the Medical Devices Framework is relevant to the insider and internal layers. We investigate these frameworks to inform AI developers about what is already expected from them with regard to transparency. We also identify gaps in the existing legislative frameworks concerning AI's transparency in healthcare and suggest solutions to fill them.
Affiliation(s)
- Anastasiya Kiseleva
- LSTS Research Group (Law, Science, Technology and Society), Faculty of Law, Vrije Universiteit Brussels, Brussels, Belgium
- ETIS Research Lab, Faculty of Computer Science, CY Cergy Paris University, Cergy-Pontoise, France
- Correspondence: Anastasiya Kiseleva
- Dimitris Kotzinos
- ETIS Research Lab, Faculty of Computer Science, CY Cergy Paris University, Cergy-Pontoise, France
- Paul De Hert
- LSTS Research Group (Law, Science, Technology and Society), Faculty of Law, Vrije Universiteit Brussels, Brussels, Belgium
14
Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort. Behav Sci (Basel) 2022; 12:bs12050127. PMID: 35621424. PMCID: PMC9138134. DOI: 10.3390/bs12050127
Abstract
The purpose of this paper is to investigate how Artificial Intelligence (AI) decision-making transparency affects humans’ trust in AI. Previous studies have shown inconsistent conclusions about the relationship between AI transparency and humans’ trust in AI (i.e., a positive correlation, non-correlation, or an inverted U-shaped relationship). Based on the stimulus-organism-response (SOR) model, algorithmic reductionism, and social identity theory, this paper explores the impact of AI decision-making transparency on humans’ trust in AI from cognitive and emotional perspectives. A total of 235 participants with previous work experience were recruited online to complete the experimental vignette. The results showed that employees’ perceived transparency, employees’ perceived effectiveness of AI, and employees’ discomfort with AI played mediating roles in the relationship between AI decision-making transparency and employees’ trust in AI. Specifically, AI decision-making transparency (vs. non-transparency) led to higher perceived transparency, which in turn increased both effectiveness (which promoted trust) and discomfort (which inhibited trust). This parallel multiple mediating effect can partly explain the inconsistent findings in previous studies on the relationship between AI transparency and humans’ trust in AI. This research has practical significance because it puts forward suggestions for enterprises to improve employees’ trust in AI, so that employees can better collaborate with AI.
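The parallel multiple mediation logic described above (transparency condition → perceived effectiveness and discomfort → trust) can be sketched with synthetic data. All coefficients, sample sizes, and the plain product-of-coefficients OLS estimator below are illustrative assumptions, not the study's actual estimates or analysis pipeline:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination. X is a list of rows, each row
    starting with the intercept term 1.0."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination, pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(0)
n = 2000
X = [float(random.random() < 0.5) for _ in range(n)]          # 0/1 condition
# assumed effect sizes, chosen only to reproduce the sign pattern reported
m1 = [0.8 * xi + random.gauss(0, 1) for xi in X]              # effectiveness
m2 = [0.5 * xi + random.gauss(0, 1) for xi in X]              # discomfort
y = [0.6 * u - 0.4 * v + random.gauss(0, 1) for u, v in zip(m1, m2)]  # trust

a1 = ols([[1.0, xi] for xi in X], m1)[1]                      # X -> M1 path
a2 = ols([[1.0, xi] for xi in X], m2)[1]                      # X -> M2 path
_, _, b1, b2 = ols([[1.0, xi, u, v] for xi, u, v in zip(X, m1, m2)], y)
indirect_via_effectiveness = a1 * b1    # positive: promotes trust
indirect_via_discomfort = a2 * b2       # negative: inhibits trust
```

The two indirect effects having opposite signs is exactly the mechanism the abstract uses to reconcile the inconsistent transparency–trust findings in prior work.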
15
Wu T, Simonetto DA, Halamka JD, Shah VH. The digital transformation of hepatology: The patient is logged in. Hepatology 2022; 75:724-739. PMID: 35028960. PMCID: PMC9531185. DOI: 10.1002/hep.32329
Abstract
The rise of innovative digital health technologies has led to a paradigm shift in health care toward personalized, patient-centric medicine that reaches beyond traditional brick-and-mortar facilities into patients' homes and everyday lives. Digital solutions can monitor and detect early changes in physiological data, predict disease progression and health-related outcomes based on individual risk factors, and manage disease intervention with a range of accessible telemedicine and mobile health options. In this review, we discuss the unique transformation underway in the care of patients with liver disease, specifically examining the digital transformation of diagnostics, prediction and clinical decision-making, and management. Additionally, we discuss the general considerations needed to confirm the validity and oversight of new technologies, the usability and acceptability of digital solutions, and the equity and inclusivity of vulnerable populations.
Affiliation(s)
- Tiffany Wu
- Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, USA
- Douglas A. Simonetto
- Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, USA
- John D. Halamka
- Mayo Clinic Platform, Mayo Clinic, Rochester, Minnesota, USA
- Vijay H. Shah
- Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, USA
16
König PD. Challenges in enabling user control over algorithm-based services. AI & Society 2022. DOI: 10.1007/s00146-022-01395-1
Abstract
Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how much they truly serve their users. A promising way to address the issue of possible undesired biases consists in giving users control by letting them configure a system and aligning its performance with users’ own preferences. However, as the present paper argues, this form of control over an algorithmic system demands an algorithmic literacy that also entails a certain way of making oneself knowable: users must interrogate their own dispositions and see how these can be formalized such that they can be translated into the algorithmic system. This may, however, extend already existing practices through which people are monitored and probed and means that exerting such control requires users to direct a computational mode of thinking at themselves.
17
SDGs: A Responsible Research Assessment Tool toward Impactful Business Research. Sustainability 2021. DOI: 10.3390/su132414019
Abstract
An alternative research assessment (RA) tool was constructed to assess the relatedness of published business school research to the United Nations’ 17 Sustainable Development Goals (SDGs). The RA tool was created using Leximancer™, an online, cloud-based text-analytics software tool, which identified core themes within the SDG framework. Eight core themes were found to define the ‘spirit of the SDGs’: Sustainable Development, Governance, Vulnerable Populations, Water, Gross Domestic Product (GDP), Food Security, Restoration, and Public Health. These themes were compared to the core themes found in the content of 4576 academic articles published in 2019 in journals on the Financial Times (FT) 50 list, and the articles’ relatedness to the SDG themes was assessed. Overall, 10.6% of the themes found in the FT50 journal articles had an explicit relationship to the SDG themes, while 24.5% had an implied one. Themes generated from machine learning (ML), augmented by researcher judgement (to account for synonyms, similar concepts, and discipline-specific examples), improved the robustness of the relationships found between the SDG framework and the published articles. Although there are compelling reasons for business schools to focus research on advancing the SDGs, this study and others highlight that there is much opportunity for improvement. Recommendations are made to better align academic research with the SDGs, influencing how business school faculty and their schools prioritize research and its role in the world.
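The explicit-versus-implied distinction drawn above can be sketched as a simple term-matching rule. This is only a toy illustration: the theme vocabularies and synonym table below are hypothetical stand-ins for Leximancer's concept extraction and the researchers' judgement, and three of the eight themes are shown.

```python
# Hypothetical theme vocabularies; the study's actual themes were derived
# with Leximancer's concept mapping, not hand-written keyword sets.
SDG_THEMES = {
    "Public Health": {"health", "disease", "mortality"},
    "Food Security": {"food", "hunger", "nutrition", "agriculture"},
    "Governance": {"governance", "institutions", "corruption"},
}
# Researcher-judgement layer: map synonyms onto canonical theme terms.
SYNONYMS = {"wellbeing": "health", "famine": "hunger"}

def classify_relatedness(article_terms):
    """Label each SDG theme as 'explicit' (direct term overlap),
    'implied' (overlap only after synonym mapping), or 'none'."""
    terms = set(article_terms)
    mapped = {SYNONYMS.get(t, t) for t in terms}
    labels = {}
    for theme, vocab in SDG_THEMES.items():
        if terms & vocab:
            labels[theme] = "explicit"
        elif mapped & vocab:
            labels[theme] = "implied"
        else:
            labels[theme] = "none"
    return labels

labels = classify_relatedness(["governance", "wellbeing", "markets"])
# → Governance: explicit, Public Health: implied, Food Security: none
```

Aggregating such labels over all article–theme pairs yields the kind of explicit/implied percentages the study reports.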
18
Tigard DW. Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Science and Engineering Ethics 2021; 27:59. PMID: 34427804. PMCID: PMC8383242. DOI: 10.1007/s11948-021-00334-5
Abstract
Artificial intelligence (AI) and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the 'severance problem'-the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave it off? In particular, the fact that some technologies exhibit behavior that is unclear to us seems to constitute a kind of severance. Building upon contemporary work on moral responsibility, I argue for a mechanism I refer to as 'technological answerability', namely the capacity to recognize human demands for answers and to respond accordingly. By designing select devices-such as robotic assistants and personal AI programs-for increased answerability, we see at least one way of satisfying our demands for answers and thereby retaining our connection to a world increasingly occupied by technology.
Affiliation(s)
- Daniel W Tigard
- Institute for History and Ethics of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675, Munich, Germany.
19
A Fully Automatic, Interpretable and Adaptive Machine Learning Approach to Map Burned Area from Remote Sensing. ISPRS International Journal of Geo-Information 2021. DOI: 10.3390/ijgi10080546
Abstract
The paper proposes a fully automatic algorithmic approach to mapping burned areas from remote sensing, characterized by human-interpretable mapping criteria and explainable results. The approach is partially knowledge-driven and partially data-driven. It exploits active fire points to train the fusion function of factors deemed influential in determining the evidence of burned conditions from reflectance values of multispectral Sentinel-2 (S2) data. The fusion function is used to compute a map of seeds (burned pixels) that are adaptively expanded by applying a Region Growing (RG) algorithm to generate the final burned area map. The fusion function is an Ordered Weighted Averaging (OWA) operator, learnt through the application of a machine learning (ML) algorithm from a set of highly reliable fire points. Its semantics are characterized by two measures: the degree of pessimism/optimism and the degree of democracy/monarchy. The former predicts whether the fusion results will be affected by more false positives (commission errors) than false negatives (omission errors), in the case of pessimism, or vice versa; the latter foresees whether the result is determined by only a few highly influential factors or by many factors of low influence. The predicted degree of pessimism/optimism allows the expansion of the seeds to be appropriately tuned by selecting the most suited growing layer for the RG algorithm, thus adapting the algorithm to the context. The paper illustrates the application of the automatic method in four study areas in southern Europe to map burned areas for the 2017 fire season. Thematic accuracy at each site was assessed by comparison to reference perimeters to prove the adaptability of the approach to the context; estimated average accuracy metrics are omission error = 0.057, commission error = 0.068, Dice coefficient = 0.94, and relative bias = 0.0046.
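The OWA operator and the two semantic measures named above can be sketched as follows. The weights and factor values here are illustrative only; in the paper the weights are learnt from reliable fire points, and the two measures are the standard orness (optimism) and normalized dispersion (democracy) of an OWA weight vector.

```python
import math

def owa(values, weights):
    """Ordered Weighted Averaging: the weights apply to the values
    sorted in descending order, not to fixed input positions."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def orness(weights):
    """Degree of optimism in [0, 1]: 1 recovers the max operator,
    0 the min operator, 0.5 the plain average."""
    n = len(weights)
    return sum((n - 1 - i) * w for i, w in enumerate(weights)) / (n - 1)

def dispersion(weights):
    """Normalized weight entropy in [0, 1]: 1 means all factors count
    equally ('democracy'), 0 means a single factor decides ('monarchy')."""
    n = len(weights)
    h = -sum(w * math.log(w) for w in weights if w > 0)
    return h / math.log(n)

# three illustrative per-pixel evidence factors, scaled to [0, 1]
factors = [0.9, 0.4, 0.7]
optimistic = [0.6, 0.3, 0.1]   # emphasizes the strongest evidence
score = owa(factors, optimistic)          # burned-evidence score for a seed
opt = orness(optimistic)                  # > 0.5, i.e. optimistic fusion
dem = dispersion(optimistic)              # how evenly influence is spread
```

An optimistic weight vector (orness above 0.5) lets strong evidence dominate, which tends toward commission errors; a pessimistic one tends toward omission errors, matching the tuning rule described in the abstract.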
20
Mökander J, Morley J, Taddeo M, Floridi L. Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations. Science and Engineering Ethics 2021; 27:44. PMID: 34231029. PMCID: PMC8260507. DOI: 10.1007/s11948-021-00319-4
Abstract
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity's present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
21
Iannessi A, Beaumont H, Bertrand AS. Letter to the editor: "Not all biases are bad: equitable and inequitable biases in machine learning and radiology". Insights Imaging 2021; 12:78. PMID: 34132919. PMCID: PMC8208365. DOI: 10.1186/s13244-021-01022-5
Abstract
Artificial intelligence algorithms are booming in medicine, and the question of biases induced or perpetuated by these tools is a very important topic. There is a greater risk of these biases in radiology, which is now the primary diagnostic tool in modern treatment. Some authors have recently proposed an analysis framework for social inequalities and the biases at risk of being introduced into future algorithms. In our paper, we comment on the different strategies for resolving these biases. We warn that there is an even greater risk in mixing the notion of equity, the definition of which is socio-political, into the design stages of these algorithms. We believe that rather than being beneficial, this could in fact harm the main purpose of these artificial intelligence tools, which is the care of the patient.
Affiliation(s)
- Antoine Iannessi
- Centre Antoine Lacassagne, 33 Avenue de Valombrose, 06100, Nice, France
- Hubert Beaumont
- Median Technologies, 1800 route des crêtes, 06560, Valbonne, France