1. Kuhn E, Saleem M, Klein T, Köhler C, Fuhr DC, Lahutina S, Minarik A, Musesengwa R, Neubauer K, Olisaeloka L, Osei F, Reinhold AS, Singh I, Spanhel K, Thomas N, Hendl T, Kellmeyer P, Böge K. Interdisciplinary perspectives on digital technologies for global mental health. PLOS Glob Public Health 2024; 4:e0002867. PMID: 38315676; PMCID: PMC10843075; DOI: 10.1371/journal.pgph.0002867.
Abstract
Digital Mental Health Technologies (DMHTs) have the potential to close treatment gaps in settings where mental healthcare is scarce or even inaccessible. For this, DMHTs need to be affordable, evidence-based, justice-oriented, user-friendly, and embedded in a functioning digital infrastructure. This viewpoint discusses areas crucial for the future development of DMHTs. Drawing on interdisciplinary scholarship, questions of health equity; consumer-, patient-, and developer-oriented legislation; and requirements for successful implementation of technologies across the globe are discussed. Economic considerations and policy implications complement these aspects. We discuss the need for cultural adaptation specific to the context of use and point to several benefits as well as pitfalls of DMHTs for research and healthcare provision. Nonetheless, to circumvent technology-driven solutionism, the development and implementation of DMHTs require a holistic, multi-sectoral, and participatory approach.
Affiliation(s)
- Eva Kuhn
- Department of Psychiatry and Neurosciences, Campus Benjamin Franklin, Charité –Universitätsmedizin Berlin, Berlin, Germany
- Maham Saleem
- Department of Prevention and Evaluation, Leibniz Institute of Prevention Research and Epidemiology-BIPS, Bremen, Germany
- Thomas Klein
- Department of Psychiatry and Psychotherapy II, Ulm University, Guenzburg, Germany
- Charlotte Köhler
- Department of Data Science & Decision Support, European University Viadrina, Frankfurt (Oder), Germany
- Daniela C. Fuhr
- Department of Prevention and Evaluation, Leibniz Institute of Prevention Research and Epidemiology-BIPS, Bremen, Germany
- Faculty of Public Health and Policy, Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, London, United Kingdom
- University of Bremen, Health Sciences, Bremen, Germany
- Sofiia Lahutina
- TUM Department of Sport and Health Sciences (TUM SG), Chronobiology and Health, Technical University of Munich, Munich, Germany
- TUM Institute for Advanced Study (TUM-IAS), Technical University of Munich, Garching, Germany
- Anna Minarik
- Department of Psychiatry and Neurosciences, Campus Benjamin Franklin, Charité – Universitätsmedizin Berlin, Berlin, Germany
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada
- Rosemary Musesengwa
- Department of Psychiatry and Wellcome Centre for Ethics and Humanities, University of Oxford, Oxford, United Kingdom
- Lotenna Olisaeloka
- Institute for Global Health, University College London, London, United Kingdom
- Francis Osei
- Department of Health and Physical Activity, Professorship for Medical Sociology and Psychobiology, University of Potsdam, Potsdam, Germany
- Annika Stefanie Reinhold
- Medical Faculty Mannheim, Department of Public Mental Health, Central Institute of Mental Health (CIMH), Heidelberg University, Mannheim, Germany
- Ilina Singh
- Department of Psychiatry and Wellcome Centre for Ethics and Humanities, University of Oxford, Oxford, United Kingdom
- Kerstin Spanhel
- Faculty of Medicine, Institute for Medical Psychology and Medical Sociology, University of Freiburg, Freiburg im Breisgau, Germany
- Neil Thomas
- Centre for Mental Health, Swinburne University of Technology, Hawthorn, Melbourne, Australia
- Tereza Hendl
- Faculty of Medicine, University of Augsburg, Augsburg, Germany
- Institute of Ethics, History and Theory of Medicine, Ludwig-Maximilians-University in Munich, Munich, Germany
- Philipp Kellmeyer
- Department of Neurosurgery, University of Freiburg—Medical Center, Freiburg im Breisgau, Germany
- School of Business Informatics and Mathematics, University of Mannheim, Mannheim, Germany
- Kerem Böge
- Department of Psychiatry and Neurosciences, Campus Benjamin Franklin, Charité – Universitätsmedizin Berlin, Berlin, Germany
2. Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, Matsui Y, Nozaki T, Nakaura T, Fujima N, Tatsugami F, Yanagawa M, Hirata K, Yamada A, Tsuboyama T, Kawamura M, Fujioka T, Naganawa S. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024; 42:3-15. PMID: 37540463; PMCID: PMC10764412; DOI: 10.1007/s11604-023-01474-3.
Abstract
In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.
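One of the mitigation strategies this review highlights, the algorithm audit, can be made concrete with a minimal sketch. The following is illustrative only and not code from the paper; the function name, toy data, and two-group assumption are invented for the example. It computes the demographic parity difference, a common audit statistic comparing positive-prediction rates across patient groups.

```python
# Illustrative "algorithm audit" sketch: demographic parity difference
# between exactly two groups (a hypothetical, simplified audit metric).
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates across two groups.

    preds  -- binary model outputs (1 = positive prediction)
    groups -- group label for each prediction (exactly two distinct labels)
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Toy audit: group "a" receives positive predictions 75% of the time,
# group "b" only 25% -- a gap an audit would flag for investigation.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A real audit would pair such statistics with clinical context, since a zero gap on one metric does not imply fairness on others.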
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan.
- Shohei Fujita
- Department of Radiology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyo-ku, Kyoto, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-ku, Okayama, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-ku, Kumamoto, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-ku, Hiroshima, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita-ku, Sapporo, Hokkaido, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
3. Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical Considerations of Using ChatGPT in Health Care. J Med Internet Res 2023; 25:e48009. PMID: 37566454; PMCID: PMC10457697; DOI: 10.2196/48009.
Abstract
ChatGPT has promising applications in health care, but potential ethical issues need to be addressed proactively to prevent harm. ChatGPT presents potential ethical challenges from legal, humanistic, algorithmic, and informational perspectives. Legal ethics concerns arise from the unclear allocation of responsibility when patient harm occurs and from potential breaches of patient privacy due to data collection. Clear rules and legal boundaries are needed to properly allocate liability and protect users. Humanistic ethics concerns arise from the potential disruption of the physician-patient relationship, humanistic care, and issues of integrity. Overreliance on artificial intelligence (AI) can undermine compassion and erode trust. Transparency and disclosure of AI-generated content are critical to maintaining integrity. Algorithmic ethics raises concerns about algorithmic bias, responsibility, transparency and explainability, as well as validation and evaluation. Information ethics concerns include data bias, validity, and effectiveness. Biased training data can lead to biased output, and overreliance on ChatGPT can reduce patient adherence and encourage self-diagnosis. Ensuring the accuracy, reliability, and validity of ChatGPT-generated content requires rigorous validation and ongoing updates based on clinical practice. To navigate the evolving ethical landscape of AI, AI in health care must adhere to the strictest ethical standards. Through comprehensive ethical guidelines, health care professionals can ensure the responsible use of ChatGPT, promote accurate and reliable information exchange, protect patient privacy, and empower patients to make informed decisions about their health care.
Affiliation(s)
- Changyu Wang
- Department of Medical Informatics, West China Medical School, Sichuan University, Chengdu, China
- West China College of Stomatology, Sichuan University, Chengdu, China
- Siru Liu
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States
- Hao Yang
- Information Center, West China Hospital, Sichuan University, Chengdu, China
- Jiulin Guo
- Information Center, West China Hospital, Sichuan University, Chengdu, China
- Yuxuan Wu
- Department of Medical Informatics, West China Medical School, Sichuan University, Chengdu, China
- Jialin Liu
- Department of Medical Informatics, West China Medical School, Sichuan University, Chengdu, China
- Information Center, West China Hospital, Sichuan University, Chengdu, China
- Department of Otolaryngology-Head and Neck Surgery, West China Hospital, Sichuan University, Chengdu, China
4. Curto G, Comim F. SAF: Stakeholders' Agreement on Fairness in the Practice of Machine Learning Development. Sci Eng Ethics 2023; 29:29. PMID: 37486434; PMCID: PMC10366323; DOI: 10.1007/s11948-023-00448-y.
Abstract
This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in fairness decision-making within ML design and to support ML development teams in identifying, mitigating, and monitoring bias at each step of ML systems development. The process also provides guidance on how to explain the always-imperfect bias trade-offs to users.
Affiliation(s)
- Flavio Comim
- IQS School of Management, Universitat Ramon Llull, Barcelona, Spain
5. Accountability in artificial intelligence: what it is and how it works. AI & Society 2023. DOI: 10.1007/s00146-023-01635-y.
Abstract
Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, and implications). We analyze this architecture through four accountability goals (compliance, report, oversight, and enforcement). We argue that these goals are often complementary and that policy-makers emphasize or prioritize some over others depending on the proactive or reactive use of accountability and the missions of AI governance.
6. Sajno E, Bartolotta S, Tuena C, Cipresso P, Pedroli E, Riva G. Machine learning in biosignals processing for mental health: A narrative review. Front Psychol 2023; 13:1066317. PMID: 36710855; PMCID: PMC9880193; DOI: 10.3389/fpsyg.2022.1066317.
Abstract
Machine Learning (ML) offers unique and powerful tools for mental health practitioners to improve evidence-based psychological interventions and diagnoses. Indeed, by detecting and analyzing different biosignals, it is possible to differentiate between typical and atypical functioning and to achieve a high level of personalization across all phases of mental health care. This narrative review is aimed at presenting a comprehensive overview of how ML algorithms can be used to infer psychological states from biosignals. After that, key examples of how they can be used in mental health clinical activity and research are illustrated. A description of the biosignals typically used to infer cognitive and emotional correlates (e.g., EEG and ECG) will be provided, alongside their application in Diagnostic Precision Medicine, Affective Computing, and brain-computer interfaces. The contents will then focus on challenges and research questions related to ML applied to mental health and biosignals analysis, pointing out the advantages and possible drawbacks connected to the widespread application of AI in the medical/mental health fields. The integration of mental health research and ML data science will facilitate the transition to personalized and effective medicine, and, to do so, it is important that researchers from psychological and medical disciplines, health care professionals, and data scientists all share a common background and vision of the current research.
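The pipeline this review describes, from raw biosignal to an inferred psychological state, can be sketched in miniature. The example below is illustrative only and not from the review: the 90 bpm threshold, the labels, and the rule-based classifier stand in for a trained ML model. It derives one classic ECG feature, mean heart rate from R-R intervals, and maps it to a coarse arousal label.

```python
# Illustrative biosignal-to-state sketch (hypothetical threshold and labels;
# a rule stands in for a trained classifier).
def mean_heart_rate(rr_intervals):
    """Mean heart rate in beats per minute from ECG R-R intervals (seconds)."""
    return 60.0 / (sum(rr_intervals) / len(rr_intervals))

def arousal_label(rr_intervals, threshold_bpm=90.0):
    """Crude rule-based stand-in for an ML classifier's decision."""
    return "high" if mean_heart_rate(rr_intervals) > threshold_bpm else "low"

# Toy recordings: R-R intervals of ~0.9 s (~68 bpm) vs ~0.57 s (~104 bpm).
calm     = [0.85, 0.90, 0.88, 0.92]
stressed = [0.55, 0.58, 0.60, 0.57]
```

In practice such features would feed a model trained and validated on labeled recordings; the point here is only the feature-extraction-then-inference shape of the pipeline.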
Affiliation(s)
- Elena Sajno
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy
- Department of Computer Science, University of Pisa, Pisa, Italy
- Sabrina Bartolotta
- ExperienceLab, Università Cattolica del Sacro Cuore, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Cosimo Tuena
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Pietro Cipresso
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Turin, Turin, Italy
- Elisa Pedroli
- Department of Psychology, eCampus University, Novedrate, Italy
- Giuseppe Riva
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
7. Sigfrids A, Leikas J, Salo-Pöntinen H, Koskimies E. Human-centricity in AI governance: A systemic approach. Front Artif Intell 2023; 6:976887. PMID: 36872934; PMCID: PMC9979257; DOI: 10.3389/frai.2023.976887.
Abstract
Human-centricity is considered a central aspect in the development and governance of artificial intelligence (AI). Various strategies and guidelines highlight the concept as a key goal. However, we argue that current uses of Human-Centered AI (HCAI) in policy documents and AI strategies risk downplaying promises of creating desirable, emancipatory technology that promotes human wellbeing and the common good. First, HCAI, as it appears in policy discourses, is the result of aiming to adapt the concept of human-centered design (HCD) to the public governance context of AI but without proper reflection on how it should be reformed to suit the new task environment. Second, the concept is mainly used in reference to realizing human and fundamental rights, which are necessary, but not sufficient, for technological emancipation. Third, the concept is used ambiguously in policy and strategy discourses, making it unclear how it should be operationalized in governance practices. This article explores means and approaches for using the HCAI approach for technological emancipation in the context of public AI governance. We propose that the potential for emancipatory technology development rests on expanding the traditional user-centered view of technology design to involve community- and society-centered perspectives in public governance. Developing public AI governance in this way relies on enabling inclusive governance modalities that enhance the social sustainability of AI deployment. We discuss mutual trust, transparency, communication, and civic tech as key prerequisites for socially sustainable and human-centered public AI governance. Finally, the article introduces a systemic approach to ethically and socially sustainable, human-centered AI development and deployment.
Affiliation(s)
- Anton Sigfrids
- VTT Technical Research Centre of Finland Ltd, Espoo, Finland
- Jaana Leikas
- VTT Technical Research Centre of Finland Ltd, Espoo, Finland
- Henrikki Salo-Pöntinen
- Faculty of Information Technology, Cognitive Science, University of Jyväskylä, Jyväskylä, Finland
- Emmi Koskimies
- Faculty of Management and Business, Administrative Sciences, Tampere University, Tampere, Finland
8. Blanchard A, Taddeo M. The Ethics of Artificial Intelligence for Intelligence Analysis: a Review of the Key Challenges with Recommendations. Digital Society 2023; 2:12. PMID: 37034181; PMCID: PMC10073779; DOI: 10.1007/s44206-023-00036-4.
Abstract
Intelligence agencies have identified artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop, acquire, and employ AI capabilities for purposes of national security are growing. This article reviews the ethical challenges presented by the use of AI for augmented intelligence analysis. These challenges have been identified through a qualitative systematic review of the relevant literature. The article identifies five sets of ethical challenges relating to intrusion, explainability and accountability, bias, authoritarianism and political security, and collaboration and classification, and offers a series of recommendations targeted at intelligence agencies to address and mitigate these challenges.
Affiliation(s)
- Mariarosaria Taddeo
- The Alan Turing Institute, London, UK
- Oxford Internet Institute, University of Oxford, Oxford, UK
9. Samhammer D, Roller R, Hummel P, Osmanodja B, Burchardt A, Mayrdorfer M, Duettmann W, Dabrock P. "Nothing works without the doctor": Physicians' perception of clinical decision-making and artificial intelligence. Front Med (Lausanne) 2022; 9:1016366. PMID: 36606050; PMCID: PMC9807757; DOI: 10.3389/fmed.2022.1016366.
Abstract
Introduction: Artificial intelligence-driven decision support systems (AI-DSS) have the potential to help physicians analyze data and facilitate the search for a correct diagnosis or suitable intervention. The potential of such systems is often emphasized. However, implementation in clinical practice deserves continuous attention. This article aims to shed light on the needs and challenges arising from the use of AI-DSS from physicians' perspectives.
Methods: The basis for this study is a qualitative content analysis of expert interviews with experienced nephrologists after testing an AI-DSS in a straightforward usage scenario.
Results: The results provide insights on the basics of clinical decision-making, expected challenges when using AI-DSS, as well as a reflection on the test run.
Discussion: While we can confirm the somewhat expectable demand for better explainability and control, other insights highlight the need to uphold classical strengths of the medical profession when using AI-DSS as well as the importance of broadening the view of AI-related challenges to the clinical environment, especially during treatment. Our results stress the necessity for adjusting AI-DSS to shared decision-making. We conclude that explainability must be context-specific while fostering meaningful interaction with the systems available.
Affiliation(s)
- David Samhammer
- Institute for Systematic Theology II (Ethics), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Roland Roller
- German Research Center for Artificial Intelligence (DFKI), Berlin, Germany
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Patrik Hummel
- Department of Industrial Engineering and Innovation Sciences, Philosophy and Ethics Group, TU Eindhoven, Eindhoven, Netherlands
- Bilgin Osmanodja
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Aljoscha Burchardt
- German Research Center for Artificial Intelligence (DFKI), Berlin, Germany
- Manuel Mayrdorfer
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Division of Nephrology and Dialysis, Department of Internal Medicine III, Medical University of Vienna, Vienna, Austria
- Wiebke Duettmann
- Department of Nephrology and Medical Intensive Care, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Peter Dabrock
- Institute for Systematic Theology II (Ethics), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
10. The limitation of ethics-based approaches to regulating artificial intelligence: regulatory gifting in the context of Russia. AI & Society 2022. DOI: 10.1007/s00146-022-01611-y.
11. Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. PMID: 36553119; PMCID: PMC9777253; DOI: 10.3390/diagnostics12123111.
Abstract
Artificial intelligence (AI), a rousing advancement disrupting a wide spectrum of applications with remarkable betterment, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning, honed with unlimited cross-data/case referencing, has found great utility encompassing four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Over the years, breast cancer has been the apex of the cancer cumulative risk ranking for women across the six continents, existing in variegated forms and offering a complicated context in medical decisions. Realizing the ever-increasing demand for quality healthcare, contemporary AI has been envisioned to make great strides in clinical data management and perception, with the capability to detect indeterminate significance, predict prognostication, and correlate available data into a meaningful clinical endpoint. Here, the authors captured the review works over the past decades, focusing on AI in breast imaging, and systematized the included works into one usable document, which is termed an umbrella review. The present study aims to provide a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. This study aims to synthesize, collate, and correlate the included review works, thereby identifying the patterns, trends, quality, and types of the included works, captured by the structured search strategy. The present study is intended to serve as a "one-stop center" synthesis and provide a holistic bird's eye view to readers, ranging from newcomers to existing researchers and relevant stakeholders, on the topic of interest.
12. Howard J. Algorithms and the future of work. Am J Ind Med 2022; 65:943-952. PMID: 36128686; DOI: 10.1002/ajim.23429.
Abstract
An algorithm refers to a series of stepwise instructions used by a machine to perform a mathematical operation. In 1955, the term artificial intelligence (AI) was coined to indicate that a machine could be programmed to duplicate human intelligence. Even though that goal has not yet been reached, the use of sophisticated machine learning algorithms has moved us closer to that goal. While algorithm-enabled systems and devices will bring many benefits to occupational safety and health, this Commentary focuses on new sources of worker risk that algorithms present in the use of worker management systems, advanced sensor technologies, and robotic devices. A new "digital Taylorism" may erode worker autonomy, and lead to work intensification and psychosocial stress. The presence of large amounts of information on workers within algorithmic-enabled systems presents security and privacy risks. Reliance on indiscriminate data mining may reproduce forms of discrimination and lead to inequalities in hiring, retention, and termination. Workers interfacing with robots may face work intensification and job displacement, while injury in the course of employment by a robotic device is also possible. Algorithm governance strategies are discussed such as risk management practices, national and international laws and regulations, and emerging legal accountability proposals. Determining if an algorithm is safe for workplace use is rapidly becoming a challenge for manufacturers, programmers, employers, workers, and occupational safety and health practitioners. To achieve the benefits that algorithm-enabled systems and devices promise in the future of work, now is the time to study how to effectively manage their risks.
Affiliation(s)
- John Howard
- Office of the Director, National Institute for Occupational Safety and Health, Washington, District of Columbia, USA
13. Roberts H, Zhang J, Bariach B, Cowls J, Gilburt B, Juneja P, Tsamados A, Ziosi M, Taddeo M, Floridi L. Artificial intelligence in support of the circular economy: ethical considerations and a path forward. AI & Society 2022. DOI: 10.1007/s00146-022-01596-8.
Abstract
The world's current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. However, to date, considerations of the ethical implications of using AI to achieve a transition to CE have been limited. This article addresses this gap. It outlines how AI is and can be used to transition towards CE, analyzes the ethical risks associated with using AI for this purpose, and supports some recommendations to policymakers and industry on how to minimise these risks.
14. Mökander J, Sheth M, Gersbro-Sundler M, Blomgren P, Floridi L. Challenges and best practices in corporate AI governance: Lessons from the biopharmaceutical industry. Front Comput Sci 2022. DOI: 10.3389/fcomp.2022.1068361.
Abstract
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
|
15
|
Stypinska J. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. AI & SOCIETY 2022; 38:665-677. [PMID: 36212226 PMCID: PMC9527733 DOI: 10.1007/s00146-022-01553-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2021] [Accepted: 06/28/2022] [Indexed: 11/29/2022]
Abstract
In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate on AI for social good, has paid insufficient attention to the category of age and to older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism is presented to make a theoretical contribution to how the understanding of inclusion and exclusion within the field of AI can be expanded to include the category of age. AI ageism can be defined as practices and ideologies operating within the field of AI which exclude, discriminate against, or neglect the interests, experiences, and needs of the older population. It can be manifested in five interconnected forms: (1) age biases in algorithms and datasets (technical level); (2) age stereotypes, prejudices, and ideologies of actors in AI (individual level); (3) invisibility of old age in discourses on AI (discourse level); (4) discriminatory effects of the use of AI technology on different age groups (group level); and (5) exclusion of older people as users of AI technology, services, and products (user level). Additionally, the paper provides empirical illustrations of the way ageism operates in each of these five forms.
Affiliation(s)
- Justyna Stypinska
- Freie Universität Berlin, Berlin, Germany
- European New School of Digital Studies, European University Viadrina, Frankfurt (Oder), Germany
|
16
|
Angelucci A, Li Z, Stoimenova N, Canali S. The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables. AI & SOCIETY 2022:1-11. [PMID: 36185063 PMCID: PMC9511446 DOI: 10.1007/s00146-022-01562-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 09/12/2022] [Indexed: 11/23/2022]
Abstract
Artificial intelligence (AI) systems have been widely applied in various contexts, including high-stakes decision processes in healthcare, banking, and judicial systems. Some developed AI models fail to offer a fair output for specific minority groups, sparking comprehensive discussions about AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation one stakeholder has within the AI system's life cycle, the more influence they have over the way the system will function. This means that the impact on the fairness of the system is in the hands of those who are least impacted by it. However, most existing work ignores how different aspects of AI fairness are dynamically and adaptively affected by different stages of AI system development. To this end, we present a use case discussing fairness in the development of corporate wellness programs that use smart wearables and AI algorithms to analyze data. We identify the four key stakeholders throughout this type of AI system development process: the service designer, the algorithm designer, the system deployer, and the end-user. We then identify three core aspects of AI fairness, namely contextual fairness, model fairness, and device fairness, and propose the relative contribution of the four stakeholders to these three aspects. Finally, we propose the boundaries and interactions between the four roles, from which we draw our conclusions about possible unfairness in such an AI development process.
Affiliation(s)
- Alessandra Angelucci
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Ziyue Li
- The Cologne Institute of Information Systems, Faculty of Management, Economics and Social Sciences, University of Cologne, Cologne, Germany
- Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Hong Kong, China
- Niya Stoimenova
- Department of Industrial Design, Delft University of Technology, Delft, Netherlands
- Stefano Canali
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- META—Social Sciences and Humanities for Science and Technology, Politecnico di Milano, Milan, Italy
|
17
|
Delgado J, de Manuel A, Parra I, Moyano C, Rueda J, Guersenzvaig A, Ausin T, Cruz M, Casacuberta D, Puyol A. Bias in algorithms of AI systems developed for COVID-19: A scoping review. JOURNAL OF BIOETHICAL INQUIRY 2022; 19:407-419. [PMID: 35857214 PMCID: PMC9463236 DOI: 10.1007/s11673-022-10200-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Accepted: 05/06/2022] [Indexed: 06/15/2023]
Abstract
We analyzed which ethically relevant biases have been identified in the academic literature in artificial intelligence (AI) algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, we specifically investigated whether the role of social determinants of health (SDOH) has been considered in these AI developments. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact tracing and for medical triage or risk prediction regarding COVID-19 were included. From 1054 identified articles, 20 studies were finally included. We propose a typology of the biases identified in the literature, based on bias, limitations, and other ethical issues, in both areas of analysis. Results on health disparities and SDOH were classified into five categories: racial disparities, biased data, socio-economic disparities, unequal accessibility and workforce, and information communication. SDOH need to be considered in the clinical context, where they still seem to be underestimated. Epidemiological conditions depend on geographic location, so the use of local data in studies aiming to develop international solutions may increase some biases. Gender bias was not specifically addressed in the articles included. The main biases are related to data collection and management. Ethical problems related to privacy, consent, and lack of regulation have been identified in contact tracing, while some bias-related health inequalities have been highlighted. There is a need for further research focusing on SDOH and these specific AI applications.
Affiliation(s)
- Janet Delgado
- Department of Philosophy 1, Faculty of Philosophy, University of Granada, Granada, Spain
- Alicia de Manuel
- Department of Philosophy, Universitat Autònoma de Barcelona, Barcelona, Spain
- Iris Parra
- Department of Philosophy, Universitat Autònoma de Barcelona, Barcelona, Spain
- Cristian Moyano
- Department of Philosophy, Universitat Autònoma de Barcelona, Barcelona, Spain
- Jon Rueda
- FiloLab Scientific Unit of Excellence of the University of Granada, Granada, Spain
- Txetxu Ausin
- Institute for Philosophy of the Spanish National Research Council (CSIC), Madrid, Spain
- Maite Cruz
- Andalusian School of Public Health (EASP), Granada, Spain
- David Casacuberta
- Department of Philosophy, Universitat Autònoma de Barcelona, Barcelona, Spain
- Angel Puyol
- Department of Philosophy, Universitat Autònoma de Barcelona, Barcelona, Spain
|
18
|
Mökander J, Juneja P, Watson DS, Floridi L. The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: what can they learn from each other? Minds Mach (Dordr) 2022. [DOI: 10.1007/s11023-022-09612-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
On the whole, the US Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).
|
19
|
Curto G, Jojoa Acosta MF, Comim F, Garcia-Zapirain B. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings. AI & SOCIETY 2022:1-16. [PMID: 35789618 PMCID: PMC9243923 DOI: 10.1007/s00146-022-01494-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 04/28/2022] [Indexed: 12/04/2022]
Abstract
Among the myriad technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is no generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular bias against the poor, providing a conceptual framework of the issue and a tailor-made model from which meaningful data are obtained using Natural Language Processing word vectors in pretrained Google Word2Vec, Twitter GloVe, and Wikipedia GloVe word embeddings. The results of the study offer the first set of data that evidences the existence of bias against the poor and suggest that Google Word2Vec shows a higher degree of bias when the terms are related to beliefs, whereas bias is higher in Twitter GloVe when the terms express behaviour. This article contributes to the body of work on bias, both from an AI and a social sciences perspective, by providing evidence of a transversal aggravating factor for historical types of discrimination. The evidence of bias against the poor also has important consequences in terms of human development, since such bias often leads to discrimination, which constitutes an obstacle to the effectiveness of poverty reduction policies.
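The kind of measurement this study reports — checking whether poverty-related terms sit closer to negative attribute terms in a vector space than wealth-related terms do — can be sketched with cosine similarity. The vectors below are illustrative toy values, not the pretrained Google Word2Vec or GloVe embeddings the authors actually used:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d vectors standing in for pretrained word embeddings.
vec = {
    "poor":      [0.9, 0.1, 0.0],
    "rich":      [0.1, 0.9, 0.0],
    "dishonest": [0.8, 0.2, 0.1],  # a negative attribute term
}

# A simple bias gap: if "poor" is closer to the negative attribute than
# "rich" is, the gap is positive, suggesting bias against the poor.
gap = cosine(vec["poor"], vec["dishonest"]) - cosine(vec["rich"], vec["dishonest"])
print(f"bias gap: {gap:.3f}")
```

With real embeddings one would average such gaps over curated lists of target and attribute terms, in the style of word-embedding association tests.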
Affiliation(s)
- Georgina Curto
- Universitat Ramon Llull, IQS School of Management, Barcelona, Spain
- Universitat Autònoma de Barcelona, EINA Centre Universitari de Disseny i Art, Barcelona, Spain
- Flavio Comim
- Universitat Ramon Llull, IQS School of Management, Barcelona, Spain
|
20
|
Deranty JP, Corbin T. Artificial intelligence and work: a critical review of recent research from the social sciences. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01496-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
This review seeks to present a comprehensive picture of recent discussions in the social sciences of the anticipated impact of AI on the world of work. Issues covered include technological unemployment, algorithmic management, platform work, and the politics of AI work. The review identifies the major disciplinary and methodological perspectives on AI’s impact on work, and the obstacles they face in making predictions. Two parameters influencing the development and deployment of AI in the economy are highlighted: the capitalist imperative and nationalistic pressures.
|
21
|
Abstract
In recent years, machine learning, especially deep learning, has developed rapidly and has shown remarkable performance in many tasks in the smart grid field. The representation ability of machine learning algorithms has greatly improved, but as model complexity increases, their interpretability worsens. The smart grid is a critical infrastructure area, so machine learning models applied to it must be interpretable in order to increase user trust and improve system reliability. Unfortunately, the black-box nature of most machine learning models remains unresolved, and many decisions of intelligent systems still lack explanation. In this paper, we elaborate on the definition, motivations, properties, and classification of interpretability. In addition, we review the relevant literature addressing interpretability for smart grid applications. Finally, we discuss future research directions for interpretable machine learning in the smart grid.
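One widely used model-agnostic interpretability technique of the kind such surveys cover is permutation feature importance: shuffle one input feature and measure how much the model's error grows. A stdlib-only sketch on a toy load-forecasting function (the model, feature names, and coefficients are illustrative, not taken from the cited paper):

```python
import random

def predict(temperature, hour):
    """Toy load model: demand depends strongly on temperature, weakly on hour."""
    return 2.0 * temperature + 0.1 * hour

random.seed(0)
temps = [random.uniform(10, 30) for _ in range(200)]
hours = [random.uniform(0, 23) for _ in range(200)]
loads = [predict(t, h) for t, h in zip(temps, hours)]  # ground-truth targets

def mse(ts, hs):
    return sum((predict(t, h) - y) ** 2 for t, h, y in zip(ts, hs, loads)) / len(loads)

def permutation_importance(shuffle_temps):
    """Error increase after breaking one feature's link to the target."""
    ts, hs = list(temps), list(hours)
    random.shuffle(ts if shuffle_temps else hs)
    return mse(ts, hs)  # baseline error is 0 here, so this equals the increase

print("temperature importance:", round(permutation_importance(True), 2))
print("hour importance:       ", round(permutation_importance(False), 2))
```

The feature whose shuffling hurts the model most is the one the model relies on, which is exactly the kind of explanation a grid operator can sanity-check against domain knowledge.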
|
22
|
|
23
|
Applying AI for social good: Aligning academic journal ratings with the United Nations Sustainable Development Goals (SDGs). AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01459-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
24
|
Tutun S, Johnson ME, Ahmed A, Albizri A, Irgil S, Yesilkaya I, Ucar EN, Sengun T, Harfouche A. An AI-based Decision Support System for Predicting Mental Health Disorders. INFORMATION SYSTEMS FRONTIERS : A JOURNAL OF RESEARCH AND INNOVATION 2022; 25:1261-1276. [PMID: 35669335 PMCID: PMC9142346 DOI: 10.1007/s10796-022-10282-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 04/21/2022] [Indexed: 05/27/2023]
Abstract
Approximately one billion individuals suffer from mental health disorders, such as depression, bipolar disorder, schizophrenia, and anxiety. Mental health professionals use various assessment tools to detect and diagnose these disorders. However, these tools are complex, contain an excessive number of questions, and require a significant amount of time to administer, leading to low participation and completion rates. Additionally, the results obtained from these tools must be analyzed and interpreted manually by mental health professionals, which may yield inaccurate diagnoses. To this end, this research utilizes advanced analytics and artificial intelligence to develop a decision support system (DSS) that can efficiently detect and diagnose various mental disorders. As part of the DSS development process, the Network Pattern Recognition (NEPAR) algorithm is first utilized to build the assessment tool and identify the questions that participants need to answer. Then, various machine learning models are trained using participants' answers to these questions and other historical data as inputs to predict the existence and type of their mental disorder. The results show that the proposed DSS can automatically diagnose mental disorders using only 28 questions without any human input, at an accuracy level of 89%. Furthermore, the proposed mental disorder diagnostic tool has significantly fewer questions than its counterparts; hence, it provides higher participation and completion rates. Therefore, mental health professionals can use the proposed DSS and its accompanying assessment tool for improved clinical decision-making and diagnostic accuracy.
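The pipeline this abstract describes — learning from participants' answers to a reduced question set in order to predict a diagnostic label — can be sketched with a minimal nearest-centroid classifier. The data, labels, and four-question format below are synthetic placeholders; the study's NEPAR-based tool and models are far more elaborate:

```python
# Each row: answers to a short screening questionnaire (0-3 scores) and a label.
train = [
    ([3, 3, 2, 3], "disorder"),
    ([2, 3, 3, 3], "disorder"),
    ([0, 1, 0, 0], "none"),
    ([1, 0, 0, 1], "none"),
]

def centroid(rows):
    """Component-wise mean of a list of answer vectors."""
    return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]

# One centroid per diagnostic label.
centroids = {
    label: centroid([ans for ans, lbl in train if lbl == label])
    for label in {lbl for _, lbl in train}
}

def classify(answers):
    """Predict the label whose centroid is nearest in squared distance."""
    return min(centroids, key=lambda lbl: sum(
        (a - c) ** 2 for a, c in zip(answers, centroids[lbl])))

print(classify([3, 2, 3, 3]))  # → disorder
print(classify([0, 0, 1, 0]))  # → none
```

In a real DSS the answer vectors would be far longer, the labels clinical diagnoses, and the classifier validated against professional assessments before any deployment.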
Affiliation(s)
- Salih Tutun
- Washington University in St. Louis, St. Louis, MO, USA
- Sedat Irgil
- Guven Private Health Laboratory, Guven, Turkey
|
25
|
Giovanola B, Tiribelli S. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI & SOCIETY 2022; 38:549-563. [PMID: 35615443 PMCID: PMC9123626 DOI: 10.1007/s00146-022-01455-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Accepted: 04/13/2022] [Indexed: 01/09/2023]
Abstract
The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent issue. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently explored. Our paper aims to fill this gap and address the AI ethics principle of fairness from a conceptual standpoint, drawing insights from accounts of fairness elaborated in moral philosophy and using them to conceptualise fairness as an ethical value and to redefine fairness in HMLA accordingly. To achieve our goal, following a first section aimed at clarifying the background, methodology and structure of the paper, in the second section, we provide an overview of the discussion of the AI ethics principle of fairness in HMLA and show that the concept of fairness underlying this debate is framed in purely distributive terms and overlaps with non-discrimination, which is defined in turn as the absence of biases. After showing that this framing is inadequate, in the third section, we pursue an ethical inquiry into the concept of fairness and argue that fairness ought to be conceived of as an ethical value. Following a clarification of the relationship between fairness and non-discrimination, we show that the two do not overlap and that fairness requires much more than just non-discrimination. Moreover, we highlight that fairness not only has a distributive but also a socio-relational dimension. Finally, we pinpoint the constitutive components of fairness. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for individual persons.
In the fourth section, we analyse the implications of our conceptual redefinition of fairness as an ethical value in the discussion of fairness in HMLA. Here, we claim that fairness requires more than non-discrimination and the absence of biases as well as more than just distribution; it needs to ensure that HMLA respects persons both as persons and as particular individuals. Finally, in the fifth section, we sketch some broader implications and show how our inquiry can contribute to making HMLA and, more generally, AI promote the social good and a fairer society.
Affiliation(s)
- Benedetta Giovanola
- Department of Political Sciences, Communication, and International Relations, University of Macerata, Macerata, 62100 Italy
- Department of Philosophy, Tufts University, 222 Miner Hall, Medford, MA 02155 USA
- Simona Tiribelli
- Department of Political Sciences, Communication, and International Relations, University of Macerata, Macerata, 62100 Italy
- Present Address: Institute for Technology and Global Health, PathCheck Foundation, 955 Massachusetts Ave, Cambridge, MA 02139 USA
|
26
|
Watch out! Cities as data engines. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01448-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
27
|
Mindful Application of Digitalization for Sustainable Development: The Digitainability Assessment Framework. SUSTAINABILITY 2022. [DOI: 10.3390/su14053114] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
Digitalization is widely recognized as a transformative power for sustainable development. Careful alignment of the progress made by digitalization with the globally acknowledged Sustainable Development Goals (SDGs) is crucial for inclusive and holistic sustainable development in the digital era. However, limited reference has been made in the SDGs to harnessing the opportunities offered by digitalization capabilities. Moreover, research on the inhibiting or enabling effects of digitalization, considering its multi-faceted interlinkages with the SDGs and their targets, is fragmented. There are only limited instances in the literature examining and categorizing the impact of digitalization on sustainable development. To overcome this gap, this paper introduces a new Digitainability Assessment Framework (DAF) for context-aware practical assessment of the impact of a digitalization intervention on the SDGs. The DAF facilitates in-depth assessment of the many diverse technical, social, ethical, and environmental aspects of a digital intervention by systematically examining its impact on the SDG indicators. Our approach draws on and adapts concepts of the Theory of Change (ToC). The DAF should support developers, users, and policymakers by providing a 360-degree perspective on the impact of digital services or products, as well as providing hints for their possible improvement. We demonstrate the application of the DAF with three test case studies, illustrating how it supports a holistic view of the relation between digitalization and the SDGs.
|
28
|
Morley J, Murphy L, Mishra A, Joshi I, Karpathakis K. Governing Data and Artificial Intelligence for Health Care: Developing an International Understanding. JMIR Form Res 2022; 6:e31623. [PMID: 35099403 PMCID: PMC8844981 DOI: 10.2196/31623] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 10/12/2021] [Accepted: 10/13/2021] [Indexed: 01/04/2023] Open
Abstract
Background: Although advanced analytical techniques falling under the umbrella heading of artificial intelligence (AI) may improve health care, the use of AI in health raises safety and ethical concerns. There are currently no internationally recognized governance mechanisms (policies, ethical standards, evaluation, and regulation) for developing and using AI technologies in health care. A lack of international consensus creates technical and social barriers to the use of health AI while potentially hampering market competition. Objective: The aim of this study is to review current health data and AI governance mechanisms being developed or used by Global Digital Health Partnership (GDHP) member countries that commissioned this research, identify commonalities and gaps in approaches, identify examples of best practices, and understand the rationale for policies. Methods: Data were collected through a scoping review of academic literature and a thematic analysis of policy documents published by selected GDHP member countries. The findings from this data collection and the literature were used to inform semistructured interviews with key senior policy makers from GDHP member countries exploring their countries' experience of AI-driven technologies in health care and associated governance, and to inform a focus group with professionals working in international health and technology to discuss the themes and proposed policy recommendations. Policy recommendations were developed based on the aggregated research findings. Results: As this is an empirical research paper, we primarily focused on reporting the results of the interviews and the focus group. Semistructured interviews (n=10) and a focus group (n=6) revealed 4 core areas for international collaboration: leadership and oversight; a whole-systems approach covering the entire AI pipeline from data collection to model deployment and use; standards and regulatory processes; and engagement with stakeholders and the public. There was a broad range of maturity in health AI activity among the participants, with varying data infrastructure, application of standards across the AI life cycle, and strategic approaches to both development and deployment. A demand for further consistency at the international level and supporting policies was identified to sustain a robust innovation pipeline. In total, 13 policy recommendations were developed to support GDHP member countries in overcoming core AI governance barriers and establishing common ground for international collaboration. Conclusions: AI-driven technology research and development for health care outpaces the creation of supporting AI governance globally. International collaboration and coordination on AI governance for health care are needed to ensure coherent solutions and to allow countries to support and benefit from each other's work. International bodies and initiatives have a leading role to play in the international conversation, including the production of tools and the sharing of practical approaches to the use of AI-driven technologies for health care.
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Abhishek Mishra
- Uehiro Centre for Practical Ethics, University of Oxford, Oxford, United Kingdom
- Kassandra Karpathakis
- Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, United States
|
29
|
van Nood R, Yeomans C. Fairness as Equal Concession: Critical Remarks on Fair AI. SCIENCE AND ENGINEERING ETHICS 2021; 27:73. [PMID: 34807336 DOI: 10.1007/s11948-021-00348-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Accepted: 10/21/2021] [Indexed: 06/13/2023]
Abstract
Although existing work draws attention to a range of obstacles to realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of 'treat like cases alike' and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) it must provide a meta-theory for understanding tradeoffs, entailing that it must be flexible enough to capture diverse species of objection to decisions; (2) it must not appeal to an impartial perspective (neutral data, objective data, or a final arbiter); and (3) it must foreground the way in which judgments of fairness are sensitive to context, i.e., to historical and institutional states of affairs. We argue that a conception of fairness as appropriate concession in the historical iteration of institutional decisions meets these three desiderata. On the basis of this definition, we organize the insights of commentators into a process-structure map of the ethical territory that we hope will bring clarity to computer scientists and ethicists analyzing fair AI while clearing some ground for further technical and philosophical work.
Affiliation(s)
- Ryan van Nood
- Department of Philosophy, Purdue University, 100 N. University Street, West Lafayette, IN, 47907, USA
- Christopher Yeomans
- Department of Philosophy, Purdue University, 100 N. University Street, West Lafayette, IN, 47907, USA
|
30
|
Roberts H, Cowls J, Hine E, Mazzi F, Tsamados A, Taddeo M, Floridi L. Achieving a 'Good AI Society': Comparing the Aims and Progress of the EU and the US. SCIENCE AND ENGINEERING ETHICS 2021; 27:68. [PMID: 34767085 PMCID: PMC8587491 DOI: 10.1007/s11948-021-00340-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Accepted: 09/10/2021] [Indexed: 06/13/2023]
Abstract
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States' (US) AI strategies and considers (i) the visions of a 'Good AI Society' that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a 'Good AI Society' have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.
Affiliation(s)
- Huw Roberts
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Josh Cowls
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
- Emmie Hine
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Francesca Mazzi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Saïd Business School, University of Oxford, Park End St, Oxford, OX1 1HP, UK
- Andreas Tsamados
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
|
31
|
Speeding up to keep up: exploring the use of AI in the research process. AI & SOCIETY 2021; 37:1439-1457. [PMID: 34667374 PMCID: PMC8516568 DOI: 10.1007/s00146-021-01259-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 08/10/2021] [Indexed: 11/26/2022]
Abstract
The science of intelligent machines has a long history, and its potential to provide scientific insights has been debated since the dawn of AI. In particular, there is renewed interest in the role of AI in research and research policy as an enabler of new methods, processes, management and evaluation, a role that remains relatively under-explored. This empirical paper applies deductive, thematic analysis to interviews with leading scholars on the potential impact of AI on research practice and culture, showing the issues affecting academics and universities today. Our interviewees identify positive and negative consequences for research and researchers with respect to collective and individual use. AI is perceived as helpful for information gathering and other narrow tasks, and in support of impact and interdisciplinarity. However, using AI as a way of 'speeding up to keep up' with bureaucratic and metricised processes may amplify negative aspects of academic culture; the expansion of AI in research should assist, not replace, human creativity. Research into the future role of AI in the research process needs to go further to address these challenges and to ask fundamental questions about how AI might provide new tools able to question the values and principles driving institutions and research processes. We argue that, to do this, an explicit movement of meta-research on the role of AI in research should consider its effects on research and researcher creativity. Anticipatory approaches and the engagement of diverse and critical voices at policy level and across disciplines should also be considered.
Collapse
|
32
|
Andreotta AJ, Kirkham N, Rizzi M. AI, big data, and the future of consent. AI & SOCIETY 2021; 37:1715-1728. [PMID: 34483498 PMCID: PMC8404542 DOI: 10.1007/s00146-021-01262-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 08/18/2021] [Indexed: 11/25/2022]
Abstract
In this paper, we discuss several problems with current Big Data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices more broadly) and compare this with current Big Data practices. We do so by first discussing three types of problems that can impede informed consent with respect to Big Data use: first, the transparency (or explanation) problem; second, the re-repurposed data problem; and third, the meaningful alternatives problem. In the final section of the paper, we suggest some solutions to these problems. In particular, we propose that the use of personal data for commercial and administrative objectives could be subject to a 'soft governance' ethical regulation, akin to the way that all projects involving human participants (e.g., social science projects, human medical data and tissue use) are regulated in Australia through the Human Research Ethics Committees (HRECs). We also consider alternatives to the standard consent forms and privacy policies that could make use of some of the latest research focussed on the usability of pictorial legal contracts.
Collapse
Affiliation(s)
- Adam J Andreotta
- School of Management, Curtin University, Kent St, Bentley, WA 6102 Australia
| | - Nin Kirkham
- Department of Philosophy, The University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 Australia
| | - Marco Rizzi
- UWA Law School, The University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 Australia
| |
Collapse
|
33
|
Deep Automation Bias: How to Tackle a Wicked Problem of AI? BIG DATA AND COGNITIVE COMPUTING 2021. [DOI: 10.3390/bdcc5020018] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
The increasing use of AI in different societal contexts has intensified the debate on risks, ethical problems and bias. Accordingly, promising research activities focus on debiasing to strengthen fairness, accountability and transparency in machine learning. There is, though, a tendency to fix societal and ethical issues with technical solutions that may cause additional, wicked problems. Alternative analytical approaches are thus needed to avoid this and to comprehend how societal and ethical issues occur in AI systems. Whatever form bias takes, risks ultimately result from conflicts between AI system behavior, shaped by feature complexity, and user practices that leave limited options for scrutiny. Hence, although different forms of bias can occur, automation is their common ground. The paper highlights the role of automation and explains why deep automation bias (DAB) is a metarisk of AI. Building on prior work, it elaborates the main influencing factors and develops a heuristic model for assessing DAB-related risks in AI systems. This model aims to raise problem awareness and support training on the sociotechnical risks resulting from AI-based automation, and it contributes to improving the general explicability of AI systems beyond technical issues.
Collapse
|