1
Bibri SE, Krogstie J, Kaboli A, Alahi A. Smarter eco-cities and their leading-edge artificial intelligence of things solutions for environmental sustainability: A comprehensive systematic review. Environmental Science and Ecotechnology 2024; 19:100330. PMID: 38021367; PMCID: PMC10656232; DOI: 10.1016/j.ese.2023.100330.
Abstract
Recent advancements in Artificial Intelligence (AI) and the Artificial Intelligence of Things (AIoT) have unveiled transformative opportunities to enhance and optimize the environmental performance and efficiency of smart cities. These strides have, in turn, impacted smart eco-cities, catalyzing ongoing improvements and driving solutions to complex environmental challenges. This aligns with the visionary concept of smarter eco-cities, an emerging paradigm of urbanism characterized by the seamless integration of advanced technologies and environmental strategies. However, a significant gap remains in understanding this new paradigm and its multifaceted underlying dimensions. To bridge this gap, this study provides a comprehensive systematic review of the burgeoning landscape of smarter eco-cities and their leading-edge AI and AIoT solutions for environmental sustainability. To ensure thoroughness, the study employs a unified evidence synthesis framework integrating aggregative, configurative, and narrative synthesis approaches. The study is guided by the following research questions: What are the foundational underpinnings of emerging smarter eco-cities, and how do they interrelate, particularly urbanism paradigms, environmental solutions, and data-driven technologies? What are the key drivers and enablers propelling the materialization of smarter eco-cities? What are the primary AI and AIoT solutions that can be harnessed in the development of smarter eco-cities? In what ways do AI and AIoT technologies contribute to fostering environmental sustainability practices, and what potential benefits and opportunities do they offer for smarter eco-cities? What challenges and barriers arise in the implementation of AI and AIoT solutions for the development of smarter eco-cities?
The findings deepen and broaden our understanding of both the significant potential of AI and AIoT technologies to enhance sustainable urban development practices and the formidable challenges they pose. Beyond theoretical enrichment, these findings offer insights and new perspectives to empower policymakers, practitioners, and researchers to advance the integration of eco-urbanism and AI- and AIoT-driven urbanism. Through an exploration of the contemporary urban landscape and the identification of successfully applied AI and AIoT solutions, stakeholders gain the groundwork for making well-informed decisions, implementing effective strategies, and designing policies that prioritize environmental well-being.
Affiliation(s)
- Simon Elias Bibri
- School of Architecture, Civil and Environmental Engineering (ENAC), Civil Engineering Institute (IIC), Visual Intelligence for Transportation (VITA), Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- John Krogstie
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Amin Kaboli
- School of Engineering, Institute of Mechanical Engineering, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Alexandre Alahi
- School of Architecture, Civil and Environmental Engineering (ENAC), Civil Engineering Institute (IIC), Visual Intelligence for Transportation (VITA), Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
2
Arora A, Alderman JE, Palmer J, Ganapathi S, Laws E, McCradden MD, Oakden-Rayner L, Pfohl SR, Ghassemi M, McKay F, Treanor D, Rostamzadeh N, Mateen B, Gath J, Adebajo AO, Kuku S, Matin R, Heller K, Sapey E, Sebire NJ, Cole-Lewis H, Calvert M, Denniston A, Liu X. The value of standards for health datasets in artificial intelligence-based applications. Nat Med 2023; 29:2929-2938. PMID: 37884627; PMCID: PMC10667100; DOI: 10.1038/s41591-023-02608-w.
Abstract
Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access. This study aims to explore existing standards, frameworks and best practices for ensuring adequate data diversity in health datasets. Exploring the body of existing literature and expert views is an important step towards the development of consensus-based guidelines. The study comprises two parts: a systematic review of existing standards, frameworks and best practices for healthcare datasets; and a survey and thematic analysis of stakeholder views of bias, health equity and best practices for artificial intelligence as a medical device. We found that the need for dataset diversity was well described in the literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented practically. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).
Affiliation(s)
- Anmol Arora
- School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Joseph E Alderman
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Joanne Palmer
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Elinor Laws
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Melissa D McCradden
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Genetics and Genome Biology, Peter Gilgan Centre for Research and Learning, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, Toronto, Ontario, Canada
- Lauren Oakden-Rayner
- The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Marzyeh Ghassemi
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for Medical Engineering & Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Vector Institute, Toronto, Ontario, Canada
- Francis McKay
- The Ethox Centre and the Wellcome Centre for Ethics and Humanities, Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Darren Treanor
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- University of Leeds, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Bilal Mateen
- Institute for Health Informatics, University College London, London, UK
- Wellcome Trust, London, UK
- Jacqui Gath
- Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Adewole O Adebajo
- Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Rubeta Matin
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Elizabeth Sapey
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- PIONEER, HDR UK Hub in Acute Care, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Neil J Sebire
- National Institute for Health and Care Research, Great Ormond Street Hospital Biomedical Research Centre, London, UK
- Great Ormond Street Institute of Child Health, University Hospital London, London, UK
- Melanie Calvert
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Applied Research Collaboration West Midlands, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Birmingham-Oxford Blood and Transplant Research Unit in Precision Transplant and Cellular Therapeutics, University of Birmingham, Birmingham, UK
- DEMAND Hub, University of Birmingham, Birmingham, UK
- UK SPINE, University of Birmingham, Birmingham, UK
- Alastair Denniston
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Biomedical Research Centre, Moorfields Eye Hospital/University College London, London, UK
- Xiaoxuan Liu
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
3
Biondi G, Cagnoni S, Capobianco R, Franzoni V, Lisi FA, Milani A, Vallverdú J. Editorial: Ethical design of artificial intelligence-based systems for decision making. Front Artif Intell 2023; 6:1250209. PMID: 37554695; PMCID: PMC10406498; DOI: 10.3389/frai.2023.1250209.
Affiliation(s)
- Giulio Biondi
- EmoRe Research Group, Department of Mathematics and Computer Science, University of Perugia, Perugia, Italy
- Stefano Cagnoni
- Department of Engineering and Architecture, University of Parma, Parma, Italy
- Roberto Capobianco
- Artificial Intelligence and Robotics Research Group, Department of Computer, Control and Management Engineering, La Sapienza University of Rome, Rome, Italy
- Valentina Franzoni
- EmoRe Research Group, Department of Mathematics and Computer Science, University of Perugia, Perugia, Italy
- Department of Computer Science, Hong Kong Baptist University, Kowloon, Hong Kong SAR, China
- Francesca A. Lisi
- Department of Computer Science, University of Bari “Aldo Moro”, Bari, Italy
- Alfredo Milani
- EmoRe Research Group, Department of Mathematics and Computer Science, University of Perugia, Perugia, Italy
- Jordi Vallverdú
- ICREA Acadèmia, Department of Philosophy, Universitat Autònoma de Barcelona, Barcelona, Catalonia, Spain
4
Liu LT, Wang S, Britton T, Abebe R. Reimagining the machine learning life cycle to improve educational outcomes of students. Proc Natl Acad Sci U S A 2023; 120:e2204781120. PMID: 36827260; PMCID: PMC9992853; DOI: 10.1073/pnas.2204781120.
Abstract
Machine learning (ML) techniques are increasingly prevalent in education, from their use in predicting student dropout to assisting in university admissions and facilitating the rise of massive open online courses (MOOCs). Given the rapid growth of these novel uses, there is a pressing need to investigate how ML techniques support long-standing education principles and goals. In this work, we shed light on this complex landscape, drawing on qualitative insights from interviews with education experts. These interviews comprise in-depth evaluations of ML for education (ML4Ed) papers published in preeminent applied ML conferences over the past decade. Our central research goal is to critically examine how the stated or implied education and societal objectives of these papers are aligned with the ML problems they tackle. That is, to what extent do the technical problem formulation, objectives, approach, and interpretation of results align with the education problem at hand? We find that a cross-disciplinary gap exists and is particularly salient in two parts of the ML life cycle: the formulation of an ML problem from education goals and the translation of predictions to interventions. We use these insights to propose an extended ML life cycle, which may also apply to the use of ML in other domains. Our work joins a growing number of meta-analytical studies across education and ML research as well as critical analyses of the societal impact of ML. Specifically, it fills a gap between the prevailing technical understanding of machine learning and the perspective of education researchers working with students and in policy.
Affiliation(s)
- Lydia T. Liu
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94709
- Serena Wang
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94709
- Tolani Britton
- Berkeley School of Education, University of California, Berkeley, CA 94704
5
Fear of AI: an inquiry into the adoption of autonomous cars in spite of fear, and a theoretical framework for the study of artificial intelligence technology acceptance. AI & Society 2023. DOI: 10.1007/s00146-022-01598-6.
Abstract
Artificial intelligence (AI) is becoming part of everyday life. During this transition, people's intention to use AI technologies remains unclear, and emotions such as fear influence it. In this paper, we focus on autonomous cars: we first verify empirically the extent to which people fear AI and then examine the impact that fear has on their intention to use AI-driven vehicles. Our research is based on a systematic survey, and it reveals that while individuals are largely afraid of cars driven by AI, they are nonetheless willing to adopt this technology as soon as possible. To explain this tension, we extend our analysis beyond fear and show that people also believe that AI-driven cars will generate many individual, urban, and global benefits. We then employ our empirical findings as the foundation of a theoretical framework meant to illustrate the main factors that people ponder when they consider using AI technology. In addition to offering a comprehensive theoretical framework for the study of AI technology acceptance, this paper provides a nuanced understanding of the tension between the fear and adoption of AI, capturing what exactly people fear and intend to do.
6
Sajno E, Bartolotta S, Tuena C, Cipresso P, Pedroli E, Riva G. Machine learning in biosignals processing for mental health: A narrative review. Front Psychol 2023; 13:1066317. PMID: 36710855; PMCID: PMC9880193; DOI: 10.3389/fpsyg.2022.1066317.
Abstract
Machine Learning (ML) offers unique and powerful tools for mental health practitioners to improve evidence-based psychological interventions and diagnoses. Indeed, by detecting and analyzing different biosignals, it is possible to differentiate between typical and atypical functioning and to achieve a high level of personalization across all phases of mental health care. This narrative review presents a comprehensive overview of how ML algorithms can be used to infer psychological states from biosignals, followed by key examples of how they can be used in mental health clinical activity and research. A description of the biosignals typically used to infer cognitive and emotional correlates (e.g., EEG and ECG) is provided, alongside their application in Diagnostic Precision Medicine, Affective Computing, and Brain-Computer Interfaces. The contents then focus on challenges and research questions related to ML applied to mental health and biosignals analysis, pointing out the advantages and possible drawbacks connected to the widespread application of AI in the medical and mental health fields. The integration of mental health research and ML data science will facilitate the transition to personalized and effective medicine; to do so, it is important that researchers from psychological and medical disciplines, healthcare professionals, and data scientists all share a common background and vision of the current research.
Affiliation(s)
- Elena Sajno
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy
- Department of Computer Science, University of Pisa, Pisa, Italy
- Sabrina Bartolotta
- ExperienceLab, Università Cattolica del Sacro Cuore, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Cosimo Tuena
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Pietro Cipresso
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Turin, Turin, Italy
- Elisa Pedroli
- Department of Psychology, eCampus University, Novedrate, Italy
- Giuseppe Riva
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
7
Sinde R, Diwani S, Leo J, Kondo T, Elisa N, Matogoro J. AI for Anglophone Africa: Unlocking its adoption for responsible solutions in academia-private sector. Front Artif Intell 2023; 6:1133677. PMID: 37113649; PMCID: PMC10126471; DOI: 10.3389/frai.2023.1133677.
Abstract
In recent years, AI technologies have become indispensable in social and industrial development, yielding revolutionary results in improving labor efficiency, lowering labor costs, optimizing human resource structure, and creating new job demands. To reap the full benefits of responsible AI solutions in Africa, it is critical to investigate existing challenges and propose strategies, policies, and frameworks for overcoming and eliminating them. Accordingly, this study investigated the challenges of adopting responsible AI solutions in the academia-private sector in Anglophone Africa through literature reviews and expert interviews, and proposes solutions and a framework for the sustainable and successful adoption of responsible AI.
Affiliation(s)
- Ramadhani Sinde
- School of Computational and Communication Science and Engineering, Nelson Mandela African Institution of Science and Technology (NM-AIST), Arusha, Tanzania
- Salim Diwani
- Department of Computer Science and Engineering at the College of Informatics and Virtual Education, The University of Dodoma, Dodoma, Tanzania
- Judith Leo
- School of Computational and Communication Science and Engineering, Nelson Mandela African Institution of Science and Technology (NM-AIST), Arusha, Tanzania
- Tabu Kondo
- Department of Computer Science and Engineering at the College of Informatics and Virtual Education, The University of Dodoma, Dodoma, Tanzania
- Noe Elisa
- Department of Computer Science and Engineering at the College of Informatics and Virtual Education, The University of Dodoma, Dodoma, Tanzania
- Jabhera Matogoro
- Department of Computer Science and Engineering at the College of Informatics and Virtual Education, The University of Dodoma, Dodoma, Tanzania
8
Sigfrids A, Leikas J, Salo-Pöntinen H, Koskimies E. Human-centricity in AI governance: A systemic approach. Front Artif Intell 2023; 6:976887. PMID: 36872934; PMCID: PMC9979257; DOI: 10.3389/frai.2023.976887.
Abstract
Human-centricity is considered a central aspect in the development and governance of artificial intelligence (AI). Various strategies and guidelines highlight the concept as a key goal. However, we argue that current uses of Human-Centered AI (HCAI) in policy documents and AI strategies risk downplaying promises of creating desirable, emancipatory technology that promotes human wellbeing and the common good. First, HCAI, as it appears in policy discourses, is the result of aiming to adapt the concept of human-centered design (HCD) to the public governance context of AI but without proper reflection on how it should be reformed to suit the new task environment. Second, the concept is mainly used in reference to realizing human and fundamental rights, which are necessary, but not sufficient, for technological emancipation. Third, the concept is used ambiguously in policy and strategy discourses, making it unclear how it should be operationalized in governance practices. This article explores means and approaches for using the HCAI approach for technological emancipation in the context of public AI governance. We propose that the potential for emancipatory technology development rests on expanding the traditional user-centered view of technology design to involve community- and society-centered perspectives in public governance. Developing public AI governance in this way relies on enabling inclusive governance modalities that enhance the social sustainability of AI deployment. We discuss mutual trust, transparency, communication, and civic tech as key prerequisites for socially sustainable and human-centered public AI governance. Finally, the article introduces a systemic approach to ethically and socially sustainable, human-centered AI development and deployment.
Affiliation(s)
- Anton Sigfrids
- VTT Technical Research Centre of Finland Ltd, Espoo, Finland
- Jaana Leikas
- VTT Technical Research Centre of Finland Ltd, Espoo, Finland
- Henrikki Salo-Pöntinen
- Faculty of Information Technology, Cognitive Science, University of Jyväskylä, Jyväskylä, Finland
- Emmi Koskimies
- Faculty of Management and Business, Administrative Sciences, Tampere University, Tampere, Finland
9
Blanchard A, Taddeo M. The Ethics of Artificial Intelligence for Intelligence Analysis: a Review of the Key Challenges with Recommendations. Digital Society 2023; 2:12. PMID: 37034181; PMCID: PMC10073779; DOI: 10.1007/s44206-023-00036-4.
Abstract
Intelligence agencies have identified artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop, acquire, and employ AI capabilities for purposes of national security are growing. This article reviews the ethical challenges presented by the use of AI for augmented intelligence analysis. These challenges have been identified through a qualitative systematic review of the relevant literature. The article identifies five sets of ethical challenges relating to intrusion, explainability and accountability, bias, authoritarianism and political security, and collaboration and classification, and offers a series of recommendations targeted at intelligence agencies to address and mitigate these challenges.
Affiliation(s)
- Mariarosaria Taddeo
- The Alan Turing Institute, London, UK
- Oxford Internet Institute, University of Oxford, Oxford, UK
10
Kostick-Quenet KM, Gerke S. AI in the hands of imperfect users. NPJ Digit Med 2022; 5:197. PMID: 36577851; PMCID: PMC9795935; DOI: 10.1038/s41746-022-00737-z.
Abstract
As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has been paid to potential bias among AI/ML's human users or to factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision-making when using AI/ML.
Affiliation(s)
- Sara Gerke
- Penn State Dickinson Law, Carlisle, PA, USA
11
Hagos DH, Rawat DB. Recent Advances in Artificial Intelligence and Tactical Autonomy: Current Status, Challenges, and Perspectives. Sensors (Basel) 2022; 22:9916. PMID: 36560285; PMCID: PMC9782095; DOI: 10.3390/s22249916.
Abstract
This paper presents the findings of a detailed and comprehensive review of the technical literature aimed at identifying the current and future research challenges of tactical autonomy. It discusses in great detail the current state-of-the-art artificial intelligence (AI), machine learning (ML), and robotics technologies, and their potential for developing safe and robust autonomous systems in the context of future military and defense applications. Additionally, we discuss some of the critical technical and operational challenges that arise when attempting to build fully autonomous systems for advanced military and defense applications. Our paper surveys the state-of-the-art AI methods available for tactical autonomy. To the best of our knowledge, this is the first work that addresses the important current trends, strategies, critical challenges, tactical complexities, and future research directions of tactical autonomy. We believe this work will greatly interest researchers and scientists from academia and industry working in robotics and the autonomous systems community. We hope it encourages researchers across multiple disciplines of AI to explore the broader tactical autonomy domain, and that it serves as an essential step toward designing advanced AI and ML models with practical implications for real-world military and defense settings.
12
Roberts H, Zhang J, Bariach B, Cowls J, Gilburt B, Juneja P, Tsamados A, Ziosi M, Taddeo M, Floridi L. Artificial intelligence in support of the circular economy: ethical considerations and a path forward. AI & Society 2022. DOI: 10.1007/s00146-022-01596-8.
Abstract
The world's current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. However, to date, considerations of the ethical implications of using AI to achieve a transition to CE have been limited. This article addresses this gap. It outlines how AI is and can be used to transition towards CE, analyzes the ethical risks associated with using AI for this purpose, and offers recommendations to policymakers and industry on how to minimise these risks.
13
Stypinska J. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. AI & Society 2022; 38:665-677. PMID: 36212226; PMCID: PMC9527733; DOI: 10.1007/s00146-022-01553-5.
Abstract
In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism is presented to make a theoretical contribution to how the understanding of inclusion and exclusion within the field of AI can be expanded to include the category of age. AI ageism can be defined as practices and ideologies operating within the field of AI which exclude, discriminate, or neglect the interests, experiences, and needs of the older population, and which can be manifested in five interconnected forms: (1) age biases in algorithms and datasets (technical level), (2) age stereotypes, prejudices and ideologies of actors in AI (individual level), (3) invisibility of old age in discourses on AI (discourse level), (4) discriminatory effects of the use of AI technology on different age groups (group level), and (5) exclusion as users of AI technology, services and products (user level). Additionally, the paper provides empirical illustrations of the way ageism operates in these five forms.
Affiliation(s)
- Justyna Stypinska
- Freie Universität, Berlin, Germany
- European New School of Digital Studies, European University Viadrina, Frankfurt (Oder), Germany
14
Paraman P, Anamalah S. Ethical artificial intelligence framework for a good AI society: principles, opportunities and perils. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01458-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
15
AI for the public. How public interest theory shifts the discourse on AI. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01480-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public interest AI’. The framework consists of (1) public justification for the AI system, (2) an emphasis on equality, (3) a deliberation/co-design process, (4) technical safeguards, and (5) openness to validation. This framework is then applied to two case studies, namely SyRI, the Dutch welfare fraud detection project, and UNICEF’s Project Connect, which maps schools worldwide. Through the analysis of these cases, the authors conclude that public interest is a helpful and practical guide for the development and governance of AI for the people.
16
Applying AI for social good: Aligning academic journal ratings with the United Nations Sustainable Development Goals (SDGs). AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01459-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
17
A principle-based approach to AI: the case for European Union and Italy. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01453-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
As Artificial Intelligence (AI) becomes more and more pervasive in our everyday life, new questions arise about its ethical and social impacts. Such issues concern all stakeholders involved in or committed to the design, implementation, deployment, and use of the technology. The present document addresses these concerns by introducing and discussing a set of practical obligations and recommendations for the development of applications and systems based on AI techniques. With this work we hope to contribute to spreading awareness of the many social challenges posed by AI and to encouraging the establishment of good practices throughout the relevant social areas. As points of novelty, the paper elaborates an integrated view that combines both human rights and ethical concepts to reap the benefits of the two approaches. Moreover, it proposes innovative recommendations, such as those on redress and governance, which add further insight to the debate. Finally, it incorporates a specific focus on the Italian Constitution, thus offering an example of how core legislation of Member States might contribute to further specify and enrich the EU normative framework on AI.
18
Foffano F, Scantamburlo T, Cortés A. Investing in AI for social good: an analysis of European national strategies. AI & SOCIETY 2022; 38:479-500. [PMID: 35528248 PMCID: PMC9068863 DOI: 10.1007/s00146-022-01445-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 03/24/2022] [Indexed: 10/25/2022]
Abstract
Artificial Intelligence (AI) has become a driving force in modern research, industry and public administration, and the European Union (EU) is embracing this technology with a view to creating societal, as well as economic, value. This effort has been shared by EU Member States, which were all encouraged to develop their own national AI strategies outlining policies and investment levels. This study focuses on how EU Member States are approaching the promise to develop and use AI for the good of society through the lens of their national AI strategies. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans contribute to the good of people and society as a whole. Our contribution consists of three parts: (i) a conceptualization of AI for social good highlighting the role of AI policy, in particular the one put forward by the European Commission (EC); (ii) a qualitative analysis of 15 European national strategies, mapping investment plans and suggesting their relation to the social good; and (iii) a reflection on the current status of investments in socially good AI and possible steps to move forward. Our study suggests that while European national strategies incorporate budget allocations in the sphere of AI for social good (e.g. education), there is a broader variety of underestimated actions (e.g. a multidisciplinary approach in STEM curricula and dialogue among stakeholders) that can boost the European commitment to sustainable and responsible AI innovation.
Affiliation(s)
- Francesca Foffano
- University of York, York, UK
- European Centre for Living Technology, Venice, Italy
- Barcelona Supercomputing Center, Barcelona, Spain
- Teresa Scantamburlo
- University of York, York, UK
- European Centre for Living Technology, Venice, Italy
- Barcelona Supercomputing Center, Barcelona, Spain
- Atia Cortés
- University of York, York, UK
- European Centre for Living Technology, Venice, Italy
- Barcelona Supercomputing Center, Barcelona, Spain
19
Design of a Computable Approximate Reasoning Logic System for AI. MATHEMATICS 2022. [DOI: 10.3390/math10091447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Fuzzy logic reasoning based on “If... then...” rules cannot serve as AI’s approximate reasoning under ambiguity because fuzzy reasoning is antilogical. To solve this problem, a redundancy theory for discriminative weight filtering, comprising six theorems and one M(1,2,3) model, was proposed; the approximate reasoning process was demonstrated; and a system logic for AI handling ambiguity was proposed as an extension of the classical logic system. The system is a generalized dynamic logic system characterized by machine learning. It is the practical-application logic system of AI and can effectively deal with practical problems including conflict, noise, emergencies and various unknown uncertainties. Its distinguishing feature is combining approximate reasoning with computation for specific data conversion through machine learning; its core is data and calculation, and its precondition is sufficient high-quality training data. The innovation is the proposed discriminative weight filtering redundancy theory and the design of a computable approximate reasoning logic system that combines approximate reasoning and calculation through machine learning to convert specific data. It is a general logic system for AI to deal with uncertainty, and the study has theoretical and practical significance for AI and logical reasoning research.
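For background, the rule-based approximate reasoning the abstract takes as its starting point can be illustrated with a minimal Mamdani-style fuzzy "If... then..." sketch. The rule terms, membership functions and output values below are invented for illustration; this does not implement the paper's M(1,2,3) model or its discriminative weight filtering theory.

```python
def triangular(a, b, c):
    """Return a triangular membership function peaking at b (illustrative)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical rule base: IF temperature is <term> THEN fan speed is <value>.
cold = triangular(-10.0, 0.0, 15.0)
warm = triangular(10.0, 20.0, 30.0)
hot = triangular(25.0, 35.0, 50.0)
rules = [(cold, 0.1), (warm, 0.5), (hot, 0.9)]

def infer(temp):
    # Fire each rule to a degree, then defuzzify by weighted average.
    num = sum(mu(temp) * out for mu, out in rules)
    den = sum(mu(temp) for mu, _ in rules)
    return num / den if den else 0.0
```

At 20 degrees only the `warm` rule fires fully, so `infer(20.0)` returns the `warm` output 0.5; intermediate temperatures blend adjacent rules, which is the graded, non-classical behaviour the abstract contrasts with its proposed computable logic system.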
20
Acknowledging Sustainability in the Framework of Ethical Certification for AI. SUSTAINABILITY 2022. [DOI: 10.3390/su14074157] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
In the past few years, many stakeholders have begun to develop ethical and trustworthiness certification for AI applications. This study furnishes the reader with a discussion of the philosophical arguments that impel the need to include sustainability, in its different forms, among the audit areas of ethical AI certification. We demonstrate how sustainability might be included in two different types of ethical impact assessment: assessment certifying the fulfillment of minimum ethical requirements and what we describe as nuanced assessment. The paper focuses on the European, and especially the German, context, and the development of certification for AI.
21
Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants. MEDICINE, HEALTH CARE AND PHILOSOPHY 2022; 25:11-22. [PMID: 34822096 PMCID: PMC8613457 DOI: 10.1007/s11019-021-10062-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 11/16/2021] [Indexed: 11/25/2022]
Abstract
Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare; rather, they are existing, ubiquitous, and commercially available systems upskilled to support these novel care practices. Given this widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge from how these systems nudge users into making decisions and changing behaviours. This article discusses how these AI-driven systems pose particular ethical challenges with regard to nudging. To confront these issues, the value sensitive design (VSD) approach is adopted as a principled methodology that designers can use to design these systems to avoid harm and contribute to the social good. The AI for Social Good (AI4SG) factors are adopted as the norms constraining maleficence. In contrast, higher-order values specific to AI, such as those from the EU High-Level Expert Group on AI and the United Nations Sustainable Development Goals, are adopted as the values to be promoted as much as possible in design. The use case of Amazon Alexa's Healthcare Skills is used to illustrate this design approach. It provides an exemplar of how designers and engineers can begin to orient their design programs for these technologies towards the social good.
22
The Ethical Governance for the Vulnerability of Care Robots: Interactive-Distance-Oriented Flexible Design. SUSTAINABILITY 2022. [DOI: 10.3390/su14042303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The application of care robots is currently a widely accepted solution to the problem of population aging. However, for elderly groups who live in communal residences and share intelligent devices, care robots create dilemmas of intimacy and assistance in the relationship between human and non-human agents. This is an information-assisted machine setting, with resulting design-ethics issues brought about by the binary values of human and machine, body and mind. The notion of “vulnerability” in risk ethics demonstrates that the ethical problems of human agency stem from increased dependence and obstructed intimacy, which are essentially caused by a higher degree of ethical risk exposure and the restriction of agency. Based on value-sensitive design, care ethics and machine ethics, this paper proposes a flexible, interaction-distance-oriented design and reframes the ethical design of care robots with intentional distance, representational distance and interpretive distance as indicators. The main purpose is to advocate a new type of human-machine interaction relationship that emphasizes diversity and physical interaction.
23
Social network behavior and public opinion manipulation. JOURNAL OF INFORMATION SECURITY AND APPLICATIONS 2022. [DOI: 10.1016/j.jisa.2021.103060] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
24
Ericsson D, Stasinski R, Stenström E. Body, mind, and soul principles for designing management education: an ethnography from the future. CULTURE AND ORGANIZATION 2022. [DOI: 10.1080/14759551.2022.2028148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Daniel Ericsson
- School of Business and Economics, Linnaeus University, Växjö, Sweden
- Robert Stasinski
- Department of Culture and Aesthetics, Stockholm University, Stockholm, Sweden
- Emma Stenström
- Center for Arts, Business & Culture (ABC), Stockholm School of Economics, Stockholm, Sweden
25
Abstract
Human–machine interactions research should include diverse subjects and benefit all people.
Affiliation(s)
- Tahira Reid
- School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN, USA
- James Gibert
- School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN, USA
26
Sikstrom L, Maslej MM, Hui K, Findlay Z, Buchman DZ, Hill SL. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform 2022; 29:e100459. [PMID: 35012941 PMCID: PMC8753410 DOI: 10.1136/bmjhci-2021-100459] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 12/14/2021] [Indexed: 12/25/2022] Open
Abstract
OBJECTIVES Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic. Studies either measure (mathematically) competing definitions of fairness, and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature.
METHODS We conducted an environmental scan of English-language literature on fairness from 1960 to July 31, 2021. The electronic databases Medline, PubMed and Google Scholar were searched, supplemented by additional hand searches. Data from 213 selected publications were analysed using rapid framework analysis. Search and analysis were completed in two rounds: to explore previously identified issues (a priori), as well as those emerging from the analysis (de novo).
RESULTS Our synthesis identified 'Three Pillars for Fairness': transparency, impartiality and inclusion. We draw on these insights to propose a multidimensional conceptual framework to guide empirical research on the operationalisation of fairness in healthcare.
DISCUSSION We apply the conceptual framework generated by our synthesis to risk assessment in psychiatry as a case study. We argue that any claim to fairness must reflect critical assessment and ongoing social and political deliberation around these three pillars with a range of stakeholders, including patients.
CONCLUSION We conclude by outlining areas for further research that would bolster ongoing commitments to fairness and health equity in healthcare.
Affiliation(s)
- Laura Sikstrom
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Anthropology, University of Toronto, Toronto, Ontario, Canada
- Marta M Maslej
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Katrina Hui
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Zoe Findlay
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Daniel Z Buchman
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Sean L Hill
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
27
Brambilla Pisoni G, Taddeo M. Apropos Data Sharing: Abandon the Distrust and Embrace the Opportunity. DNA Cell Biol 2022; 41:11-15. [PMID: 34941450 PMCID: PMC8787700 DOI: 10.1089/dna.2021.0501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 09/21/2021] [Accepted: 10/05/2021] [Indexed: 11/16/2022] Open
Abstract
In this commentary, we focus on the ethical challenges of data sharing and its potential to support biomedical research. Taking human genomics (HG) and European governance for sharing genomic data as a case study, we consider how to balance competing rights and interests: balancing the protection of data subjects' privacy and data security with scientific progress and the need to promote public health. This is of particular relevance in light of the current pandemic, which underscores the urgent need for international collaborations to promote health for all. We draw from existing ethical codes for data sharing in HG to offer recommendations on how to protect rights while fostering scientific research and open science.
Affiliation(s)
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Alan Turing Institute, London, United Kingdom
28
Monteith S, Glenn T, Geddes J, Whybrow PC, Achtyes E, Bauer M. Expectations for Artificial Intelligence (AI) in Psychiatry. Curr Psychiatry Rep 2022; 24:709-721. [PMID: 36214931 PMCID: PMC9549456 DOI: 10.1007/s11920-022-01378-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/15/2022] [Indexed: 01/29/2023]
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is often presented as a transformative technology for clinical medicine even though the current technology maturity of AI is low. The purpose of this narrative review is to describe the complex reasons for the low technology maturity and set realistic expectations for the safe, routine use of AI in clinical medicine.
RECENT FINDINGS For AI to be productive in clinical medicine, many diverse factors that contribute to the low maturity level need to be addressed. These include technical problems such as data quality, dataset shift, black-box opacity, validation and regulatory challenges, and human factors such as a lack of education in AI, workflow changes, automation bias, and deskilling. There will also be new and unanticipated safety risks with the introduction of AI. The solutions to these issues are complex and will take time to discover, develop, validate, and implement. However, addressing the many problems in a methodical manner will expedite the safe and beneficial use of AI to augment medical decision making in psychiatry.
Affiliation(s)
- Scott Monteith
- Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, MI 49684, USA
- John Geddes
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Peter C. Whybrow
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Eric Achtyes
- Michigan State University College of Human Medicine, Grand Rapids, MI, USA
- Network180, Grand Rapids, MI, USA
- Michael Bauer
- Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus, Medical Faculty, Technische Universität Dresden, Dresden, Germany
29
Ethical Redress of Racial Inequities in AI: Lessons from Decoupling Machine Learning from Optimization in Medical Appointment Scheduling. PHILOSOPHY & TECHNOLOGY 2022; 35:96. [PMID: 36284736 PMCID: PMC9584259 DOI: 10.1007/s13347-022-00590-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 10/07/2022] [Indexed: 10/30/2022]
Abstract
An Artificial Intelligence algorithm trained on data that reflect racial biases may yield racially biased outputs, even if the algorithm on its own is unbiased. For example, algorithms used to schedule medical appointments in the USA predict that Black patients are at a higher risk of no-show than non-Black patients; though technically accurate given existing data, that prediction results in Black patients being overwhelmingly scheduled in appointment slots that cause longer wait times than non-Black patients experience. This perpetuates racial inequity, in this case reduced access to medical care. It gives rise to one type of Accuracy-Fairness trade-off: preserve the efficiency offered by using AI to schedule appointments, or discard that efficiency in order to avoid perpetuating ethno-racial disparities. Similar trade-offs arise in a range of AI applications, including others in medicine as well as in education, judicial systems, and public security. This article presents a framework for addressing such trade-offs in which the Machine Learning and Optimization components of the algorithm are decoupled. Applied to medical appointment scheduling, our framework articulates four approaches that intervene in different ways on different components of the algorithm. Each yields specific results; in one case, accuracy comparable to the current state of the art is preserved while the disparity is eliminated.
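The decoupling idea described above can be sketched in a few lines: keep the no-show prediction (the ML component) untouched, and intervene only in the scheduling step (the optimization component) so that high-risk patients are no longer systematically packed into the high-wait slots. The patients, risk scores, and interleaving heuristic below are illustrative assumptions, not the authors' actual four approaches.

```python
def predict_no_show(patient):
    # ML component (stubbed): returns a no-show probability in [0, 1].
    return patient["risk"]

def schedule(patients, slots):
    """Optimization component, modified for fairness.

    Instead of sorting all high-risk patients into the late,
    overbooked slots (which reproduces group disparities when risk
    correlates with group membership), interleave low- and high-risk
    patients so that expected wait is spread across risk levels.
    """
    ranked = sorted(patients, key=predict_no_show)  # low risk first
    half = len(ranked) // 2
    low, high = ranked[:half], ranked[half:]
    order = []
    for i in range(max(len(low), len(high))):
        if i < len(low):
            order.append(low[i])
        if i < len(high):
            order.append(high[i])
    return dict(zip(slots, order))

# Hypothetical patients with stubbed no-show risk scores.
patients = [
    {"name": "A", "risk": 0.9},
    {"name": "B", "risk": 0.1},
    {"name": "C", "risk": 0.7},
    {"name": "D", "risk": 0.2},
]
assignment = schedule(patients, slots=[1, 2, 3, 4])
```

Because the prediction function is untouched, accuracy is preserved; the fairness intervention lives entirely in `schedule`, which is the design point the article argues for.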
30
Hermann E. Leveraging Artificial Intelligence in Marketing for Social Good-An Ethical Perspective. JOURNAL OF BUSINESS ETHICS : JBE 2022; 179:43-61. [PMID: 34054170 PMCID: PMC8150633 DOI: 10.1007/s10551-021-04843-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 05/12/2021] [Indexed: 05/08/2023]
Abstract
Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business, and specifically in marketing. The drawback of the substantial opportunities AI systems and applications (will) provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors suggest how AI in marketing can be leveraged to promote societal and environmental well-being.
Affiliation(s)
- Erik Hermann
- Wireless Systems, IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany
31
Umbrello S, van de Poel I. Mapping value sensitive design onto AI for social good principles. AI AND ETHICS 2021; 1:283-296. [PMID: 34790942 PMCID: PMC7848675 DOI: 10.1007/s43681-021-00038-3] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Accepted: 11/30/2020] [Indexed: 11/29/2022]
Abstract
Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good, and (3) extending the VSD process to encompass the whole life cycle of an AI technology to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.
Affiliation(s)
- Steven Umbrello
- Institute for Ethics and Emerging Technologies, University of Turin, Via Sant'Ottavio, 20, 10124 Turin, Italy
- Ibo van de Poel
- Delft University of Technology, Faculty of Technology, Policy and Management, Jaffalaan 5, 2628 BX Delft, The Netherlands
32
Sapienza S, Vedder A. Principle-based recommendations for big data and machine learning in food safety: the P-SAFETY model. AI & SOCIETY 2021. [DOI: 10.1007/s00146-021-01282-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Big Data and Machine Learning techniques are reshaping the way in which food safety risk assessment is conducted. The ongoing ‘datafication’ of food safety risk assessment activities and the progressive deployment of probabilistic models in their practices require a discussion on the advantages and disadvantages of these advances. In particular, the low level of trust in the EU food safety risk assessment framework, highlighted in 2019 by an EU-funded survey, could be exacerbated by novel methods of analysis. The variety of processed data raises unique questions regarding the interplay of multiple regulatory systems alongside food safety legislation. Provisions aiming to preserve the confidentiality of data and protect personal information are juxtaposed with norms prescribing the public disclosure of scientific information. This research is intended to provide guidance for the data governance and data ownership issues that unfold from the ongoing transformation of the technical and legal domains of food safety risk assessment. Following the reconstruction of technological advances in data collection and analysis and the description of recent amendments to food safety legislation, emerging concerns are discussed in light of the individual, collective and social implications of the deployment of cutting-edge Big Data collection and analysis techniques. Then, a set of principle-based recommendations is proposed by adapting high-level principles enshrined in institutional documents about Artificial Intelligence to the realm of food safety risk assessment. The proposed set of recommendations adopts Safety, Accountability, Fairness, Explainability and Transparency as core principles (SAFETY), whereas privacy and data protection are used as a meta-principle.
33
Sharma M, Luthra S, Joshi S, Kumar A. Implementing challenges of artificial intelligence: Evidence from public manufacturing sector of an emerging economy. GOVERNMENT INFORMATION QUARTERLY 2021. [DOI: 10.1016/j.giq.2021.101624] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
34
Mathiyazhagan S. Field Practice, Emerging Technologies, and Human Rights: the Emergence of Tech Social Workers. JOURNAL OF HUMAN RIGHTS AND SOCIAL WORK 2021; 7:441-448. [PMID: 34518805 PMCID: PMC8426334 DOI: 10.1007/s41134-021-00190-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 07/21/2021] [Indexed: 06/13/2023]
Abstract
Structural inequalities, historical oppression, discrimination, social exclusion, power, and privilege are some of the most pressing human rights issues that social workers deal with in everyday practice. In recent years, these issues have become prevalent not only in offline communities but also in online communities. The digital divide and online polarization perpetuate power and privilege within and outside of social work practice. Social work practices are moving beyond boundaries, expanding, and adopting emerging technologies in all aspects of social work education, research, and practice. This paper draws on the author's last decade of transnational social work practice experience and fieldwork supervision. There is an emerging need for tech social work practices in all fields of social work. This paper discusses the challenges and opportunities for tech social work in the field and explores a possible model for tech social work practice to support safe and inclusive communities on- and offline and to promote human rights.
Affiliation(s)
- Siva Mathiyazhagan
- Trust for Youth and Child Leadership (TYCL) International, New York, USA
35
Nordström M. AI under great uncertainty: implications and decision strategies for public policy. AI & SOCIETY 2021; 37:1703-1714. [PMID: 34511737 PMCID: PMC8421460 DOI: 10.1007/s00146-021-01263-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 08/18/2021] [Indexed: 11/25/2022]
Abstract
Decisions where there is not enough information for a well-informed decision due to unidentified consequences, options, or undetermined demarcation of the decision problem are called decisions under great uncertainty. This paper argues that public policy decisions on how and if to implement decision-making processes based on machine learning and AI for public use are such decisions. Decisions on public policy on AI are uncertain due to three features specific to the current landscape of AI, namely (i) the vagueness of the definition of AI, (ii) uncertain outcomes of AI implementations and (iii) pacing problems. Given that many potential applications of AI in the public sector concern functions central to the public sphere, decisions on the implementation of such applications are particularly sensitive. Therefore, it is suggested that public policy-makers and decision-makers in the public sector can adopt strategies from the argumentative approach in decision theory to mitigate the established great uncertainty. In particular, the notions of framing and temporal strategies are considered.
Affiliation(s)
- Maria Nordström
- Division of Philosophy, KTH Royal Institute of Technology, Stockholm, Sweden
36
Hermann E, Hermann G, Tremblay JC. Ethical Artificial Intelligence in Chemical Research and Development: A Dual Advantage for Sustainability. SCIENCE AND ENGINEERING ETHICS 2021; 27:45. [PMID: 34231042 PMCID: PMC8260511 DOI: 10.1007/s11948-021-00325-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Accepted: 06/25/2021] [Indexed: 06/13/2023]
Abstract
Artificial intelligence can be a game changer to address the global challenge of humanity-threatening climate change by fostering sustainable development. Since chemical research and development lay the foundation for innovative products and solutions, this study presents a novel chemical research and development process backed with artificial intelligence and guiding ethical principles to account for both process- and outcome-related sustainability. Particularly in ethically salient contexts, ethical principles have to accompany research and development powered by artificial intelligence to promote social and environmental good and sustainability (beneficence) while preventing any harm (non-maleficence) for all stakeholders (i.e., companies, individuals, society at large) affected.
Affiliation(s)
- Erik Hermann
- IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany.
37
Chiril P, Pamungkas EW, Benamara F, Moriceau V, Patti V. Emotionally Informed Hate Speech Detection: A Multi-target Perspective. Cognit Comput 2021; 14:322-352. [PMID: 34221180 PMCID: PMC8236572 DOI: 10.1007/s12559-021-09862-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 01/12/2021] [Indexed: 11/11/2022]
Abstract
Hate Speech and harassment are widespread in online communication, due to users' freedom and anonymity and the lack of regulation provided by social media platforms. Hate speech is topically focused (misogyny, sexism, racism, xenophobia, homophobia, etc.), and each specific manifestation of hate speech targets different vulnerable groups based on characteristics such as gender (misogyny, sexism), ethnicity, race, religion (xenophobia, racism, Islamophobia), sexual orientation (homophobia), and so on. Most automatic hate speech detection approaches cast the problem into a binary classification task without addressing either the topical focus or the target-oriented nature of hate speech. In this paper, we propose to tackle, for the first time, hate speech detection from a multi-target perspective. We leverage manually annotated datasets, to investigate the problem of transferring knowledge from different datasets with different topical focuses and targets. Our contribution is threefold: (1) we explore the ability of hate speech detection models to capture common properties from topic-generic datasets and transfer this knowledge to recognize specific manifestations of hate speech; (2) we experiment with the development of models to detect both topics (racism, xenophobia, sexism, misogyny) and hate speech targets, going beyond standard binary classification, to investigate how to detect hate speech at a finer level of granularity and how to transfer knowledge across different topics and targets; and (3) we study the impact of affective knowledge encoded in sentic computing resources (SenticNet, EmoSenticNet) and in semantically structured hate lexicons (HurtLex) in determining specific manifestations of hate speech. We experimented with different neural models including multitask approaches. 
Our study shows that: (1) training a model on a combination of several (training sets from several) topic-specific datasets is more effective than training a model on a topic-generic dataset; (2) the multi-task approach outperforms a single-task model when detecting both the hatefulness of a tweet and its topical focus in the context of a multi-label classification approach; and (3) the models incorporating EmoSenticNet emotions, the first-level emotions of SenticNet, a blend of SenticNet and EmoSenticNet emotions, or affective features based on HurtLex obtained the best results. Our results demonstrate that multi-target hate speech detection from existing datasets is feasible, which is a first step towards hate speech detection for a specific topic/target when dedicated annotated data are missing. Moreover, we prove that domain-independent affective knowledge, injected into our models, helps finer-grained hate speech detection.
Affiliation(s)
- Patricia Chiril
- IRIT, Université de Toulouse, Université Toulouse III - UPS, Toulouse, France
- Farah Benamara
- IRIT, Université de Toulouse, Université Toulouse III - UPS, Toulouse, France
- Véronique Moriceau
- IRIT, Université de Toulouse, Université Toulouse III - UPS, Toulouse, France
- Viviana Patti
- Dipartimento di Informatica, University of Turin, Turin, Italy
38
Umbrello S, Capasso M, Balistreri M, Pirni A, Merenda F. Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots. Minds Mach (Dordr) 2021; 31:395-419. [PMID: 34092922 PMCID: PMC8165341 DOI: 10.1007/s11023-021-09561-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 05/24/2021] [Indexed: 11/29/2022]
Abstract
Healthcare is becoming increasingly automated with the development and deployment of care robots. There are many benefits to care robots but they also pose many challenging ethical issues. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. Using the value sensitive design (VSD) approach to technology design, this paper extends its application to care robots by integrating the values of care, values that are specific to AI, and higher-scale values such as the United Nations Sustainable Development Goals (SDGs). The ethical issues specific to care robots for the elderly are discussed at length alongside examples of specific design requirements that work to ameliorate these ethical concerns.
Affiliation(s)
- Steven Umbrello
- Institute for Ethics and Emerging Technologies, University of Turin, Via Sant'Ottavio, 20, 10124 Turin, TO Italy
- Marianna Capasso
- Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy
- Alberto Pirni
- Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy
- Federica Merenda
- Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy
39
Abstract
In this paper I critically evaluate the value neutrality thesis regarding technology, and find it wanting. I then introduce the various ways in which artifacts can come to influence moral value, and our evaluation of moral situations and actions. Here, following van de Poel and Kroes, I introduce the idea of value sensitive design. Specifically, I show how by virtue of their designed properties, artifacts may come to embody values. Such accounts, however, have several shortcomings. In agreement with Michael Klenk, I raise epistemic and metaphysical issues with respect to designed properties embodying value. The concept of an affordance, borrowed from ecological psychology, provides a more philosophically fruitful grounding to the potential way(s) in which artifacts might embody values. This is due to the way in which it incorporates key insights from perception more generally, and how we go about determining possibilities for action in our environment specifically. The affordance account as it is presented by Klenk, however, is insufficient. I therefore argue that we understand affordances based on whether they are meaningful, and, secondly, that we grade them based on their force.
40
Arnold MH. Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine. JOURNAL OF BIOETHICAL INQUIRY 2021; 18:121-139. [PMID: 33415596 PMCID: PMC7790358 DOI: 10.1007/s11673-020-10080-1] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2020] [Accepted: 12/23/2020] [Indexed: 05/05/2023]
Abstract
The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients, with resultant ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi- or fully-autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI (and those who control it) has created challenges to conventional assessments of patient and physician autonomy. The unclear legal relationship between AI and its users cannot be settled presently, and progress in AI and its implementation in patient care will necessitate an iterative discourse to preserve humanitarian concerns in future models of care. This paper proposes that physicians should neither uncritically accept nor unreasonably resist developments in AI but must actively engage and contribute to the discourse, since AI will affect their roles and the nature of their work. One's moral imaginative capacity must be engaged in the questions of beneficence, autonomy, and justice raised by AI and in whether its integration in healthcare has the potential to augment or interfere with the ends of medical practice.
Affiliation(s)
- Mark Henderson Arnold
- School of Rural Health (Dubbo/Orange), Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Sydney, Australia.
- Sydney Health Ethics, School of Public Health, University of Sydney, Sydney, Australia.
41
Tsamados A, Aggarwal N, Cowls J, Morley J, Roberts H, Taddeo M, Floridi L. The ethics of algorithms: key problems and solutions. AI & SOCIETY 2021. [DOI: 10.1007/s00146-021-01154-8] [Citation(s) in RCA: 55] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
42
Cowls J, Tsamados A, Taddeo M, Floridi L. A definition, benchmark and database of AI for social good initiatives. NAT MACH INTELL 2021. [DOI: 10.1038/s42256-021-00296-0] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
43
Vanderhaegen F. Weak Signal-Oriented Investigation of Ethical Dissonance Applied to Unsuccessful Mobility Experiences Linked to Human-Machine Interactions. SCIENCE AND ENGINEERING ETHICS 2021; 27:2. [PMID: 33492482 DOI: 10.1007/s11948-021-00284-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/19/2020] [Accepted: 12/22/2020] [Indexed: 05/14/2023]
Abstract
Ethical dissonance arises from conflicts between beliefs or behaviors and affects ethical factors such as normality or conformity. This paper proposes a weak-signal-oriented framework to investigate ethical dissonance in experiences linked to human-machine interactions. It is based on a systems engineering principle called human-systems inclusion, which treats any experience feedback of weak signals as beneficial for learning. The framework builds weak-signal-based scenarios from testimonies of individual experiences, and these scenarios are then assessed by other people. To this end, the framework provides several databases as sources of weak signals, tools for formalizing experience feedback of weak signals, models of references of conformity, ethical factors, and a list of examples of ethical dissonance, together with sequential steps for establishing the credibility of these examples against the results of the experimental protocols. The framework was used to investigate ethical dissonance by analyzing experiences pertaining to achieving inclusive mobility. The first example focuses on ethical dissonance in terms of hindrance of autonomous mobility caused by a misunderstanding of how the system functions, expressed in negative and positive emotions. Two other examples present possible ethical dissonance arising when the use of safety systems, such as car driver-assistance systems, may create danger. Investigating ethical dissonance can thus support system-inclusive design and evaluation processes by taking into account scenarios from weak-signal-based experiences and establishing their credibility.
Affiliation(s)
- F Vanderhaegen
- Univ. Polytechnique Hauts-de-France, LAMIH, CNRS, UMR 8201, Le Mont Houy, 59313, Valenciennes Cedex 9, France.
- INSA Hauts-de-France, 59313, Valenciennes, France.
44
The Sustainability of Artificial Intelligence: An Urbanistic Viewpoint from the Lens of Smart and Sustainable Cities. SUSTAINABILITY 2020. [DOI: 10.3390/su12208548] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
The popularity and application of artificial intelligence (AI) are increasing rapidly all around the world—where, in simple terms, AI is a technology which mimics the behaviors commonly associated with human intelligence. Today, various AI applications are being used in areas ranging from marketing to banking and finance, from agriculture to healthcare and security, from space exploration to robotics and transport, and from chatbots to artificial creativity and manufacturing. More recently, AI applications have also started to become an integral part of many urban services. Urban artificial intelligences manage the transport systems of cities, run restaurants and shops where everyday urbanity is expressed, repair urban infrastructure, and govern multiple urban domains such as traffic, air quality monitoring, garbage collection, and energy. In the age of uncertainty and complexity that is upon us, the increasing adoption of AI is expected to continue, and so will its impact on the sustainability of our cities. This viewpoint explores and questions the sustainability of AI from the lens of smart and sustainable cities, and generates insights into emerging urban artificial intelligences and the potential symbiosis between AI and a smart and sustainable urbanism. In terms of methodology, this viewpoint deploys a thorough review of the current status of AI and smart and sustainable cities literature, research, developments, trends, and applications. In so doing, it contributes to existing academic debates in the fields of smart and sustainable cities and AI. In addition, by shedding light on the uptake of AI in cities, the viewpoint seeks to help urban policymakers, planners, and citizens make informed decisions about a sustainable adoption of AI.
45
Abstract
The NIH-funded Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative has led to significant advances in what we know about the functions and capacities of the brain. This multifaceted and expansive effort supports a range of experimentation from cells to circuits, and its outputs promise to ease suffering from various neurological injuries, diseases, and neuropsychiatric conditions. At the midway point of the 10-year BRAIN Initiative, we pause to consider how these studies, and neuroscience research more broadly, may bear on human characteristics and moral concepts such as identity, agency, and others. This midway point also offers us an opportunity to evaluate the sociology and impacts of BRAIN Initiative-funded investigations to ensure that ethical standards of fairness and justice pervade the scientific process itself. Neuroethics inquiry provides a mechanism to invite relevant, novel expertise from the wide array of disciplines that intersect with biomedicine in neuroscience research. As the BRAIN Initiative and the broader field of neuroscience proceed, neuroethics serves as a central component of neuroscience inquiry to i) foster necessary and beneficial collaborations for responsible discovery; ii) ensure a rigorous, reproducible, and representative neuroscience research process; and iii) explore the unique nature of study of the human brain through accurate and representative models of its function and dysfunction.
Affiliation(s)
- Khara M Ramos
- National Institute of Neurological Disorders and Stroke, NIH