1
Committing to the wrong artificial delegate in a collective-risk dilemma is better than directly committing mistakes. Sci Rep 2024; 14:10460. [PMID: 38714713; PMCID: PMC11076577; DOI: 10.1038/s41598-024-61153-9]
Abstract
While autonomous artificial agents are assumed to perfectly execute the strategies they are programmed with, humans who design them may make mistakes. These mistakes may lead to a misalignment between the humans' intended goals and their agents' observed behavior, a problem of value alignment. Such an alignment problem may have particularly strong consequences when these autonomous systems are used in social contexts that involve some form of collective risk. By means of an evolutionary game theoretical model, we investigate whether errors in the configuration of artificial agents change the outcome of a collective-risk dilemma, in comparison to a scenario with no delegation. Delegation is here distinguished from no-delegation simply by the moment at which a mistake occurs: either when programming/choosing the agent (in case of delegation) or when executing the actions at each round of the game (in case of no-delegation). We find that, while errors decrease the success rate, it is better to delegate and commit to a somewhat flawed strategy, perfectly executed by an autonomous agent, than to commit execution errors directly. Our model also shows that in the long term, delegation strategies should be favored over no-delegation, if given the choice.
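To make the distinction concrete, the following toy Monte Carlo sketch contrasts the two error placements: one misconfiguration per agent up front versus an independent execution error every round. The group size, threshold, and error rate are illustrative assumptions, and the sketch omits the paper's evolutionary dynamics; it only shows where a mistake enters each scenario.

```python
import random

# Toy comparison of delegation vs no-delegation error placement.
# All parameter values below are illustrative assumptions, not the paper's.
N_PLAYERS = 6        # group size
N_ROUNDS = 10        # rounds in the collective-risk game
THRESHOLD = 54       # total contributions needed to avoid the collective risk
EPSILON = 0.1        # probability of a mistake
N_TRIALS = 10_000

def play(delegate: bool) -> bool:
    """Return True if the group reaches the threshold.

    Every player *intends* to contribute 1 unit per round.
    - delegation: each agent may be misconfigured once, up front; a
      misconfigured agent never contributes, a correct one always does.
    - no delegation: each player may err independently in every round.
    """
    if delegate:
        correct = [random.random() > EPSILON for _ in range(N_PLAYERS)]
        total = sum(N_ROUNDS for ok in correct if ok)
    else:
        total = sum(
            1
            for _ in range(N_ROUNDS)
            for _ in range(N_PLAYERS)
            if random.random() > EPSILON
        )
    return total >= THRESHOLD

for mode, label in [(True, "delegation"), (False, "no delegation")]:
    rate = sum(play(mode) for _ in range(N_TRIALS)) / N_TRIALS
    print(f"{label}: success rate {rate:.3f}")
```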
2
Motivation, inclusivity, and realism should drive data science education. F1000Res 2024; 12:1240. [PMID: 38764793; PMCID: PMC11101914; DOI: 10.12688/f1000research.134655.2]
Abstract
Data science education provides tremendous opportunities but remains inaccessible to many communities. Increasing the accessibility of data science to these communities not only benefits the individuals entering data science, but also increases the field's innovation and potential impact as a whole. Education is the most scalable solution to meet these needs, but many data science educators lack formal training in education. Our group has led education efforts for a variety of audiences: from professional scientists to high school students to lay audiences. These experiences have helped form our teaching philosophy, which we have summarized into three main ideals: 1) motivation, 2) inclusivity, and 3) realism. We also aim to iteratively update our teaching approaches and curriculum as we find ways to better reach these ideals. In this manuscript we discuss these ideals as well as practical ideas for how to implement these philosophies in the classroom.
3
Addressing bias in artificial intelligence for public health surveillance. J Med Ethics 2024; 50:190-194. [PMID: 37130756; DOI: 10.1136/jme-2022-108875]
Abstract
Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights on disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is described as the difference between the predictive values and true values within the modelling of an algorithm. Bias within algorithms may lead to inaccurate healthcare outcomes and exacerbate health disparities when results derived from these biased algorithms are applied to health interventions. Researchers who implement these algorithms must consider when and how bias may arise. This paper explores algorithmic biases as a result of data collection, labelling and modelling of NLP algorithms. Researchers have a role in ensuring that efforts towards combating bias are enforced, especially when drawing health conclusions derived from social media posts that are linguistically diverse. Through the implementation of open collaboration, auditing processes and the development of guidelines, researchers may be able to reduce bias and improve the NLP algorithms used in health surveillance.
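As a concrete illustration of the auditing the paper calls for, the hedged sketch below compares a model's error rates across linguistic subgroups of posts; the records and subgroup names are fabricated placeholders, not data from the paper.

```python
from collections import defaultdict

# Toy subgroup audit: compare a classifier's error rate across
# (hypothetical) linguistic subgroups of social media posts.
records = [
    # (subgroup, true_label, predicted_label) -- 1 = symptom mentioned
    ("dialect_A", 1, 1), ("dialect_A", 1, 0), ("dialect_A", 0, 0),
    ("dialect_B", 1, 0), ("dialect_B", 1, 0), ("dialect_B", 0, 0),
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for group, y_true, y_pred in records:
    errors[group][0] += int(y_true != y_pred)
    errors[group][1] += 1

for group, (n_err, n_total) in errors.items():
    print(f"{group}: error rate {n_err / n_total:.2f}")
# A large gap between subgroup error rates flags the kind of bias the
# authors warn about in linguistically diverse surveillance data.
```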
4
Conversational agents enhance women's contribution in online debates. Sci Rep 2023; 13:14534. [PMID: 37666917; PMCID: PMC10477209; DOI: 10.1038/s41598-023-41703-3]
Abstract
The advent of Artificial Intelligence (AI) is fostering the development of innovative methods of communication and collaboration. Integrating AI into Information and Communication Technologies (ICTs) is now ushering in an era of social progress that has the potential to empower marginalized groups. This transformation paves the way to a digital inclusion that could qualitatively empower the online presence of women, particularly in conservative and male-dominated regions. To explore this possibility, we investigated the effect of integrating conversational agents into online debates encompassing 240 Afghans discussing the fall of Kabul in August 2021. We found that the agent leads to quantitative differences in how both genders contribute to the debate by raising issues, presenting ideas, and articulating arguments. We also found increased ideation and reduced inhibition for both genders, particularly females, when interacting exclusively with other females or the agent. The enabling character of the conversational agent reveals an apparatus that could empower women and increase their agency on online platforms.
5
Artificial intelligence research strategy of the United States: critical assessment and policy recommendations. Front Big Data 2023; 6:1206139. [PMID: 37609602; PMCID: PMC10440374; DOI: 10.3389/fdata.2023.1206139]
Abstract
The foundations of Artificial Intelligence (AI), a field whose applications are of great use and concern for society, can be traced back to the early years of the second half of the 20th century. Since then, the field has seen increased research output and funding cycles followed by setbacks. The new millennium has seen unprecedented interest in AI progress and expectations with significant financial investments from the public and private sectors. However, the continual acceleration of AI capabilities and real-world applications is not guaranteed. In particular, the accountability of AI systems, in the context of the interplay between AI and broader society, is essential for their adoption via the trust placed in them. Continual progress in AI research and development (R&D) can help tackle humanity's most significant challenges to improve social good. The authors of this paper suggest that the careful design of forward-looking research policies serves a crucial function in avoiding potential future setbacks in AI research, development, and use. The United States (US) has kept its leading role in R&D, mainly shaping the global trends in the field. Accordingly, this paper presents a critical assessment of the US National AI R&D Strategic Plan and prescribes six recommendations to improve future research strategies in the US and around the globe.
6
Mobile Device-Based Video Screening for Infant Head Lag: An Exploratory Study. Children (Basel) 2023; 10:1239. [PMID: 37508736; PMCID: PMC10378382; DOI: 10.3390/children10071239]
Abstract
INTRODUCTION Video-based automatic motion analysis has been employed to identify infant motor development delays. To overcome the limitations of lab-recorded images and training datasets, this study aimed to develop an artificial intelligence (AI) model using videos taken by mobile phone to assess infants' motor skills. METHODS A total of 270 videos of 41 high-risk infants were taken by parents using a mobile device. We defined the motor skill assessments based on the Pull to Sit (PTS) levels from the Hammersmith Motor Evaluation. The videos included 84 level 0, 106 level 1, and 80 level 3 recordings. We used whole-body pose estimation and three-dimensional transformation with a fuzzy-based approach to develop an AI model. The model was trained with two types of vectors: the whole-body skeleton and key points with domain knowledge. RESULTS The average accuracies of the whole-body skeleton and key point models for level 0 were 77.667% and 88.062%, respectively. The areas under the ROC curve (AUC) of the whole-body skeleton and key point models for level 3 were 96.049% and 94.333%, respectively. CONCLUSIONS An AI model with minimal environmental restrictions can provide family-centered developmental delay screening and enable the remote monitoring of infants requiring intervention.
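Per-level metrics like those reported here can be computed with standard tooling; the sketch below shows a one-vs-rest AUC and accuracy computation for level 3 using scikit-learn, with fabricated labels and scores in place of the study's model outputs.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# One-vs-rest evaluation sketch: level 3 against all other PTS levels.
# The labels and scores below are fabricated examples, not study data.
y_true = np.array([3, 0, 1, 3, 0, 3, 1, 0])            # assessed PTS levels
scores_level3 = np.array([0.9, 0.2, 0.3, 0.8, 0.1, 0.7, 0.4, 0.2])

is_level3 = (y_true == 3).astype(int)
auc = roc_auc_score(is_level3, scores_level3)           # threshold-free
acc = accuracy_score(is_level3, scores_level3 > 0.5)    # at a 0.5 cutoff
print(f"level-3 AUC: {auc:.3f}, accuracy: {acc:.3f}")
```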
7
AutoPrognosis 2.0: Democratizing diagnostic and prognostic modeling in healthcare with automated machine learning. PLOS Digit Health 2023; 2:e0000276. [PMID: 37347752; DOI: 10.1371/journal.pdig.0000276]
Abstract
Diagnostic and prognostic models are increasingly important in medicine and inform many clinical decisions. Recently, machine learning approaches have shown improvement over conventional modeling techniques by better capturing complex interactions between patient covariates in a data-driven manner. However, the use of machine learning introduces technical and practical challenges that have thus far restricted widespread adoption of such techniques in clinical settings. To address these challenges and empower healthcare professionals, we present an open-source machine learning framework, AutoPrognosis 2.0, to facilitate the development of diagnostic and prognostic models. AutoPrognosis leverages state-of-the-art advances in automated machine learning to develop optimized machine learning pipelines, incorporates model explainability tools, and enables deployment of clinical demonstrators, without requiring significant technical expertise. To demonstrate AutoPrognosis 2.0, we provide an illustrative application where we construct a prognostic risk score for diabetes using the UK Biobank, a prospective study of 502,467 individuals. The models produced by our automated framework achieve greater discrimination for diabetes than expert clinical risk scores. We have implemented our risk score as a web-based decision support tool, which can be publicly accessed by patients and clinicians. By open-sourcing our framework as a tool for the community, we aim to provide clinicians and other medical practitioners with an accessible resource to develop new risk scores, personalized diagnostics, and prognostics using machine learning techniques. Software: https://github.com/vanderschaarlab/AutoPrognosis.
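The general idea of automated pipeline optimization can be sketched with plain scikit-learn, as below. Note this is not AutoPrognosis's own API (see the linked repository for that); it is a minimal stand-in that searches over one hyperparameter of a fixed imputation-scaling-model pipeline.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Generic illustration of automated pipeline search -- the idea behind
# AutoML frameworks like the one described above, NOT AutoPrognosis itself.
X, y = load_breast_cancer(return_X_y=True)   # stand-in clinical dataset

pipe = Pipeline([
    ("impute", SimpleImputer()),             # handle missing covariates
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(
    pipe,
    {"model__C": [0.01, 0.1, 1.0, 10.0]},    # candidate configurations
    scoring="roc_auc",
    cv=5,
)
search.fit(X, y)
print("best AUC:", round(search.best_score_, 3), search.best_params_)
```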
8
Forecasting carbon emissions future prices using the machine learning methods. Ann Oper Res 2023:1-32. [PMID: 36777411; PMCID: PMC9901414; DOI: 10.1007/s10479-023-05188-7]
Abstract
Due to the uncertainty surrounding the coupling and decoupling of natural gas, oil, and energy commodity futures prices, the current study seeks to investigate the interactions between energy commodity futures, oil price futures, and carbon emission futures from a forecasting perspective with implications for environmental sustainability. We employed daily data on natural gas futures prices, crude oil futures prices, carbon futures prices, and Dow Jones energy commodity futures prices from January 2018 to October 2021. For empirical analysis, we applied machine learning tools including traditional multiple linear regression (MLR), artificial neural network (ANN), support vector regression (SVR), and long short-term memory (LSTM). The machine learning analysis provides two key findings. First, the nonlinear frameworks outperform linear models in developing the relationships between future oil prices (crude oil and heating oil) and carbon emission futures prices. Second, the machine learning findings establish that when oil prices and natural gas prices display extreme movement, carbon emission futures prices react nonlinearly. Understanding the nonlinear dynamics of extreme movements can help policymakers design climate and environmental policies, as well as adjust natural gas and oil futures prices. We discuss important implications for the Sustainable Development Goals, mainly SDG 7 and SDG 12.
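A minimal version of the linear-versus-nonlinear comparison might look like the following, with synthetic series standing in for the 2018-2021 futures data; the data-generating process, models, and parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR

# Compare a linear model (MLR) with a nonlinear one (SVR) on synthetic
# data standing in for daily futures prices: predict carbon futures
# from oil and gas futures, with a deliberately nonlinear link.
rng = np.random.default_rng(0)
n = 500
oil = rng.normal(70, 10, n)
gas = rng.normal(4, 1, n)
carbon = 0.3 * oil + 2.0 * np.tanh(gas - 4) + rng.normal(0, 1, n)

X, y = np.column_stack([oil, gas]), carbon
X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

for name, model in [("MLR", LinearRegression()), ("SVR", SVR(C=10.0))]:
    pred = model.fit(X_train, y_train).predict(X_test)
    print(f"{name}: MAE {mean_absolute_error(y_test, pred):.3f}")
```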
9
Detecting the socio-economic drivers of confidence in government with eXplainable Artificial Intelligence. Sci Rep 2023; 13:839. [PMID: 36646810; PMCID: PMC9841965; DOI: 10.1038/s41598-023-28020-5]
Abstract
The European Quality of Government Index (EQI) measures the perceived level of government quality by European Union citizens, combining surveys on corruption, impartiality and quality of provided services. It is, thus, an index based on individual subjective evaluations. Understanding the most relevant objective factors affecting the EQI outcomes is important for both evaluators and policy makers, especially in view of the fact that perception of government integrity contributes to determine the level of civic engagement. In our research, we employ methods of Artificial Intelligence and complex systems physics to measure the impact on the perceived government quality of multifaceted variables, describing territorial development and citizen well-being, from an economic, social and environmental viewpoint. Our study, focused on a set of regions in the European Union at a subnational scale, leads to identifying the territorial and demographic drivers of citizens' confidence in government institutions. In particular, we find that the 2021 EQI values are significantly related to two indicators: the first one is the difference between female and male labour participation rates, and the second one is a proxy of wealth and welfare such as the average number of rooms per inhabitant. This result corroborates the idea of a central role played by labour gender equity and housing policies in government confidence building. In particular, the relevance of the former indicator in EQI prediction results from a combination of positive conditions such as equal job opportunities, vital labour market, welfare and availability of income sources, while the role of the latter is possibly amplified by the lockdown policies related to the COVID-19 pandemic. The analysis is based on combining regression, to predict EQI from a set of publicly available indicators, with the eXplainable Artificial Intelligence approach, which quantifies the impact of each indicator on the prediction. Such a procedure does not require any ad-hoc hypotheses on the functional dependence of EQI on the indicators used to predict it. Finally, using network science methods concerning community detection, we investigate how the impact of relevant indicators on EQI prediction changes throughout European regions. Thus, the proposed approach makes it possible to identify the objective factors underlying citizens' perception of government quality in different territorial contexts, providing the methodological basis for the development of a quantitative tool for policy design.
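The core regression-plus-attribution loop can be sketched as follows. Synthetic indicators stand in for the regional data, and scikit-learn's permutation importance stands in for the SHAP-style attribution the authors employ; both quantify each indicator's impact on the prediction without assuming a functional form.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Regression + model-agnostic attribution on synthetic regional indicators
# (real inputs would be the publicly available indicators the paper uses).
rng = np.random.default_rng(1)
n = 300
labour_gap = rng.normal(0, 1, n)   # female-male labour participation gap
rooms = rng.normal(0, 1, n)        # average rooms per inhabitant
noise_ind = rng.normal(0, 1, n)    # an irrelevant indicator, for contrast
eqi = 0.8 * labour_gap + 0.5 * rooms + rng.normal(0, 0.3, n)

X = np.column_stack([labour_gap, rooms, noise_ind])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, eqi)

imp = permutation_importance(model, X, eqi, n_repeats=10, random_state=0)
for name, score in zip(["labour_gap", "rooms", "noise"], imp.importances_mean):
    print(f"{name}: importance {score:.3f}")
```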
10
Can Citizenship Education Benefit Computing? Informatics 2022. [DOI: 10.3390/informatics9040093]
Abstract
A recurring motif in recent scholarship in the computing ethics and society studies (CESS) subfield within computing has been the call for a wider recognition of the social and political nature of computing work. Such calls have highlighted the limitations of an ethics-only approach to covering social and political topics such as bias, fairness, equality, and justice within computing curricula. However, given the technically focused background of most computing educators, it is not necessarily clear how political topics should best be addressed in computing courses. This paper proposes that one helpful way to do so is via the well-established pedagogy of citizenship education, and as such it endeavors to introduce the discourse of citizenship education to an audience of computing educators. In particular, the change within citizenship education away from its early focus on personal responsibility and duty to its current twin focus on engendering civic participation in one's community along with catalyzing critical attitudes to the realities of today's social, political, and technical worlds, is especially relevant to computing educators in light of computing's new-found interest in the political education of its students. Related work in digital literacy education is also discussed.
11
Toward children-centric AI: a case for a growth model in children-AI interactions. AI & Society 2022. [DOI: 10.1007/s00146-022-01579-9]
Abstract
This article advocates for a hermeneutic model for children-AI (age group 7–11 years) interactions in which the desirable purpose of children’s interaction with artificial intelligence (AI) systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children. They can learn from an encounter with children and incorporate data from interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing the interpretation of bias within the artificial intelligence (AI) ethics and responsible AI discourse. Interpreting bias as a preference and distinguishing between positive (pro-diversity) and negative (discriminative) bias is needed as this would serve children's healthy psychological and moral development. The human-centric AI discourse advocates for an alignment of capacities of humans and capabilities of machines by a focus both on the purpose of humans and on the purpose of machines for humans. The emphasis on mitigating negative biases through data protection, AI law, and certain value-sensitive design frameworks demonstrates that the purpose of the machine for humans is prioritized over the purpose of humans. These top–down frameworks often narrow down the purpose of machines to do-no-harm and they miss accounting for the bottom-up views and developmental needs of children. Therefore, applying a growth model for children-AI interactions that incorporates learning from negative AI-mediated biases and amplifying positive ones would positively benefit children’s development and children-centric AI innovation. Consequently, the article explores: What challenges arise from mitigating negative biases and amplifying positive biases in children-AI interactions and how can a growth model address these? To answer this, the article recommends applying a growth model in open AI co-creational spaces with and for children. In such spaces human–machine and human–human value alignment methods can be collectively applied in such a manner that children can (1) become sensitized toward the effects of AI-mediated negative biases on themselves and others; (2) enable children to appropriate and imbue top-down values of diversity, and non-discrimination with their meanings; (3) enforce children’s right to identity and non-discrimination; (4) guide children in developing an inclusive mindset; (5) inform top-down normative AI frameworks by children’s bottom-up views; (6) contribute to design criteria for children-centric AI. Applying such methods under a growth model in AI co-creational spaces with children could yield an inclusive co-evolution between responsible young humans in the loop and children-centric AI systems.
12
Critical review of indicators, metrics, methods, and tools for monitoring and evaluation of biofortification programs at scale. Front Nutr 2022; 9:963748. [PMID: 36313073; PMCID: PMC9607891; DOI: 10.3389/fnut.2022.963748]
Abstract
Sound monitoring and evaluation (M&E) systems are needed to inform effective biofortification program management and implementation. Despite the existence of M&E frameworks for biofortification programs, the use of indicators, metrics, methods, and tools (IMMT) are currently not harmonized, rendering the tracking of biofortification programs difficult. We aimed to compile IMMT for M&E of existing biofortification programs and recommend a sub-set of high-level indicators (HLI) for a harmonized global M&E framework. We conducted (1) a mapping review to compile IMMT for M&E biofortification programs; (2) semi-structured interviews (SSIs) with biofortification programming experts (and other relevant stakeholders) to contextualize findings from step 1; and (3) compiled a generic biofortification program Theory of Change (ToC) to use it as an analytical framework for selecting the HLI. This study revealed diversity in seed systems and crop value chains across countries and crops, resulting in differences in M&E frameworks. Yet, sufficient commonalities between implementation pathways emerged. A set of 17 HLI for tracking critical results along the biofortification implementation pathway represented in the ToC is recommended for a harmonized global M&E framework. Further research is needed to test, revise, and develop mechanisms to harmonize the M&E framework across programs, institutions, and countries.
13
AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. AI & Society 2022; 38:665-677. [PMID: 36212226; PMCID: PMC9527733; DOI: 10.1007/s00146-022-01553-5]
Abstract
In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism is presented to make a theoretical contribution to how the understanding of inclusion and exclusion within the field of AI can be expanded to include the category of age. AI ageism can be defined as practices and ideologies operating within the field of AI which exclude, discriminate, or neglect the interests, experiences, and needs of the older population, and can be manifested in five interconnected forms: (1) age biases in algorithms and datasets (technical level), (2) age stereotypes, prejudices and ideologies of actors in AI (individual level), (3) invisibility of old age in discourses on AI (discourse level), (4) discriminatory effects of use of AI technology on different age groups (group level), (5) exclusion as users of AI technology, services and products (user level). Additionally, the paper provides empirical illustrations of the way ageism operates in these five forms.
14
Intelligent Posture Training: Machine-Learning-Powered Human Sitting Posture Recognition Based on a Pressure-Sensing IoT Cushion. Sensors 2022; 22:5337. [PMID: 35891018; PMCID: PMC9320787; DOI: 10.3390/s22145337]
Abstract
We present a solution for intelligent posture training based on accurate, real-time sitting posture monitoring using the LifeChair IoT cushion and supervised machine learning from pressure sensing and user body data. We demonstrate our system's performance in sitting posture and seated stretch recognition tasks, with over 98.82% accuracy in recognizing 15 different sitting postures and 97.94% in recognizing six seated stretches. We also show that divergence in user BMI significantly affects the accuracy of machine-learning-based posture recognition. We validate our method's performance in five different real-world workplace environments and discuss training strategies for the machine learning models. Finally, we propose the first smart posture data-driven stretch recommendation system in alignment with physiotherapy standards.
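A stripped-down version of such a supervised pipeline is sketched below; the sensor count, posture classes, pressure patterns, and BMI feature are synthetic assumptions standing in for the cushion's sensor array and user body data, not LifeChair data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Posture classification sketch on synthetic pressure-sensor readings.
rng = np.random.default_rng(2)
n_samples, n_sensors, n_postures = 600, 16, 5   # illustrative sizes
posture = rng.integers(0, n_postures, n_samples)

# Each posture produces a characteristic mean pressure pattern plus noise.
patterns = rng.normal(0, 1, (n_postures, n_sensors))
pressure = patterns[posture] + rng.normal(0, 0.5, (n_samples, n_sensors))
bmi = rng.normal(24, 3, (n_samples, 1))          # appended user body feature

X = np.hstack([pressure, bmi])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:",
      cross_val_score(clf, X, posture, cv=5).mean())
```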
15
Investigating the negative bias towards artificial intelligence: Effects of prior assignment of AI-authorship on the aesthetic appreciation of abstract paintings. Comput Human Behav 2022. [DOI: 10.1016/j.chb.2022.107406]
16
Meeting sustainable development goals via robotics and autonomous systems. Nat Commun 2022; 13:3559. [PMID: 35729171; PMCID: PMC9211790; DOI: 10.1038/s41467-022-31150-5]
Abstract
Robotics and autonomous systems are reshaping the world, changing healthcare, food production and biodiversity management. While they will play a fundamental role in delivering the UN Sustainable Development Goals, associated opportunities and threats are yet to be considered systematically. We report on a horizon scan evaluating robotics and autonomous systems impact on all Sustainable Development Goals, involving 102 experts from around the world. Robotics and autonomous systems are likely to transform how the Sustainable Development Goals are achieved, through replacing and supporting human activities, fostering innovation, enhancing remote access and improving monitoring. Emerging threats relate to reinforcing inequalities, exacerbating environmental change, diverting resources from tried-and-tested solutions and reducing freedom and privacy through inadequate governance. Although predicting future impacts of robotics and autonomous systems on the Sustainable Development Goals is difficult, thoroughly examining technological developments early is essential to prevent unintended detrimental consequences. Additionally, robotics and autonomous systems should be considered explicitly when developing future iterations of the Sustainable Development Goals to avoid reversing progress or exacerbating inequalities.
17
Investing in AI for social good: an analysis of European national strategies. AI & Society 2022; 38:479-500. [PMID: 35528248; PMCID: PMC9068863; DOI: 10.1007/s00146-022-01445-8]
Abstract
Artificial Intelligence (AI) has become a driving force in modern research, industry and public administration and the European Union (EU) is embracing this technology with a view to creating societal, as well as economic, value. This effort has been shared by EU Member States which were all encouraged to develop their own national AI strategies outlining policies and investment levels. This study focuses on how EU Member States are approaching the promise to develop and use AI for the good of society through the lens of their national AI strategies. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans contribute to the good of people and society as a whole. Our contribution consists of three parts: (i) a conceptualization of AI for social good highlighting the role of AI policy, in particular the one put forward by the European Commission (EC); (ii) a qualitative analysis of 15 European national strategies, mapping investment plans and suggesting their relation to the social good; and (iii) a reflection on the current status of investments in socially good AI and possible steps to move forward. Our study suggests that while European national strategies incorporate money allocations in the sphere of AI for social good (e.g. education), there is a broader variety of underestimated actions (e.g. multidisciplinary approach in STEM curricula and dialogue among stakeholders) that can boost the European commitment to sustainable and responsible AI innovation.
18
Our New Artificial Intelligence Infrastructure: Becoming Locked into an Unsustainable Future. Sustainability 2022. [DOI: 10.3390/su14084829]
Abstract
Artificial intelligence (AI) is becoming increasingly important for the infrastructures that support many of society’s functions. Transportation, security, energy, education, the workplace, and the government have all incorporated AI into their infrastructures for enhancement and/or protection. In this paper, we argue that not only is AI seen as a tool for augmenting existing infrastructures, but AI itself is becoming an infrastructure that many services of today and tomorrow will depend upon. Considering the vast environmental consequences associated with the development and use of AI, of which the world is only starting to learn, the necessity of addressing AI alongside the concept of infrastructure points toward the phenomenon of carbon lock-in. Carbon lock-in refers to society’s constrained ability to reduce carbon emissions technologically, economically, politically, and socially. These constraints are due to the inherent inertia created by entrenched technological, institutional, and behavioral norms. That is, the drive for AI adoption in virtually every sector of society will create dependencies and interdependencies from which it will be hard to escape. The crux of this paper boils down to this: in conceptualizing AI as infrastructure we can recognize the risk of lock-in, not just carbon lock-in but lock-in as it relates to all the physical needs to achieve the infrastructure of AI. This does not exclude the possibility of solutions arising with the rise of these technologies; however, given these points, it is of the utmost importance that we ask inconvenient questions regarding these environmental costs before becoming locked into this new AI infrastructure.
19
Abstract
Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework - computational ethics - that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.
20
The problem with trust: on the discursive commodification of trust in AI. AI & Society 2022. [DOI: 10.1007/s00146-022-01401-6]
Abstract
This commentary draws critical attention to the ongoing commodification of trust in policy and scholarly discourses of artificial intelligence (AI) and society. Based on an assessment of publications discussing the implementation of AI in governmental and private services, our findings indicate that this discursive trend towards commodification is driven by the need for a trusting population of service users to harvest data at scale and leads to the discursive construction of trust as an essential good on a par with data as raw material. This discursive commodification is marked by a decreasing emphasis on trust understood as the expected reliability of a trusted agent, and increased emphasis on instrumental and extractive framings of trust as a resource. This tendency, we argue, does an ultimate disservice to developers, users, and systems alike, insofar as it obscures the subtle mechanisms through which trust in AI systems might be built, making it less likely that it will be.
21
A review of some techniques for inclusion of domain-knowledge into deep neural networks. Sci Rep 2022; 12:1040. [PMID: 35058487; PMCID: PMC8776800; DOI: 10.1038/s41598-021-04590-0]
Abstract
We present a survey of ways in which existing scientific knowledge is included when constructing models with neural networks. The inclusion of domain-knowledge is of special interest not just for constructing scientific assistants, but also for many other areas that involve understanding data using human-machine collaboration. In many such instances, machine-based model construction may benefit significantly from being provided with human knowledge of the domain encoded in a sufficiently precise form. This paper examines the inclusion of domain-knowledge by means of changes to: the input, the loss-function, and the architecture of deep networks. The categorisation is for ease of exposition: in practice we expect a combination of such changes will be employed. In each category, we describe techniques that have been shown to yield significant changes in the performance of deep neural networks.
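Of the three kinds of changes surveyed, the loss-function route is the easiest to show compactly. The sketch below adds a hypothetical domain constraint ("the output should be non-decreasing in input feature 0") as a soft penalty on top of the MSE; the constraint, network, and data are all illustrative assumptions, not an example from the survey.

```python
import torch

# Domain knowledge injected through the loss function: a soft
# monotonicity penalty added to a standard regression loss.
model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 1))

def loss_fn(x, y, lam=1.0):
    pred = model(x)
    mse = torch.mean((pred - y) ** 2)
    # Finite-difference check of monotonicity in feature 0:
    # penalize cases where increasing x[:, 0] decreases the output.
    x_shift = x.clone()
    x_shift[:, 0] += 0.1
    violation = torch.relu(model(x) - model(x_shift))
    return mse + lam * violation.mean()

x = torch.randn(32, 3)                          # toy inputs
y = x[:, :1] * 2.0 + torch.randn(32, 1) * 0.1   # monotone ground truth
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                            # short training loop
    opt.zero_grad()
    loss = loss_fn(x, y)
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```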
22
Imaging Africa: a strategic approach to optical microscopy training in Africa. Nat Methods 2021; 18:847-855. [PMID: 34354292; DOI: 10.1038/s41592-021-01227-y]
23
A Framework for Evaluating and Disclosing the ESG Related Impacts of AI with the SDGs. Sustainability 2021. [DOI: 10.3390/su13158503]
Abstract
Artificial intelligence (AI) now permeates all aspects of modern society, and we are simultaneously seeing an increased focus on issues of sustainability in all human activities. All major corporations are now expected to account for their environmental and social footprint and to disclose and report on their activities. This is carried out through a diverse set of standards, frameworks, and metrics related to what is referred to as ESG (environment, social, governance), which is now, increasingly often, replacing the older term CSR (corporate social responsibility). The challenge addressed in this article is that none of these frameworks sufficiently capture the nature of the sustainability related impacts of AI. This creates a situation in which companies are not incentivised to properly analyse such impacts. Simultaneously, it allows the companies that are aware of negative impacts to not disclose them. This article proposes a framework for evaluating and disclosing ESG related AI impacts based on the United Nations’ Sustainable Development Goals (SDGs). The core of the framework is presented here, with examples of how it forces an examination of micro, meso, and macro level impacts, a consideration of both negative and positive impacts, and an accounting for ripple effects and interlinkages between the different impacts. Such a framework helps make analyses of AI related ESG impacts more structured and systematic, more transparent, and it allows companies to draw on research in AI ethics in such evaluations. In the closing section, Microsoft’s sustainability reporting from 2018 and 2019 is used as an example of how sustainability reporting is currently carried out, and how it might be improved by using the approach here advocated.
24
Abstract
It has been the historic responsibility of the social sciences to investigate human societies. Fulfilling this responsibility requires social theories, measurement models and social data. Most existing theories and measurement models in the social sciences were not developed with the deep societal reach of algorithms in mind. The emergence of 'algorithmically infused societies'-societies whose very fabric is co-shaped by algorithmic and human behaviour-raises three key challenges: the insufficient quality of measurements, the complex consequences of (mis)measurements, and the limits of existing social theories. Here we argue that tackling these challenges requires new social theories that account for the impact of algorithmic systems on social realities. To develop such theories, we need new methodologies for integrating data and measurements into theory construction. Given the scale at which measurements can be applied, we believe measurement models should be trustworthy, auditable and just. To achieve this, the development of measurements should be transparent and participatory, and include mechanisms to ensure measurement quality and identify possible harms. We argue that computational social scientists should rethink what aspects of algorithmically infused societies should be measured, how they should be measured, and the consequences of doing so.