1
|
Liebig L, Güttel L, Jobin A, Katzenbach C. Subnational AI policy: shaping AI in a multi-level governance system. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01561-5] [Indexed: 11/30/2022]
Abstract
The promises and risks of Artificial Intelligence permeate current policy statements and have attracted much attention from AI governance research. However, most analyses focus exclusively on AI policy at the national and international level, overlooking existing federal governance structures. This is surprising because AI is connected to many policy areas in which competences are already distributed between the national and subnational level, such as research or economic policy. Addressing this gap, this paper argues that more attention should be dedicated to subnational efforts to shape AI and asks, through a case study of Germany’s 16 states, which themes are discussed in subnational AI policy documents. Our qualitative analysis of 34 AI policy documents issued at the subnational level demonstrates that subnational efforts focus on knowledge transfer between research and industry actors, the commercialization of AI, the different economic identities of the German states, and the incorporation of ethical principles. Because federal states play an active role in AI policy, analysing AI as a policy issue on different levels of government is necessary and will contribute to a better understanding of the development and implementation of AI strategies in different national contexts.
|
2
|
van Kersbergen K, Tinggaard Svendsen G. Social trust and public digitalization. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01570-4] [Indexed: 11/29/2022]
|
3
|
Gianni R, Lehtinen S, Nieminen M. Governance of Responsible AI: From Ethical Guidelines to Cooperative Policies. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.873437] [Indexed: 12/22/2022] Open Access
Abstract
The increasingly pervasive role of Artificial Intelligence (AI) in our societies is radically changing the way that social interaction takes place within all fields of knowledge. The obvious opportunities in terms of accuracy, speed and originality of research are accompanied by questions about the possible risks, and the consequent responsibilities, involved in such a disruptive technology. In recent years, this twofold aspect has led to an increase in analyses of the ethical and political implications of AI. As a result, there has been a proliferation of documents that seek to define the strategic objectives of AI together with the ethical precautions required for its acceptable development and deployment. Although the number of documents is certainly significant, doubts remain as to whether they can effectively safeguard democratic decision-making processes. Indeed, a common feature of the national strategies and ethical guidelines published in recent years is that they only timidly address how to integrate civil society into the selection of AI objectives. Although scholars increasingly advocate the inclusion of civil society, it remains unclear which modalities should be selected. If both national strategies and ethics guidelines appear to neglect the necessary role of democratic scrutiny in identifying the challenges, objectives, strategies and appropriate regulatory measures that such a disruptive technology should undergo, the question is then: what measures can we advocate that are able to overcome such limitations? Considering the necessity to treat AI holistically as a social object, what theoretical framework can we adopt in order to implement a model of governance? What conceptual methodology shall we develop that is able to offer fruitful insights for the governance of AI?
Drawing on the insights of classical pragmatist scholars, we propose a framework of democratic experimentation based on the method of social inquiry. In this article, we first summarize some of the main points of discussion around the potential societal, ethical and political issues of AI systems. We then identify the main answers and solutions by analyzing current national strategies and ethics guidelines. After showing the theoretical and practical limits of these approaches, we outline an alternative proposal that can help strengthen the active role of society in the discussion about the role and extent of AI systems.
|