1. Ayana G, Dese K, Daba Nemomssa H, Habtamu B, Mellado B, Badu K, Yamba E, Faye SL, Ondua M, Nsagha D, Nkweteyim D, Kong JD. Decolonizing global AI governance: assessment of the state of decolonized AI governance in Sub-Saharan Africa. Royal Society Open Science 2024; 11:231994. PMID: 39113766; PMCID: PMC11303018; DOI: 10.1098/rsos.231994.
Abstract
Global artificial intelligence (AI) governance must prioritize equity, embrace a decolonial mindset, and grant Global South countries the authority to spearhead solution creation. Decolonization is crucial for dismantling Western-centric cognitive frameworks and mitigating biases. Integrating a decolonial approach to AI governance involves recognizing persistent colonial repercussions, which lead to biases in AI solutions and to disparities in AI access based on gender, race, geography, income and societal factors. This paradigm shift necessitates deliberate efforts to deconstruct the imperial structures governing knowledge production that perpetuate unequal global access to resources and bias. This research evaluates Sub-Saharan African (SSA) progress in decolonizing AI governance, focusing on indicators such as AI governance institutions, national strategies, sovereignty prioritization, data protection regulations, and adherence to local data usage requirements. Results show limited progress: of the ten countries evaluated, only Rwanda is notably responsive to decolonization, 80% are 'decolonization-aware', and one is 'decolonization-blind'. The paper provides a detailed analysis of each nation and offers recommendations for fostering decolonization, including stakeholder involvement, addressing inequalities, promoting ethical AI, supporting local innovation, building regional partnerships, capacity building, public awareness, and inclusive governance. This paper contributes to elucidating the challenges and opportunities associated with decolonization in SSA countries, thereby enriching the ongoing discourse on global AI governance.
Affiliation(s)
- Gelan Ayana
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Hundessa Daba Nemomssa
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Bontu Habtamu
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Bruce Mellado
- The University of the Witwatersrand, Private Bag 3, Johannesburg, Wits 2050, South Africa
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Kingsley Badu
- Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Edmund Yamba
- Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Sylvain Landry Faye
- Cheikh Anta Diop University, Avenue Cheikh Anta Diop, Dakar, Senegal
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Moise Ondua
- The University of Ngaoundere, PO Box 454, Ngaoundere, Adamawa Province, Cameroon
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Dickson Nsagha
- The University of Buea, PO Box 63, Buea, South West Province, Cameroon
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Denis Nkweteyim
- The University of Buea, PO Box 63, Buea, South West Province, Cameroon
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Jude Dzevela Kong
- Artificial Intelligence & Mathematical Modeling Lab (AIMM Lab), Dalla Lana School of Public Health, University of Toronto, 155 College St Room 500, Toronto, ON M5T 3M7, Canada
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
2. Hasanov I, Virtanen S, Hakkala A, Isoaho J. Application of Large Language Models in Cybersecurity: A Systematic Literature Review. IEEE Access 2024; 12:176751-176778. DOI: 10.1109/access.2024.3505983.
Affiliation(s)
- Ismayil Hasanov
- Department of Computing, University of Turku, Turku, Finland
- Seppo Virtanen
- Department of Computing, University of Turku, Turku, Finland
- Antti Hakkala
- Department of Computing, University of Turku, Turku, Finland
- Jouni Isoaho
- Department of Computing, University of Turku, Turku, Finland
3. Mökander J, Sheth M, Gersbro-Sundler M, Blomgren P, Floridi L. Challenges and best practices in corporate AI governance: Lessons from the biopharmaceutical industry. Frontiers in Computer Science 2022. DOI: 10.3389/fcomp.2022.1068361.
Abstract
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.