1. Al-Zahrani AM. Unveiling the shadows: Beyond the hype of AI in education. Heliyon 2024;10:e30696. PMID: 38737255; PMCID: PMC11087970; DOI: 10.1016/j.heliyon.2024.e30696.
Abstract
Despite the wave of enthusiasm for the role of Artificial Intelligence (AI) in reshaping education, critical voices urge a more tempered approach. This study investigates the less-discussed 'shadows' of AI implementation in educational settings, focusing on potential negatives that may accompany its integration. Through a multi-phased exploration consisting of content analysis and survey research, the study develops and validates a theoretical model that pinpoints several areas of concern. The initial phase, a systematic literature review, yielded 56 relevant studies from which the model was crafted. The subsequent survey with 260 participants from a Saudi Arabian university aimed to validate the model. Findings confirm concerns about human connection, data privacy and security, algorithmic bias, transparency, critical thinking, access equity, ethical issues, teacher development, reliability, and the consequences of AI-generated content. They also highlight correlations between various AI-associated concerns, suggesting intertwined consequences rather than isolated issues. For instance, enhancements in AI transparency could simultaneously support teacher professional development and foster better student outcomes. Furthermore, the study acknowledges the transformative potential of AI but cautions against its unexamined adoption in education. It advocates for comprehensive strategies to maintain human connections, ensure data privacy and security, mitigate biases, enhance system transparency, foster creativity, reduce access disparities, emphasize ethics, prepare teachers, ensure system reliability, and regulate AI-generated content. Such strategies underscore the need for holistic policymaking to leverage AI's benefits while safeguarding against its disadvantages.
Affiliation(s)
- Abdulrahman M. Al-Zahrani
- Department of Learning Design and Technology, Faculty of Education, University of Jeddah, P.O. box 15758, 21454, Jeddah, Saudi Arabia
2. Bi X, Lin L, Chen Z, Ye J. Artificial Intelligence for Surface-Enhanced Raman Spectroscopy. Small Methods 2024;8:e2301243. PMID: 37888799; DOI: 10.1002/smtd.202301243.
Abstract
Surface-enhanced Raman spectroscopy (SERS), well established as a sensitive fingerprinting analytical technique, has demonstrated high practical value in a broad range of fields, including biomedicine, environmental protection, and food safety. In the continual pursuit of ever more sensitive, robust, and comprehensive sensing and imaging, advances keep emerging across the whole SERS pipeline, from the design of substrates and reporter molecules, synthetic route planning, and instrument refinement to data preprocessing and analysis methods. Artificial intelligence (AI), created to imitate and eventually exceed human behaviors, has shown its power in learning high-level representations and recognizing complicated patterns with exceptional automaticity. Confronted with intertwined influential factors and explosive data sizes, AI has therefore been increasingly leveraged in all the above-mentioned aspects of SERS, accelerating systematic optimization and deepening understanding of the underlying physics and spectral data far beyond what human labor and conventional computation can achieve. This review summarizes recent progress in SERS through its integration with AI and offers new insights into the challenges and perspectives, aiming to put SERS on the fast track.
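The review surveys AI applied to SERS data analysis. As a purely illustrative sketch (not taken from the review), the kind of supervised spectral classification it discusses might look like the following; the wavenumber axis, peak positions, noise level, and model choice are all invented for the demo:

```python
# Illustrative only: a minimal ML pipeline classifying synthetic
# Raman-like spectra, in the spirit of the AI-for-SERS data-analysis
# methods the review surveys. All data here are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 1800, 256)  # hypothetical cm^-1 axis

def spectra(peak_cm, n=200):
    """Simulate n noisy spectra with one Gaussian band at peak_cm."""
    band = np.exp(-((wavenumbers - peak_cm) ** 2) / (2 * 30.0 ** 2))
    return band + 0.05 * rng.standard_normal((n, wavenumbers.size))

# Two synthetic "analytes" distinguished only by band position.
X = np.vstack([spectra(1000), spectra(1350)])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

Real SERS work replaces the toy simulator with measured spectra and typically needs baseline correction and more expressive models, but the train/validate structure is the same.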
Affiliation(s)
- Xinyuan Bi
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Li Lin
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Zhou Chen
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Jian Ye
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
- Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
3. Haltaufderheide J, Viero D, Krämer D. Cultural Implications Regarding Privacy in Digital Contact Tracing Algorithms: Method Development and Empirical Ethics Analysis of a German and a Japanese Approach to Contact Tracing. J Med Internet Res 2023;25:e45112. PMID: 37379062; PMCID: PMC10365635; DOI: 10.2196/45112.
Abstract
BACKGROUND: Digital contact tracing algorithms (DCTAs) have emerged as a means of supporting pandemic containment strategies and protecting populations from the adverse effects of COVID-19. However, their impact on users' privacy and autonomy has been heavily debated. Although privacy is often viewed as the ability to control access to information, recent approaches consider it a norm that structures social life. In this regard, cultural factors are crucial in evaluating the appropriateness of information flows in DCTAs. An important part of any ethical evaluation of DCTAs is therefore to develop an understanding of their information flow and their contextual situatedness; however, only limited studies and conceptual approaches are currently available in this regard.
OBJECTIVE: This study aimed to develop a case study methodology that includes contextual cultural factors in ethical analysis and to present exemplary results of a subsequent analysis of 2 DCTAs following this approach.
METHODS: We conducted a comparative qualitative case study of the algorithm of the Google Apple Exposure Notification Framework, as exemplified in the German Corona-Warn-App, and the Japanese Computation of Infection Risk via Confidential Locational Entries (CIRCLE) method. The methodology was based on a postphenomenological perspective combined with empirical investigation of the technological artifacts within their context of use. An ethics-of-disclosure approach was used to focus on the social ontologies created by the algorithms and to highlight their connection to questions about privacy.
RESULTS: Both algorithms represent a social encounter between 2 subjects, who gain significance in terms of risk against the background of a representation of their temporal and spatial properties. However, the comparative analysis reveals 2 major differences. The Google Apple Exposure Notification Framework prioritizes temporality over spatiality; spatiality is reduced to distance, without direction or orientation. The CIRCLE method, in contrast, prioritizes spatiality over temporality. These differing concepts and prioritizations align with important cultural differences in how basic concepts such as subject, time, and space are considered in Eastern and Western thought.
CONCLUSIONS: The differences noted in this study lead to 2 distinct ethical questions about privacy, each raised against its respective background. These findings have important implications for the ethical evaluation of DCTAs, suggesting that a culture-sensitive assessment is required to ensure that technologies fit their context and raise fewer concerns about their ethical acceptability. Methodologically, our study provides a basis for an intercultural approach to the ethics of disclosure, allowing for cross-cultural dialogue that can overcome mutual implicit biases and blind spots rooted in cultural differences.
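The contrast the abstract draws, duration-dominated versus proximity-dominated risk scoring, can be caricatured with two toy scores. These formulas, cutoffs, and weights are invented for illustration and come from neither the Google Apple Exposure Notification Framework nor CIRCLE:

```python
# Toy illustration (NOT the real GAEN or CIRCLE algorithms): two ways
# of scoring the same encounter, one privileging temporal overlap and
# one privileging spatial proximity.

def risk_time_first(minutes: float, distance_m: float) -> float:
    """Duration-dominated score: distance only gates whether it counts."""
    within_range = 1.0 if distance_m <= 2.0 else 0.0  # hypothetical 2 m cutoff
    return minutes * within_range

def risk_space_first(minutes: float, distance_m: float) -> float:
    """Proximity-dominated score: time only gates whether it counts."""
    long_enough = 1.0 if minutes >= 5.0 else 0.0  # hypothetical 5 min cutoff
    return long_enough / max(distance_m, 0.5)

# A long encounter at moderate distance vs a brief one at close range:
long_far = (risk_time_first(30, 1.8), risk_space_first(30, 1.8))
short_close = (risk_time_first(6, 0.6), risk_space_first(6, 0.6))
```

The point of the toy is that the two framings rank the same pair of encounters in opposite orders, which is exactly the kind of design prioritization the study ties to cultural context.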
Affiliation(s)
- Joschka Haltaufderheide
- Medical Ethics With Focus on Digitization, Joint Faculty of Health Sciences Brandenburg, University of Potsdam, Potsdam, Germany
- Davide Viero
- Faculty of Educational Sciences, University of Duisburg-Essen, Essen, Germany
- Dennis Krämer
- Faculty of Social Sciences, Georg-August-University Göttingen, Göttingen, Germany
4. Mökander J, Sheth M, Gersbro-Sundler M, Blomgren P, Floridi L. Challenges and best practices in corporate AI governance: Lessons from the biopharmaceutical industry. Front Comput Sci 2022. DOI: 10.3389/fcomp.2022.1068361.
Abstract
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
5. Dara R, Hazrati Fard SM, Kaur J. Recommendations for ethical and responsible use of artificial intelligence in digital agriculture. Front Artif Intell 2022;5:884192. PMID: 35968036; PMCID: PMC9372537; DOI: 10.3389/frai.2022.884192.
Abstract
Artificial intelligence (AI) applications are an integral and emerging component of digital agriculture. AI can help ensure sustainable agricultural production by enhancing operations and decision-making. Recommendations about soil conditions and pesticides, or automatic devices for milking and apple picking, are examples of AI applications in digital agriculture. Although AI offers many benefits in farming, AI systems may raise ethical issues and risks that should be assessed and proactively managed. Poorly designed and configured intelligent systems may cause harm and unintended consequences in digital agriculture. Invasion of farmers' privacy, damage to animal welfare from robotic technologies, and lack of accountability for issues arising from the use of AI tools are only some examples of the ethical challenges in digital agriculture. This paper examines the ethical challenges of AI use in agriculture in six categories: fairness, transparency, accountability, sustainability, privacy, and robustness. It further provides recommendations for agriculture technology providers (ATPs) and policymakers on how to proactively mitigate ethical issues that may arise from the use of AI in farming. These recommendations cover a wide range of ethical considerations, such as addressing farmers' privacy concerns, ensuring reliable AI performance, enhancing sustainability in AI systems, and reducing AI bias.
6. An AI ethics 'David and Goliath': value conflicts between large tech companies and their employees. AI Soc 2022. DOI: 10.1007/s00146-022-01430-1.
Abstract
Artificial intelligence ethics requires a united approach from policymakers, AI companies, and individuals, in the development, deployment, and use of these technologies. However, sometimes discussions can become fragmented because of the different levels of governance (Schmitt in AI Ethics 1–12, 2021) or because of different values, stakeholders, and actors involved (Ryan and Stahl in J Inf Commun Ethics Soc 19:61–86, 2021). Recently, these conflicts became very visible, with such examples as the dismissal of AI ethics researcher Dr. Timnit Gebru from Google and the resignation of whistle-blower Frances Haugen from Facebook. Underpinning each debacle was a conflict between the organisation's economic and business interests and the morals of their employees. This paper will examine tensions between the ethics of AI organisations and the values of their employees, by providing an exploration of the AI ethics literature in this area, and a qualitative analysis of three workshops with AI developers and practitioners. Common ethical and social tensions (such as power asymmetries, mistrust, societal risks, harms, and lack of transparency) will be discussed, along with proposals on how to avoid or reduce these conflicts in practice (e.g., building trust, fair allocation of responsibility, protecting employees' autonomy, and encouraging ethical training and practice). Altogether, we suggest the following steps to help reduce ethical issues within AI organisations: improved and diverse ethics education and training within businesses; internal and external ethics auditing; the establishment of AI ethics ombudsmen, AI ethics review committees and an AI ethics watchdog; as well as access to trustworthy AI ethics whistle-blower organisations.
7. Slota SC, Fleischmann KR, Greenberg S, Verma N, Cummings B, Li L, Shenefiel C. Many hands make many fingers to point: challenges in creating accountable AI. AI Soc 2021. DOI: 10.1007/s00146-021-01302-0.
8. Hermann E, Hermann G, Tremblay JC. Ethical Artificial Intelligence in Chemical Research and Development: A Dual Advantage for Sustainability. Sci Eng Ethics 2021;27:45. PMID: 34231042; PMCID: PMC8260511; DOI: 10.1007/s11948-021-00325-6.
Abstract
Artificial intelligence can be a game changer to address the global challenge of humanity-threatening climate change by fostering sustainable development. Since chemical research and development lay the foundation for innovative products and solutions, this study presents a novel chemical research and development process backed with artificial intelligence and guiding ethical principles to account for both process- and outcome-related sustainability. Particularly in ethically salient contexts, ethical principles have to accompany research and development powered by artificial intelligence to promote social and environmental good and sustainability (beneficence) while preventing any harm (non-maleficence) for all stakeholders (i.e., companies, individuals, society at large) affected.
Affiliation(s)
- Erik Hermann
- IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany.