1. Adjovi ISM. A worldwide itinerary of research ethics in science for a better social responsibility and justice: a bibliometric analysis and review. Front Res Metr Anal 2025;10:1504937. PMID: 40012693; PMCID: PMC11850331; DOI: 10.3389/frma.2025.1504937.
Abstract
This study provides a comprehensive overview of research ethics in science using an approach that combines bibliometric analysis and systematic review. It highlights the importance of ethical conduct in scientific research for maintaining integrity, credibility, and societal relevance. The findings reveal a growing awareness of ethical issues, as evidenced by the development of numerous guidelines, codes of conduct, and oversight institutions. However, significant challenges persist, including the lack of standardized approaches for detecting misconduct, limited understanding of the factors contributing to unethical behavior, and unclear definitions of ethical violations. To address these issues, the study recommends promoting transparency and data sharing, enhancing education and training programs, establishing robust mechanisms to identify and address misconduct, and encouraging collaborative research and open science practices. It emphasizes the need for a collaborative approach to restore public confidence in science, protect its positive impact, and effectively address global challenges while upholding the principles of social responsibility and justice. This comprehensive approach is crucial for maintaining research credibility, conserving resources, and safeguarding both research participants and the public.
Affiliation(s)
- Ingrid Sonya Mawussi Adjovi
- Ethics and Social Responsibility Research Unit (UR-ERS), Research Laboratory on Innovation for Agricultural Development (LRIDA), University of Parakou, Parakou, Benin
2. Finkelstein SR, Daraboina R, Leschewski A, Michael S. A machine learning (ML) approach to understanding participation in government nutrition programs. Curr Opin Psychol 2024;58:101830. PMID: 38959778; DOI: 10.1016/j.copsyc.2024.101830.
Abstract
Machine learning (ML) affords researchers tools to advance beyond the research methods commonly employed in psychology, business, and public policy studies of federal nutrition programs and participant food decision-making. ML is a subdomain of AI applied to feature extraction, a crucial step in decision-making; the extracted features drive context-specific automated decisions, yielding predictive AI models. Whereas many prior studies rely on retrospective, static, "one-shot" decision-making in controlled laboratory environments, ML allows researchers to refine predictions about participation and food behaviors using large-scale datasets. We propose a case study using ML to predict an aspect of participation in a large, publicly funded nutrition education program (the Expanded Food and Nutrition Education Program). Participation has important downstream implications for diet quality, food security, and other nutrition-related decisions. We then suggest a process for validating the ML insights using qualitative research and survey data.
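As a rough illustration of the kind of participation model this case study proposes, the sketch below fits a from-scratch logistic classifier on invented household features. The feature names, data, and model choice are all assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch: predicting nutrition-program participation from
# invented household features with a tiny logistic-regression classifier.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on the logistic loss; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Predicted probability of participation for one household."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy data: [income_to_poverty_ratio, household_size]; 1 = participates.
X = [[0.5, 4], [0.7, 5], [0.9, 3], [1.8, 2], [2.5, 1], [2.0, 3]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
print(round(predict(w, b, [0.6, 4]), 2), round(predict(w, b, [2.4, 2]), 2))
```

In practice the paper envisions large-scale administrative datasets and richer feature sets; the point here is only the shape of the prediction task.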
Affiliation(s)
- Rohini Daraboina
- South Dakota State University, Ness School of Management and Economics, USA
- Andrea Leschewski
- South Dakota State University, Ness School of Management and Economics, USA
- Semhar Michael
- South Dakota State University, J.J. Lohr College of Engineering, USA
3. Ukanwa K. Algorithmic bias: Social science research integration through the 3-D Dependable AI Framework. Curr Opin Psychol 2024;58:101836. PMID: 38981371; DOI: 10.1016/j.copsyc.2024.101836.
Abstract
Algorithmic bias has emerged as a critical challenge in the age of responsible production of artificial intelligence (AI). This paper reviews recent research on algorithmic bias and proposes increased engagement of psychological and social science research to understand antecedents and consequences of algorithmic bias. Through the lens of the 3-D Dependable AI Framework, this article explores how social science disciplines, such as psychology, can contribute to identifying and mitigating bias at the Design, Develop, and Deploy stages of the AI life cycle. Finally, we propose future research directions to further address the complexities of algorithmic bias and its societal implications.
4. Brown O, Smith LGE, Davidson BI, Racek D, Joinson A. Online Signals of Extremist Mobilization. Pers Soc Psychol Bull 2024:1461672241266866. PMID: 39086154; DOI: 10.1177/01461672241266866.
Abstract
Psychological theories of mobilization tend to focus on explaining people's motivations for action, rather than mobilization ("activation") processes. To investigate the online behaviors associated with mobilization, we compared the online communications data of 26 people who subsequently mobilized to right-wing extremist action and 48 people who held similar extremist views but did not mobilize (N = 119,473 social media posts). In a three-part analysis, involving content analysis (Part 1), topic modeling (Part 2), and machine learning (Part 3), we showed that communicating ideological or hateful content was not related to mobilization, but rather mobilization was positively related to talking about violent action, operational planning, and logistics. Our findings imply that to explain mobilization to extremist action, rather than the motivations for action, theories of collective action should extend beyond how individuals express grievances and anger, to how they equip themselves with the "know-how" and capability to act.
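The Part 3 classification step could, in spirit, look like the following sketch: a tiny bag-of-words naive Bayes model separating invented "planning/logistics" posts from invented "ideological" posts. The data, vocabulary, and model choice are illustrative assumptions, not the study's actual method or corpus.

```python
# Illustrative sketch: multinomial naive Bayes (add-one smoothing) that
# separates posts about operational planning/logistics from purely
# ideological posts. All posts below are invented for illustration.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train_nb(docs, labels):
    """Count tokens per class; return counts, class frequencies, vocabulary."""
    counts = {0: Counter(), 1: Counter()}
    class_docs = Counter(labels)
    for doc, lab in zip(docs, labels):
        counts[lab].update(tokenize(doc))
    vocab = set(counts[0]) | set(counts[1])
    return counts, class_docs, vocab

def predict_nb(model, doc):
    """Return the class with the higher smoothed log-posterior."""
    counts, class_docs, vocab = model
    n = sum(class_docs.values())
    best, best_lp = None, -math.inf
    for lab in (0, 1):
        lp = math.log(class_docs[lab] / n)
        total = sum(counts[lab].values())
        for tok in tokenize(doc):
            lp += math.log((counts[lab][tok] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = lab, lp
    return best

posts = [
    "meet at the depot bring the van and supplies",   # planning (1)
    "who has the route map for saturday logistics",   # planning (1)
    "they are destroying our country and culture",    # ideology (0)
    "the elites betray us every single day",          # ideology (0)
]
labels = [1, 1, 0, 0]
model = train_nb(posts, labels)
print(predict_nb(model, "bring supplies and the map for the van"))
```

The paper's actual finding maps onto this toy setup: it is the planning/logistics vocabulary, not the ideological vocabulary, that carries the mobilization signal.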
5. Zainal NH, Newman MG. Which client with generalized anxiety disorder benefits from a mindfulness ecological momentary intervention versus a self-monitoring app? Developing a multivariable machine learning predictive model. J Anxiety Disord 2024;102:102825. PMID: 38245961; PMCID: PMC10922999; DOI: 10.1016/j.janxdis.2024.102825.
Abstract
Precision medicine methods (machine learning; ML) can identify which clients with generalized anxiety disorder (GAD) benefit from a mindfulness ecological momentary intervention (MEMI) vs. a self-monitoring app (SM). We used randomized controlled trial data of MEMI vs. SM for GAD (N = 110) and tested three ML models to predict one-month follow-up reliable improvement in GAD severity, perseverative cognitions (PC), trait mindfulness (TM), and executive function (EF). Eleven baseline predictors were tested regarding differential reliable change from MEMI vs. SM (age, sex, race, EF errors, inhibitory dyscontrol, set-shifting deficits, verbal fluency, working memory, GAD severity, TM, PC). The final top-five prescriptive predictor models of all outcomes performed well (AUC = .752-.886). The following variables predicted better outcomes from MEMI vs. SM: higher GAD severity predicted more GAD improvement but less EF improvement; elevated PC, inhibitory dyscontrol, and verbal dysfluency predicted better improvement in most outcomes; greater set-shifting and TM predicted stronger improvements in GAD symptoms and TM; and older age predicted more alleviation of GAD and PC symptoms. Women exhibited more enhancement in trait mindfulness and EF than men, and White individuals benefitted more than non-White individuals. PC, TM, EF, and sociodemographic data might help predictive models optimize intervention selection for GAD.
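The reported AUC range (.752 to .886) quantifies how well the prescriptive models discriminate improvers from non-improvers. A minimal sketch of how AUC is computed from predicted risks and observed outcomes, using the Mann-Whitney formulation on invented data:

```python
# Illustrative sketch: ROC AUC via the Mann-Whitney formulation.
# The scores and outcomes below are invented, not the study's data.
def auc(y_true, y_score):
    """Probability that a random positive is ranked above a random
    negative (ties count half)."""
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented predicted probabilities of reliable improvement vs. outcomes.
y_true = [1, 1, 1, 0, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
print(auc(y_true, y_score))  # 0.9375
```

An AUC of .5 is chance-level ranking and 1.0 is perfect separation, so the paper's .752-.886 range indicates moderate-to-strong discrimination.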
Affiliation(s)
- Nur Hani Zainal
- Harvard Medical School, Boston, MA, USA; National University of Singapore, Kent Ridge, Singapore.
6. Bellini V, Semeraro F, Montomoli J, Cascella M, Bignami E. Between human and AI: assessing the reliability of AI text detection tools. Curr Med Res Opin 2024;40:353-358. PMID: 38265047; DOI: 10.1080/03007995.2024.2310086.
Abstract
OBJECTIVE: Large language models (LLMs) such as ChatGPT-4 have raised critical questions regarding their distinguishability from human-generated content. In this research, we evaluated the effectiveness of online detection tools in identifying ChatGPT-4 vs. human-written text. METHODS: Two texts produced by ChatGPT-4 using differing prompts and one text created by a human author were assessed using the following online detection tools: GPTZero, ZeroGPT, Writer ACD, and Originality. RESULTS: The findings revealed notable variance in the capabilities of the detection tools. GPTZero and ZeroGPT produced inconsistent assessments of the texts' AI origin. Writer ACD predominantly identified texts as human-written, whereas Originality consistently recognized the AI-generated content in both samples from ChatGPT-4, highlighting its enhanced sensitivity to patterns characteristic of AI-generated text. CONCLUSION: The study demonstrates that while automatic detection tools may discern texts generated by ChatGPT-4, significant variability exists in their accuracy. There is an urgent need for more refined detection methodologies to ensure the authenticity and integrity of content, especially in scientific and academic research, and to prevent the misdetection of human-written content as AI-generated and vice versa.
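The comparison design can be illustrated by tabulating per-tool verdicts against the known origin of each text and scoring a simple accuracy tally. The verdict values below are invented for illustration (only the qualitative pattern follows the abstract); this is a generic scoring sketch, not the study's analysis.

```python
# Illustrative sketch: scoring detector verdicts against known text origins.
# Tool names are real; the verdicts below are invented for illustration.
texts = {  # text id -> true origin
    "gpt4_prompt_a": "ai",
    "gpt4_prompt_b": "ai",
    "human_author": "human",
}
verdicts = {  # invented, mirroring only the abstract's qualitative pattern
    "GPTZero":     {"gpt4_prompt_a": "ai",    "gpt4_prompt_b": "human", "human_author": "human"},
    "ZeroGPT":     {"gpt4_prompt_a": "human", "gpt4_prompt_b": "ai",    "human_author": "human"},
    "Writer ACD":  {"gpt4_prompt_a": "human", "gpt4_prompt_b": "human", "human_author": "human"},
    "Originality": {"gpt4_prompt_a": "ai",    "gpt4_prompt_b": "ai",    "human_author": "human"},
}

def accuracy(tool):
    """Fraction of texts whose verdict matches the known origin."""
    hits = sum(verdicts[tool][t] == origin for t, origin in texts.items())
    return hits / len(texts)

for tool in verdicts:
    print(f"{tool}: {accuracy(tool):.2f}")
```

With only three texts, as in the study, such accuracies are descriptive rather than statistically robust, which is one reason the authors call for more refined evaluation.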
Affiliation(s)
- Valentina Bellini
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Federico Semeraro
- Department of Anesthesia, Intensive Care and Prehospital Emergency, Maggiore Hospital Carlo Alberto Pizzardi, Bologna, Italy
- Jonathan Montomoli
- Department of Anesthesia and Intensive Care, Infermi Hospital, Romagna Local Health Authority, Rimini, Italy
- Marco Cascella
- Anesthesia and Pain Medicine, Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, Baronissi, Italy
- Elena Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
7. Timmons AC, Duong JB, Fiallo NS, Lee T, Vo HPQ, Ahle MW, Comer JS, Brewer LC, Frazier SL, Chaspari T. A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health. Perspect Psychol Sci 2023;18:1062-1096. PMID: 36490369; PMCID: PMC10250563; DOI: 10.1177/17456916221134490.
Abstract
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fair-aware AI in psychological science.
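One standard bias-assessment step surveyed in this literature is comparing a model's positive-prediction rates across demographic groups (demographic parity). A minimal sketch on invented data, not tied to any particular model from the article:

```python
# Illustrative sketch: demographic parity difference, a common group
# fairness metric. Predictions and group labels are invented.
def positive_rate(preds, groups, g):
    """Fraction of group g receiving a positive (1) prediction."""
    sel = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(sel) / len(sel)

def demographic_parity_diff(preds, groups):
    """Gap between the highest and lowest group positive rates;
    0 means parity, larger values mean greater disparity."""
    rates = {g: positive_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# 1 = model flags person as likely to benefit from a treatment referral.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(preds, groups))  # 0.5 (a: 0.75, b: 0.25)
```

Demographic parity is only one of several competing fairness criteria (equalized odds and calibration are others the fairness literature weighs against it), which is why the article frames bias assessment as a design decision rather than a single metric.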
Affiliation(s)
- Adela C. Timmons
- University of Texas at Austin Institute for Mental Health Research
- Colliga Apps Corporation
- LaPrincess C. Brewer
- Department of Cardiovascular Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota, United States
- Center for Health Equity and Community Engagement Research, Mayo Clinic, Rochester, Minnesota, United States
8. Kares F, König CJ, Bergs R, Protzel C, Langer M. Trust in hybrid human-automated decision-support. Int J Sel Assess 2023. DOI: 10.1111/ijsa.12423.
Affiliation(s)
- Felix Kares
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Richard Bergs
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Clea Protzel
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Markus Langer
- Fachbereich Psychologie, Philipps-Universität Marburg, Marburg, Germany
9. Hickman L, Herde CN, Lievens F, Tay L. Automatic scoring of speeded interpersonal assessment center exercises via machine learning: Initial psychometric evidence and practical guidelines. Int J Sel Assess 2023. DOI: 10.1111/ijsa.12418.
Affiliation(s)
- Louis Hickman
- Department of Psychology, Virginia Tech, Blacksburg, Virginia, USA; The Wharton School, University of Pennsylvania, USA
- Louis Tay
- Department of Psychology, Purdue University, West Lafayette, Indiana, USA
10. Call for Papers: "Digital Transformation and Psychological Assessment". Eur J Psychol Assess 2023. DOI: 10.1027/1015-5759/a000760.
11. Huggins-Manley AC, Booth BM, D'Mello SK. Toward Argument-Based Fairness with an Application to AI-Enhanced Educational Assessments. J Educ Meas 2022. DOI: 10.1111/jedm.12334.