1
Massar K, Ten Hoor GA. Social Media as Input for Recruitment: Does Women's Relationship History Affect Candidate Evaluations? Psychol Rep 2025; 128:1187-1203. [PMID: 36848925] [PMCID: PMC11894867] [DOI: 10.1177/00332941231160065]
Abstract
We examine whether information about a female candidate's relationship history, obtained from social media profiles, affects evaluations of her suitability for a student union board position. Moreover, we investigate whether it is possible to mitigate any bias against women with multiple partners by providing information about the origins of prejudice. Across two studies, we utilized a 2 (relationship history: multiple vs. one partner) × 2 (mitigating information: explaining prejudice against promiscuous women vs. explaining prejudice against outgroups) experimental design. Participants were female students (Study 1: n = 209 American students; Study 2: n = 119 European students) who indicated whether they would hire the applicant for a job and then evaluated her. Overall, participants tended to evaluate the candidate with multiple partners less positively than the candidate with only one partner: they were less likely to hire her (Study 1), evaluated her less positively (Study 1), and considered her less of a fit with the organization (Studies 1 and 2). The results for providing additional information were not consistent. Our findings suggest that private social media information can influence applicant evaluations and hiring decisions; organizations should therefore be careful when using social media information in recruitment processes.
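For readers unfamiliar with this design, the analysis of a 2 × 2 between-subjects experiment like this typically comes down to a two-way ANOVA with an interaction term. Below is a minimal sketch on simulated data; the factor and variable names (history, info, hire_intent) and effect sizes are illustrative placeholders, not the authors' materials or results.

```python
# Illustrative two-way ANOVA for a 2 x 2 between-subjects design.
# Simulated data; factor levels and the outcome are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n_per_cell = 50

df = pd.DataFrame({
    # Factor 1: candidate's relationship history
    "history": np.repeat(["multiple", "one"], 2 * n_per_cell),
    # Factor 2: type of mitigating information shown to participants
    "info": np.tile(np.repeat(["prejudice", "outgroup"], n_per_cell), 2),
})
# Simulate a hiring-likelihood rating with a main effect of history
base = np.where(df["history"] == "multiple", 4.2, 5.1)
df["hire_intent"] = base + rng.normal(0.0, 1.0, len(df))

# Main effects plus the history x info interaction
model = ols("hire_intent ~ C(history) * C(info)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```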
Affiliation(s)
- Karlijn Massar
- Department of Work and Social Psychology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Gill A. Ten Hoor
- Department of Work and Social Psychology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
2
Musslick S, Bartlett LK, Chandramouli SH, Dubova M, Gobet F, Griffiths TL, Hullman J, King RD, Kutz JN, Lucas CG, Mahesh S, Pestilli F, Sloman SJ, Holmes WR. Automating the practice of science: Opportunities, challenges, and implications. Proc Natl Acad Sci U S A 2025; 122:e2401238121. [PMID: 39869810] [PMCID: PMC11804648] [DOI: 10.1073/pnas.2401238121]
Abstract
Automation has transformed various aspects of human civilization, revolutionizing industries and streamlining processes. In the domain of scientific inquiry, automated approaches have emerged as powerful tools, holding promise for accelerating discovery, enhancing reproducibility, and overcoming traditional impediments to scientific progress. This article evaluates the scope of automation within scientific practice and assesses recent approaches. Furthermore, it discusses different perspectives on the following questions: Where do the greatest opportunities lie for automation in scientific practice? What are the current bottlenecks of automating scientific practice? And what are the significant ethical and practical consequences of automating scientific practice? By discussing the motivations behind automated science, analyzing the hurdles encountered, and examining its implications, this article invites researchers, policymakers, and stakeholders to navigate the rapidly evolving frontier of automated scientific practice.
Affiliation(s)
- Sebastian Musslick
- Institute of Cognitive Science, Osnabrück University, 49090 Osnabrück, Germany
- Department of Cognitive and Psychological Sciences, Brown University, Providence, RI 02912
- Laura K. Bartlett
- Centre for Philosophy of Natural and Social Science, The London School of Economics and Political Science, London WC2A 2AE, United Kingdom
- Suyog H. Chandramouli
- Department of Information and Communications Engineering, Aalto University, FI-00076 Espoo, Finland
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2S4, Canada
- Department of Psychology, Princeton University, Princeton, NJ 08544
- Marina Dubova
- Cognitive Science Program, Indiana University, Bloomington, IN 47405
- Fernand Gobet
- Centre for Philosophy of Natural and Social Science, The London School of Economics and Political Science, London WC2A 2AE, United Kingdom
- School of Psychology, University of Roehampton, London SW15 4JD, United Kingdom
- Thomas L. Griffiths
- Department of Psychology, Princeton University, Princeton, NJ 08544
- Department of Computer Science, Princeton University, Princeton, NJ 08544
- Jessica Hullman
- Department of Computer Science, Northwestern University, Evanston, IL 60208
- Ross D. King
- Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge CB3 0AS, United Kingdom
- Department of Computer Science and Engineering, Chalmers University of Technology, Gothenburg 412 96, Sweden
- J. Nathan Kutz
- Department of Applied Mathematics and Electrical and Computer Engineering, University of Washington, Seattle, WA 98195
- Christopher G. Lucas
- School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, United Kingdom
- Suhas Mahesh
- Department of Materials Science and Engineering, University of Toronto, Toronto, ON M5S 3E4, Canada
- Franco Pestilli
- Department of Psychology, The University of Texas, Austin, TX
- Department of Neuroscience, The University of Texas, Austin, TX
- Sabina J. Sloman
- Department of Computer Science, University of Manchester, Manchester M13 9PL, United Kingdom
3
Peterson RA, McGrath M, Cavanaugh JE. Can a Transparent Machine Learning Algorithm Predict Better than Its Black Box Counterparts? A Benchmarking Study Using 110 Data Sets. Entropy (Basel) 2024; 26:746. [PMID: 39330080] [PMCID: PMC11431724] [DOI: 10.3390/e26090746]
Abstract
We developed a novel machine learning (ML) algorithm with the goal of producing transparent models (i.e., understandable by humans) while also flexibly accounting for nonlinearity and interactions. Our method is based on ranked sparsity, and it allows for flexibility and user control in varying the shade of the opacity of black box machine learning methods. The main tenet of ranked sparsity is that an algorithm should a priori be more skeptical of higher-order polynomials and interactions than of main effects; hence, the inclusion of these more complex terms should require a higher level of evidence. In this work, we put our new ranked sparsity algorithm (as implemented in the open-source R package sparseR) to the test in a predictive model "bakeoff" (i.e., a benchmarking study of ML algorithms applied "out of the box", that is, with no special tuning). Algorithms were trained on a large set of simulated and real-world data sets from the Penn Machine Learning Benchmarks database, addressing both regression and binary classification problems. We evaluated the extent to which our human-centered algorithm can attain predictive accuracy that rivals popular black box approaches such as neural networks, random forests, and support vector machines, while also producing more interpretable models. Using out-of-bag error as a meta-outcome, we describe the properties of data sets in which human-centered approaches can perform as well as or better than black box approaches. We found that interpretable approaches predicted optimally, or within 5% of the optimal method, in most real-world data sets. We provide a more in-depth comparison of the performance of random forests and interpretable methods for several case studies, including exemplars in which the algorithms performed similarly and several in which interpretable methods underperformed. This work provides a strong rationale for including human-centered transparent algorithms such as ours in predictive modeling applications.
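The core ranked-sparsity idea — requiring more evidence for interactions and polynomial terms than for main effects — can be imitated with a weighted lasso in which higher-order columns carry larger penalty factors. The sketch below is a loose Python illustration under that assumption; the authors' actual implementation is the R package sparseR, and the penalty weights here are arbitrary choices for exposition.

```python
# Sketch of a ranked-sparsity-style penalty: interactions and squared terms
# are penalized more heavily than main effects. Illustrative only; the
# authors' method is implemented in the R package sparseR.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, 200)

poly = PolynomialFeatures(degree=2, include_bias=False)
Xp = StandardScaler().fit_transform(poly.fit_transform(X))

# Penalty weight per column: 1 for main effects, larger for degree-2 terms.
# Dividing a column by its weight before an ordinary lasso is equivalent
# to applying a weighted L1 penalty to that column's coefficient.
degrees = poly.powers_.sum(axis=1)          # total polynomial degree per term
weights = np.where(degrees > 1, 3.0, 1.0)   # be more skeptical of complexity
Xw = Xp / weights

fit = Lasso(alpha=0.05).fit(Xw, y)
coefs = fit.coef_ / weights                 # map back to the weighted scale
for name, c in zip(poly.get_feature_names_out(), coefs):
    if abs(c) > 1e-8:
        print(f"{name}: {c:.3f}")
```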
Affiliation(s)
- Ryan A Peterson
- Department of Biostatistics & Informatics, Colorado School of Public Health, University of Colorado, Anschutz Medical Campus, 13001 E. 17th Pl, Aurora, CO 80045, USA
- Max McGrath
- Department of Biostatistics & Informatics, Colorado School of Public Health, University of Colorado, Anschutz Medical Campus, 13001 E. 17th Pl, Aurora, CO 80045, USA
- Joseph E Cavanaugh
- Department of Biostatistics, College of Public Health, University of Iowa, 145 N. Riverside Dr., Iowa City, IA 52245, USA
4
Hafner L, Peifer TP, Hafner FS. Equal accuracy for Andrew and Abubakar - detecting and mitigating bias in name-ethnicity classification algorithms. AI & Society 2023:1-25. [PMID: 36789242] [PMCID: PMC9910274] [DOI: 10.1007/s00146-022-01619-4]
Abstract
Uncovering the world's ethnic inequalities is hampered by a lack of ethnicity-annotated datasets. Name-ethnicity classifiers (NECs) can help, as they are able to infer people's ethnicities from their names. However, since the latest generation of NECs relies on machine learning and artificial intelligence (AI), they may suffer from the same racist and sexist biases found in many AIs. Therefore, this paper offers an algorithmic fairness audit of three NECs. It finds that the UK-Census-trained EthnicityEstimator displays large accuracy biases with regard to ethnicity, but relatively smaller ones across gender and age groups. In contrast, the Twitter-trained NamePrism and the Wikipedia-trained Ethnicolr are more balanced across ethnicities, but less so across gender and age. We relate these biases to global power structures manifested in naming conventions and in the NECs' input distribution of names. To mitigate the uncovered biases, we program a novel NEC, N2E, using fairness-aware AI techniques. We make N2E freely available at www.name-to-ethnicity.com. Supplementary Information: The online version contains supplementary material available at 10.1007/s00146-022-01619-4.
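For readers curious about the mechanics being audited here, NECs of this kind typically combine character n-gram features of a name with a classifier. The toy sketch below illustrates only that general pipeline; it is not the authors' N2E model, and the names and group labels are fabricated placeholders for exposition.

```python
# Toy illustration of the general name-classifier pipeline: character
# n-gram features feeding a linear model. NOT the authors' N2E model;
# the names and "A"/"B" labels are placeholders for exposition only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

names = ["andrew", "abigail", "abubakar", "aisha", "oliver", "fatima"]
labels = ["A", "A", "B", "B", "A", "B"]  # placeholder group labels

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)),  # char n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(names, labels)
print(clf.predict(["amara"]))  # output depends entirely on the toy data
```

A fairness audit like the paper's would then compare the classifier's accuracy separately within each subgroup of a held-out test set, rather than reporting a single overall accuracy.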
Affiliation(s)
- Lena Hafner
- Department of Politics and International Studies, University of Cambridge, Cambridge, UK
- Franziska Sofia Hafner
- Department of Social and Public Policy and Computer Science, University of Glasgow, Glasgow, UK
5
What Does Information Science Offer for Data Science Research? A Review of Data and Information Ethics Literature. Journal of Data and Information Science 2022. [DOI: 10.2478/jdis-2022-0018]
Abstract
This paper reviews literature pertaining to the development of data science as a discipline, current issues with data bias and ethics, and the role that the discipline of information science may play in addressing these concerns. Information science research and researchers have much to offer data science, owing to their background as transdisciplinary scholars who apply human-centered and social-behavioral perspectives to issues within natural science disciplines. Information science researchers have already contributed to a humanistic approach to data ethics within the literature, and the emphasis on data science within information schools all but ensures that this literature will continue to grow in the coming decades. This review article serves as a reference for the history, current progress, and potential future directions of data ethics research within the corpus of information science literature.
6
Hunkenschroer AL, Kriebitz A. Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI and Ethics 2022; 3:199-213. [PMID: 35909984] [PMCID: PMC9309597] [DOI: 10.1007/s43681-022-00166-4]
Abstract
The use of artificial intelligence (AI) technologies in organizations' recruiting and selection procedures has become commonplace in business practice; accordingly, research on AI recruiting has increased substantially in recent years. However, although various articles have highlighted the potential opportunities and ethical risks of AI recruiting, the topic has not yet been normatively assessed. We aim to fill this gap by providing an ethical analysis of AI recruiting from a human rights perspective. In doing so, we elaborate on the theoretical implications of human rights for corporate use of AI-driven hiring solutions. We analyze whether AI hiring practices inherently conflict with the concepts of validity, autonomy, nondiscrimination, privacy, and transparency, which represent the main human rights relevant in this context. Concluding that these concepts are not at odds, we then use existing legal and ethical implications to determine organizations' responsibility to enforce and realize human rights standards in the context of AI recruiting.
Affiliation(s)
- Anna Lena Hunkenschroer
- Chair of Business Ethics, Technical University of Munich, Arcisstr. 21, 80333 Munich, Germany
- Alexander Kriebitz
- Chair of Business Ethics, Technical University of Munich, Arcisstr. 21, 80333 Munich, Germany
7
Will P, Krpan D, Lordan G. People versus machines: introducing the HIRE framework. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10193-6]
Abstract
The use of Artificial Intelligence (AI) in the recruitment process is becoming a more common way for organisations to hire new employees. Despite this, there is little consensus on whether AI should have widespread use in the hiring process, and in which contexts. To bring more clarity to research findings, we propose the HIRE (Human, (Artificial) Intelligence, Recruitment, Evaluation) framework, with the primary aim of evaluating studies that investigate how AI can be integrated into the recruitment process and of gauging whether AI is an adequate, better, or worse substitute for human recruiters. We illustrate the simplicity of this framework by conducting a systematic literature review of the empirical studies assessing AI in the recruitment process, with 22 final papers included. The review shows that AI is equal or superior to human recruiters when it comes to efficiency and performance. We also find that AI is mostly better than humans at improving diversity. Finally, we demonstrate that candidates and recruiters perceive AI to be worse than humans. Overall, we conclude from the evidence that AI is equal or superior to humans when utilised in the hiring process, although humans hold a belief in their own superiority. We hope that future authors adopt the HIRE framework when conducting research in this area to allow for easier comparability, and ideally state the HIRE framework outcome - AI being better, equal, worse, or unclear - in the abstract.
8
Howcroft D, Taylor P. Automation and the future of work: A social shaping of technology approach. New Technology, Work and Employment 2022. [DOI: 10.1111/ntwe.12240]
Affiliation(s)
- Debra Howcroft
- People, Management and Organisation Division, Alliance Manchester Business School, University of Manchester, Manchester, UK
- Phil Taylor
- Department of Work, Employment and Organisation, University of Strathclyde, Glasgow, Scotland
9
Köchling A, Wehner MC, Warkocz J. Can I show my skills? Affective responses to artificial intelligence in the recruitment process. Review of Managerial Science 2022. [DOI: 10.1007/s11846-021-00514-4]
Abstract
Companies increasingly use artificial intelligence (AI) and algorithmic decision-making (ADM) in their recruitment and selection processes for cost and efficiency reasons. However, there are concerns about applicants' affective responses to AI systems in recruitment, and knowledge about affective responses to the selection process is still limited, especially when AI supports different selection process stages (i.e., preselection, telephone interview, and video interview). Drawing on the affective response model, we propose that affective responses (i.e., opportunity to perform, emotional creepiness) mediate the relationships between an increasingly AI-based selection process and organizational attractiveness. Using a scenario-based between-subject design with German employees (N = 160), we investigate whether and how AI-support during a complete recruitment process diminishes the opportunity to perform and increases emotional creepiness during the process. Moreover, we examine the influence of opportunity to perform and emotional creepiness on organizational attractiveness. We found that AI-support at later stages of the selection process (i.e., telephone and video interview) decreased the opportunity to perform and increased emotional creepiness. In turn, opportunity to perform and emotional creepiness mediated the association between AI-support in telephone/video interviews and organizational attractiveness. However, we did not find negative affective responses to AI-support at the earlier stage of the selection process (i.e., during preselection). As we offer evidence of possible adverse reactions to the use of AI in selection processes, this study provides important practical and theoretical implications.
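The mediation structure tested here (AI-support affecting an affective response, which in turn affects organizational attractiveness) is commonly estimated with a product-of-coefficients approach plus a bootstrap confidence interval. A minimal sketch on simulated data follows; the variable names and effect sizes are illustrative assumptions, not the authors' measures or results.

```python
# Sketch of a simple mediation test (product of coefficients with a
# percentile bootstrap CI): AI support -> opportunity to perform ->
# organizational attractiveness. Simulated data; names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 160
ai_support = rng.integers(0, 2, n)                        # 0 = human, 1 = AI
opportunity = 5 - 0.8 * ai_support + rng.normal(0, 1, n)  # mediator
attract = 3 + 0.6 * opportunity + rng.normal(0, 1, n)     # outcome
df = pd.DataFrame(dict(ai=ai_support, opp=opportunity, attract=attract))

a = smf.ols("opp ~ ai", df).fit().params["ai"]             # path a
b = smf.ols("attract ~ ai + opp", df).fit().params["opp"]  # path b
print("indirect effect a*b:", a * b)

# Percentile bootstrap for the indirect effect
boot = []
for _ in range(2000):
    s = df.sample(n, replace=True)
    a_s = smf.ols("opp ~ ai", s).fit().params["ai"]
    b_s = smf.ols("attract ~ ai + opp", s).fit().params["opp"]
    boot.append(a_s * b_s)
print("95% CI:", np.percentile(boot, [2.5, 97.5]))
```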
10
Dolata M, Feuerriegel S, Schwabe G. A sociotechnical view of algorithmic fairness. Information Systems Journal 2021. [DOI: 10.1111/isj.12370]
Affiliation(s)
- Mateusz Dolata
- Department of Informatics, University of Zurich, Zurich, Switzerland
- Stefan Feuerriegel
- Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- LMU Munich School of Management, LMU Munich, Munich, Germany
- Gerhard Schwabe
- Department of Informatics, University of Zurich, Zurich, Switzerland
11
Tancev G, Pascale C. The Relocation Problem of Field Calibrated Low-Cost Sensor Systems in Air Quality Monitoring: A Sampling Bias. Sensors (Basel) 2020; 20:6198. [PMID: 33143233] [PMCID: PMC7662848] [DOI: 10.3390/s20216198]
Abstract
This publication examines the deteriorated performance of field-calibrated low-cost sensor systems after spatial and temporal relocation, which is often reported for air quality monitoring devices that use machine learning models as part of their software to compensate for cross-sensitivities or interferences with environmental parameters. The cause of this relocation problem and its relationship to the chosen algorithm are elucidated using published experimental data in combination with techniques from data science. The origin is traced back to insufficient sampling of the data used for calibration, followed by the incorporation of bias into the models. Biases often stem from non-representative data and are a common problem in machine learning and, more generally, in artificial intelligence, and as such a rising concern. Finally, bias is believed to be partly reducible in this specific application by using balanced data sets generated in well-controlled laboratory experiments, although this is not trivial given the need for infrastructure and professional competence.
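The relocation problem can be reproduced in miniature: fit a calibration model on data from a narrow range of environmental conditions, then evaluate it after the covariate distribution shifts. The sketch below uses synthetic sensor data and an assumed temperature cross-sensitivity; it illustrates the sampling-bias mechanism only, not the paper's actual sensors or models.

```python
# Minimal demonstration of the relocation problem: a calibration model
# trained under one site's conditions degrades when the covariate
# distribution shifts. Synthetic data; not the paper's sensors or models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)

def sensor_response(true_conc, temp):
    # Raw signal with a temperature cross-sensitivity to be compensated
    return true_conc + 0.5 * (temp - 20) + rng.normal(0, 0.5, temp.shape)

# Calibration site: temperatures sampled from a narrow range
temp_a = rng.uniform(15, 25, 1000)
conc_a = rng.uniform(0, 50, 1000)
X_a = np.column_stack([sensor_response(conc_a, temp_a), temp_a])

# Relocation site: shifted temperature range, never seen during training
temp_b = rng.uniform(25, 35, 1000)
conc_b = rng.uniform(0, 50, 1000)
X_b = np.column_stack([sensor_response(conc_b, temp_b), temp_b])

# Hold out part of the calibration data for a fair same-site comparison
X_train, X_test = X_a[:800], X_a[800:]
y_train, y_test = conc_a[:800], conc_a[800:]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("MAE, held-out calibration-site data:", mean_absolute_error(y_test, model.predict(X_test)))
print("MAE after relocation (shifted temps):", mean_absolute_error(conc_b, model.predict(X_b)))
```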
Affiliation(s)
- Georgi Tancev
- Swiss Federal Institute of Metrology, 3084 Bern, Switzerland