Xu J, Xiao Y, Wang WH, Ning Y, Shenkman EA, Bian J, Wang F. Algorithmic fairness in computational medicine. EBioMedicine 2022;84:104250. [PMID: 36084616; PMCID: PMC9463525; DOI: 10.1016/j.ebiom.2022.104250]
[Received: 02/11/2022] [Revised: 08/02/2022] [Accepted: 08/12/2022] [Indexed: 02/08/2023]
Abstract
Machine learning models are increasingly adopted to support clinical decision-making. However, recent research has shown that machine learning techniques can produce biased decisions for people in different subgroups, with detrimental effects on the health and well-being of specific demographic groups such as vulnerable ethnic minorities. This problem, termed algorithmic bias, has been studied extensively in theoretical machine learning, but its impact on medicine and the methods for mitigating it remain topics of active discussion. This paper presents a comprehensive review of algorithmic fairness in the context of computational medicine, which aims to improve medicine with computational approaches. Specifically, we overview the different types of algorithmic bias, fairness quantification metrics, and bias mitigation methods, and summarize popular software libraries and tools for bias evaluation and mitigation, with the goal of providing reference and insights to researchers and practitioners in computational medicine.
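To make the idea of a fairness quantification metric concrete, here is a minimal sketch (not from the paper; the toy predictions and group labels are invented for illustration) of the demographic parity difference, one commonly used group fairness metric: the absolute gap in positive-decision rates between two subgroups.

```python
import numpy as np

# Hypothetical toy data (illustration only, not from the paper):
# model decisions and a binary protected attribute for 8 patients.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = favorable decision
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected subgroup label

# Positive-decision rate within each subgroup.
rate_0 = y_pred[group == 0].mean()  # rate for subgroup 0
rate_1 = y_pred[group == 1].mean()  # rate for subgroup 1

# Demographic parity difference: 0 indicates parity; larger values
# indicate a larger disparity between subgroups.
dp_diff = abs(rate_0 - rate_1)
print(dp_diff)
```

Libraries such as those surveyed in the review (e.g., Fairlearn, AIF360) package this and related metrics behind ready-made APIs, but the underlying computation is as simple as the rate comparison above.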