1
Assié G, Allassonnière S. Artificial Intelligence in Endocrinology: On Track Toward Great Opportunities. J Clin Endocrinol Metab 2024; 109:e1462-e1467. PMID: 38466742. DOI: 10.1210/clinem/dgae154.
Abstract
In endocrinology, the types and quantity of digital data are increasing rapidly. Computing capabilities are also developing at an incredible rate, as illustrated by the recent expansion in the use of popular generative artificial intelligence (AI) applications. Numerous diagnostic and therapeutic devices using AI have already entered routine endocrine practice, and developments in this field are expected to continue to accelerate. Endocrinologists will need to be supported in managing AI applications. Beyond technological training, interdisciplinary vision is needed to encompass the ethical and legal aspects of AI, to manage the profound impact of AI on patient/provider relationships, and to maintain an optimal balance between human input and AI in endocrinology.
Affiliation(s)
- Guillaume Assié
  - Université Paris Cité, CNRS UMR8104, INSERM U1016, Institut Cochin, F-75014 Paris, France
  - Service d'endocrinologie, Center for Rare Adrenal Diseases, Assistance Publique-Hôpitaux de Paris, Hôpital Cochin, 75014 Paris, France
- Stéphanie Allassonnière
  - Université Paris Cité, UFR Medecine, 75006 Paris, France
  - HeKA INSERM, INRIA Paris, Centre de Recherche des Cordeliers Paris, Université Paris Cité, 75006 Paris, France
2
Wang HE, Weiner JP, Saria S, Kharrazi H. Evaluating Algorithmic Bias in 30-Day Hospital Readmission Models: Retrospective Analysis. J Med Internet Res 2024; 26:e47125. PMID: 38422347. PMCID: PMC11066744. DOI: 10.2196/47125.
Abstract
BACKGROUND The adoption of predictive algorithms in health care comes with the potential for algorithmic bias, which could exacerbate existing disparities. Fairness metrics have been proposed to measure algorithmic bias, but their application to real-world tasks is limited. OBJECTIVE This study aims to evaluate the algorithmic bias associated with the application of common 30-day hospital readmission models and assess the usefulness and interpretability of selected fairness metrics. METHODS We used 10.6 million adult inpatient discharges from Maryland and Florida from 2016 to 2019 in this retrospective study. Models predicting 30-day hospital readmissions were evaluated: LACE Index, modified HOSPITAL score, and modified Centers for Medicare & Medicaid Services (CMS) readmission measure, which were applied as-is (using existing coefficients) and retrained (recalibrated with 50% of the data). Predictive performances and bias measures were evaluated for all, between Black and White populations, and between low- and other-income groups. Bias measures included the parity of false negative rate (FNR), false positive rate (FPR), 0-1 loss, and generalized entropy index. Racial bias represented by FNR and FPR differences was stratified to explore shifts in algorithmic bias in different populations. RESULTS The retrained CMS model demonstrated the best predictive performance (area under the curve: 0.74 in Maryland and 0.68-0.70 in Florida), and the modified HOSPITAL score demonstrated the best calibration (Brier score: 0.16-0.19 in Maryland and 0.19-0.21 in Florida). Calibration was better in White (compared to Black) populations and other-income (compared to low-income) groups, and the area under the curve was higher or similar in the Black (compared to White) populations. The retrained CMS and modified HOSPITAL score had the lowest racial and income bias in Maryland. 
In Florida, both of these models had the lowest income bias overall, and the modified HOSPITAL score showed the lowest racial bias. In both states, the White and higher-income populations showed a higher FNR, while the Black and low-income populations showed a higher FPR and a higher 0-1 loss. When stratified by hospital and population composition, these models demonstrated heterogeneous algorithmic bias across contexts and populations. CONCLUSIONS Caution must be taken when interpreting fairness measures at face value. A higher FNR or FPR could reflect missed opportunities or wasted resources, but these measures could also reflect health care use patterns and gaps in care. Relying solely on statistical notions of bias could obscure or underplay the causes of health disparity. Imperfect health data, analytic frameworks, and the underlying health systems must be carefully considered. Fairness measures can serve as a useful routine assessment to detect disparate model performance but are insufficient to inform mechanisms or policy changes. Nevertheless, such an assessment is an important first step toward data-driven improvement to address existing health disparities.
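The group-parity measures this study evaluates are straightforward to reproduce. A minimal sketch of per-group FNR, FPR, and 0-1 loss, plus the generalized entropy index (using the common benefit convention b_i = y_pred - y_true + 1), is below; the arrays are hypothetical and are not the study's data:

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group (FNR, FPR, 0-1 loss) for a binary predictor.

    Parity is then the difference in a given rate between two groups.
    """
    rates = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        fn = np.sum((yt == 1) & (yp == 0))          # missed positives
        fp = np.sum((yt == 0) & (yp == 1))          # false alarms
        fnr = fn / max(np.sum(yt == 1), 1)
        fpr = fp / max(np.sum(yt == 0), 1)
        loss = np.mean(yt != yp)                    # 0-1 loss
        rates[g] = (fnr, fpr, loss)
    return rates

def generalized_entropy_index(benefits, alpha=2):
    """Generalized entropy index over per-individual benefits b_i
    (e.g., b_i = y_pred_i - y_true_i + 1); 0 means perfect equality."""
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return np.mean((b / mu) ** alpha - 1) / (alpha * (alpha - 1))
```

For example, `group_rates(y_true, y_pred, race)` returns a dict whose entries can be differenced to obtain the FNR and FPR parity gaps reported in the study.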
Affiliation(s)
- H Echo Wang
  - Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States
- Jonathan P Weiner
  - Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States
  - Johns Hopkins Center for Population Health Information Technology, Baltimore, MD, United States
- Suchi Saria
  - Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
- Hadi Kharrazi
  - Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States
  - Johns Hopkins Center for Population Health Information Technology, Baltimore, MD, United States
3
Green BL, Murphy A, Robinson E. Accelerating health disparities research with artificial intelligence. Front Digit Health 2024; 6:1330160. PMID: 38322109. PMCID: PMC10844447. DOI: 10.3389/fdgth.2024.1330160.
Affiliation(s)
- B. Lee Green
  - Department of Health Outcomes and Behavior, Moffitt Cancer Center, Tampa, FL, United States
- Anastasia Murphy
  - Department of Health Outcomes and Behavior, Moffitt Cancer Center, Tampa, FL, United States
- Edmondo Robinson
  - Center for Digital Health, Moffitt Cancer Center, Tampa, FL, United States
4
Mei Z, Zheng D, Ge M. Informative Artifacts in AI-Assisted Care. N Engl J Med 2023; 389. PMID: 38048205. DOI: 10.1056/NEJMc2311525.
Affiliation(s)
- Zubing Mei
  - Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai, China
- De Zheng
  - Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Maojun Ge
  - Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai, China
5
Celeste C, Ming D, Broce J, Ojo DP, Drobina E, Louis-Jacques AF, Gilbert JE, Fang R, Parker IK. Ethnic disparity in diagnosing asymptomatic bacterial vaginosis using machine learning. NPJ Digit Med 2023; 6:211. PMID: 37978250. PMCID: PMC10656445. DOI: 10.1038/s41746-023-00953-1.
Abstract
While machine learning (ML) has shown great promise in medical diagnostics, a major challenge is that ML models do not always perform equally well across ethnic groups. This is alarming for women's health, as existing health disparities already vary by ethnicity. Bacterial vaginosis (BV) is a common vaginal syndrome among women of reproductive age with clear diagnostic differences among ethnic groups. Here, we investigate the ability of four ML algorithms to diagnose BV. We determine the fairness of asymptomatic BV prediction using 16S rRNA sequencing data from Asian, Black, Hispanic, and white women. General-purpose ML model performance varies by ethnicity. When evaluating false positive and false negative rates, we find that models perform least effectively for Hispanic and Asian women. Models generally perform best for white women and worst for Asian women. These findings demonstrate a need for improved methodologies to increase model fairness in predicting BV.
Affiliation(s)
- Cameron Celeste
  - J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, 32610, USA
- Dion Ming
  - J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, 32610, USA
- Justin Broce
  - Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32611, USA
- Diandra P Ojo
  - Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32611, USA
- Emma Drobina
  - Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32611, USA
- Adetola F Louis-Jacques
  - Department of Obstetrics and Gynecology, College of Medicine, University of Florida, Gainesville, FL, 32611, USA
- Juan E Gilbert
  - Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32611, USA
- Ruogu Fang
  - J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, 32610, USA
  - Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32611, USA
  - Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
  - Department of Radiology, University of Florida, Gainesville, FL, 32611, USA
- Ivana K Parker
  - J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, 32610, USA
6
McCradden MD, Joshi S, Anderson JA, London AJ. A normative framework for artificial intelligence as a sociotechnical system in healthcare. Patterns (N Y) 2023; 4:100864. PMID: 38035190. PMCID: PMC10682751. DOI: 10.1016/j.patter.2023.100864.
Abstract
Artificial intelligence (AI) tools are of great interest to healthcare organizations for their potential to improve patient care, yet their translation into clinical settings remains inconsistent. One of the reasons for this gap is that good technical performance does not inevitably result in patient benefit. We advocate for a conceptual shift wherein AI tools are seen as components of an intervention ensemble. The intervention ensemble describes the constellation of practices that, together, bring about benefit to patients or health systems. Shifting from a narrow focus on the tool itself toward the intervention ensemble prioritizes a "sociotechnical" vision for translation of AI that values all components of use that support beneficial patient outcomes. The intervention ensemble approach can be used for regulation, institutional oversight, and for AI adopters to responsibly and ethically appraise, evaluate, and use AI tools.
Affiliation(s)
- Melissa D. McCradden
  - Department of Bioethics, The Hospital for Sick Children, Toronto, ON, Canada
  - Genetics & Genome Biology Research Program, Peter Gilgan Center for Research & Learning, Toronto, ON, Canada
  - Division of Clinical & Public Health, Dalla Lana School of Public Health, Toronto, ON, Canada
- Shalmali Joshi
  - Department of Biomedical Informatics, Department of Computer Science (Affiliate), Data Science Institute, Columbia University, New York, NY, USA
- James A. Anderson
  - Department of Bioethics, The Hospital for Sick Children, Toronto, ON, Canada
  - Institute for Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Alex John London
  - Department of Philosophy and Center for Ethics and Policy, Carnegie Mellon University, Pittsburgh, PA, USA
7
Arora A, Alderman JE, Palmer J, Ganapathi S, Laws E, McCradden MD, Oakden-Rayner L, Pfohl SR, Ghassemi M, McKay F, Treanor D, Rostamzadeh N, Mateen B, Gath J, Adebajo AO, Kuku S, Matin R, Heller K, Sapey E, Sebire NJ, Cole-Lewis H, Calvert M, Denniston A, Liu X. The value of standards for health datasets in artificial intelligence-based applications. Nat Med 2023; 29:2929-2938. PMID: 37884627. PMCID: PMC10667100. DOI: 10.1038/s41591-023-02608-w.
Abstract
Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access. This study aims to explore existing standards, frameworks and best practices for ensuring adequate data diversity in health datasets. Exploring the body of existing literature and expert views is an important step towards the development of consensus-based guidelines. The study comprises two parts: a systematic review of existing standards, frameworks and best practices for healthcare datasets; and a survey and thematic analysis of stakeholder views of bias, health equity and best practices for artificial intelligence as a medical device. We found that the need for dataset diversity was well described in literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented practically. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).
Affiliation(s)
- Anmol Arora
  - School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Joseph E Alderman
  - Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
  - University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Joanne Palmer
  - University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Elinor Laws
  - Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
  - University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Melissa D McCradden
  - Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
  - Genetics and Genome Biology, Peter Gilgan Centre for Research and Learning, Toronto, Ontario, Canada
  - Dalla Lana School of Public Health, Toronto, Ontario, Canada
- Lauren Oakden-Rayner
  - The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Marzyeh Ghassemi
  - Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Institute for Medical Engineering & Science, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Vector Institute, Toronto, Ontario, Canada
- Francis McKay
  - The Ethox Centre and the Wellcome Centre for Ethics and Humanities, Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Darren Treanor
  - Leeds Teaching Hospitals NHS Trust, Leeds, UK
  - University of Leeds, Leeds, UK
  - Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
  - Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Bilal Mateen
  - Institute for Health Informatics, University College London, London, UK
  - Wellcome Trust, London, UK
- Jacqui Gath
  - Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Adewole O Adebajo
  - Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Rubeta Matin
  - Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Elizabeth Sapey
  - Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
  - University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
  - PIONEER, HDR UK Hub in Acute Care, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Neil J Sebire
  - National Institute for Health and Care Research, Great Ormond Street Hospital Biomedical Research Centre, London, UK
  - Great Ormond Street Institute of Child Health, University Hospital London, London, UK
- Melanie Calvert
  - National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
  - Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
  - Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
  - National Institute for Health and Care Research Applied Research Collaboration West Midlands, University of Birmingham, Birmingham, UK
  - National Institute for Health and Care Research Birmingham-Oxford Blood and Transplant Research Unit in Precision Transplant and Cellular Therapeutics, University of Birmingham, Birmingham, UK
  - DEMAND Hub, University of Birmingham, Birmingham, UK
  - UK SPINE, University of Birmingham, Birmingham, UK
- Alastair Denniston
  - Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
  - University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
  - Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
  - National Institute for Health and Care Research Biomedical Research Centre, Moorfields Eye Hospital/University College London, London, UK
- Xiaoxuan Liu
  - Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
  - University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
8
Banja J, Gichoya JW, Martinez-Martin N, Waller LA, Clifford GD. Fairness as an afterthought: An American perspective on fairness in model developer-clinician user collaborations. PLOS Digit Health 2023; 2:e0000386. PMID: 37983258. PMCID: PMC10659157. DOI: 10.1371/journal.pdig.0000386.
Abstract
Numerous ethics guidelines have been handed down over the last few years on the ethical applications of machine learning models. Virtually every one of them mentions the importance of "fairness" in the development and use of these models. Unfortunately, though, these ethics documents omit providing a consensually adopted definition or characterization of fairness. As one group of authors observed, these documents treat fairness as an "afterthought" whose importance is undeniable but whose essence seems strikingly elusive. In this essay, which offers a distinctly American treatment of "fairness," we comment on a number of fairness formulations and on qualitative or statistical methods that have been encouraged to achieve fairness. We argue that none of them, at least from an American moral perspective, provides a one-size-fits-all definition of or methodology for securing fairness that could inform or standardize fairness over the universe of use cases in which machine learning is applied. Instead, we argue that because understandings and applications of fairness reflect a vast range of use contexts, model developers and clinician users will need to engage in thoughtful collaborations that examine how fairness should be conceived and operationalized in the use case at issue. Part II of this paper illustrates key moments in these collaborations, especially when inter- and intragroup disagreement occurs among model developer and clinician user groups over whether a model is fair or unfair. We conclude by noting that these collaborations will likely occur over the lifetime of a model if its claim to fairness is to advance beyond "afterthought" status.
Affiliation(s)
- John Banja
  - Center for Ethics, Emory University, Atlanta, Georgia, United States of America
- Judy Wawira Gichoya
  - Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia, United States of America
- Nicole Martinez-Martin
  - Stanford University Center for Biomedical Ethics, Stanford University, Stanford, California, United States of America
- Lance A. Waller
  - Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, United States of America
- Gari D. Clifford
  - Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia, United States of America
  - Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
9
Ramezani M, Takian A, Bakhtiari A, Rabiee HR, Ghazanfari S, Sazgarnejad S. Research agenda for using artificial intelligence in health governance: interpretive scoping review and framework. BioData Min 2023; 16:31. PMID: 37904172. PMCID: PMC10617108. DOI: 10.1186/s13040-023-00346-w.
Abstract
BACKGROUND The governance of health systems is complex in nature due to several intertwined and multi-dimensional factors contributing to it. Recent challenges of health systems reflect the need for innovative approaches that can minimize adverse consequences of policies. Hence, there is compelling interest in a distinct outlook on the health ecosystem using artificial intelligence (AI). Therefore, this study aimed to investigate the roles of AI and its applications in health system governance through an interpretive scoping review of current evidence. METHOD This study intended to offer a research agenda and framework for the applications of AI in health systems governance. To capture evidence focused on the application of AI in health governance from different perspectives, we searched the published literature from 2000 to 2023 through the PubMed, Scopus, and Web of Science databases. RESULTS Our findings showed that integrating AI capabilities into health systems governance has the potential to influence three cardinal dimensions of health: social determinants of health, elements of governance, and health system tasks and goals. AI paves the way for strengthening health system governance through various aspects, i.e., intelligence innovations, flexible boundaries, multidimensional analysis, new insights, and cognition modifications to the health ecosystem area. CONCLUSION AI is expected to be seen as a tool with new applications and capabilities, with the potential to change each component of governance in the health ecosystem, which can eventually help achieve health-related goals.
Affiliation(s)
- Maryam Ramezani
  - Department of Health Management, Policy and Economics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
  - Health Equity Research Centre (HERC), Tehran University of Medical Sciences, Tehran, Iran
- Amirhossein Takian
  - Department of Health Management, Policy and Economics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
  - Health Equity Research Centre (HERC), Tehran University of Medical Sciences, Tehran, Iran
  - Department of Global Health and Public Policy, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
- Ahad Bakhtiari
  - Health Equity Research Centre (HERC), Tehran University of Medical Sciences, Tehran, Iran
- Hamid R Rabiee
  - Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
- Sadegh Ghazanfari
  - Department of Health Management, Policy and Economics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
- Saharnaz Sazgarnejad
  - School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
  - School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
10
Platt J, Nong P, Merid B, Raj M, Cope E, Kardia S, Creary M. Applying anti-racist approaches to informatics: a new lens on traditional frames. J Am Med Inform Assoc 2023; 30:1747-1753. PMID: 37403330. PMCID: PMC10531112. DOI: 10.1093/jamia/ocad123.
Abstract
Health organizations and systems rely on increasingly sophisticated informatics infrastructure. Without anti-racist expertise, the field risks reifying and entrenching racism in information systems. We consider ways the informatics field can recognize institutional, systemic, and structural racism and propose the use of the Public Health Critical Race Praxis (PHCRP) to mitigate and dismantle racism in digital forms. We enumerate guiding questions for stakeholders along with a PHCRP-Informatics framework. By focusing on (1) critical self-reflection, (2) following the expertise of well-established scholars of racism, (3) centering the voices of affected individuals and communities, and (4) critically evaluating practice resulting from informatics systems, stakeholders can work to minimize the impacts of racism. Informatics, informed and guided by this proposed framework, will help realize the vision of health systems that are more fair, just, and equitable.
Affiliation(s)
- Jodyn Platt
  - Department of Learning Health Sciences, University of Michigan Medical School, 300 North Ingalls, Suite 1161, Ann Arbor, Michigan, USA
- Paige Nong
  - Department of Health Management and Policy, University of Michigan School of Public Health, Ann Arbor, Michigan, USA
- Beza Merid
  - School for the Future of Innovation in Society, Arizona State University, Tempe, Arizona, USA
- Minakshi Raj
  - Department of Kinesiology and Community Health, College of Applied Health Sciences, University of Illinois at Urbana Champaign, Champaign, Illinois, USA
- Sharon Kardia
  - Department of Epidemiology, University of Michigan School of Public Health, Ann Arbor, Michigan, USA
- Melissa Creary
  - Department of Health Management and Policy, University of Michigan School of Public Health, Ann Arbor, Michigan, USA
11
Gregg B. AI-Based Medical Solutions Can Threaten Physicians' Ethical Obligations Only If Allowed to Do So. Am J Bioeth 2023; 23:84-86. PMID: 37647468. DOI: 10.1080/15265161.2023.2237437.
12
Davis MA, Lim N, Jordan J, Yee J, Gichoya JW, Lee R. Imaging Artificial Intelligence: A Framework for Radiologists to Address Health Equity, From the AJR Special Series on DEI. AJR Am J Roentgenol 2023; 221:302-308. PMID: 37095660. DOI: 10.2214/ajr.22.28802.
Abstract
Artificial intelligence (AI) holds promise for helping patients access new and individualized health care pathways while increasing efficiencies for health care practitioners. Radiology has been at the forefront of this technology in medicine; many radiology practices are implementing and trialing AI-focused products. AI also holds great promise for reducing health disparities and promoting health equity. Radiology is ideally positioned to help reduce disparities given its central and critical role in patient care. The purposes of this article are to discuss the potential benefits and pitfalls of deploying AI algorithms in radiology, specifically highlighting the impact of AI on health equity; to explore ways to mitigate drivers of inequity; and to enhance pathways for creating better health care for all individuals, centering on a practical framework that helps radiologists address health equity during deployment of new tools.
Affiliation(s)
- Melissa A Davis
  - Department of Diagnostic Radiology, Yale University School of Medicine, 789 Howard Ave, PO Box 20842, New Haven, CT 06520
- John Jordan
  - Stanford University School of Medicine, Stanford, CA
- Judy Yee
  - Montefiore Medical Center, Albert Einstein College of Medicine, New York, NY
- Ryan Lee
  - Jefferson Health, Philadelphia, PA
13
Teeple S, Chivers C, Linn KA, Halpern SD, Eneanya N, Draugelis M, Courtright K. Evaluating equity in performance of an electronic health record-based 6-month mortality risk model to trigger palliative care consultation: a retrospective model validation analysis. BMJ Qual Saf 2023; 32:503-516. PMID: 37001995. PMCID: PMC10898860. DOI: 10.1136/bmjqs-2022-015173.
Abstract
OBJECTIVE Evaluate predictive performance of an electronic health record (EHR)-based, inpatient 6-month mortality risk model developed to trigger palliative care consultation among patient groups stratified by age, race, ethnicity, insurance and socioeconomic status (SES), which may vary due to social forces (eg, racism) that shape health, healthcare and health data. DESIGN Retrospective evaluation of prediction model. SETTING Three urban hospitals within a single health system. PARTICIPANTS All patients ≥18 years admitted between 1 January and 31 December 2017, excluding observation, obstetric, rehabilitation and hospice (n=58 464 encounters, 41 327 patients). MAIN OUTCOME MEASURES General performance metrics (c-statistic, integrated calibration index (ICI), Brier Score) and additional measures relevant to health equity (accuracy, false positive rate (FPR), false negative rate (FNR)). RESULTS For black versus non-Hispanic white patients, the model's accuracy was higher (0.051, 95% CI 0.044 to 0.059), FPR lower (-0.060, 95% CI -0.067 to -0.052) and FNR higher (0.049, 95% CI 0.023 to 0.078). A similar pattern was observed among patients who were Hispanic, younger, with Medicaid/missing insurance, or living in low SES zip codes. No consistent differences emerged in c-statistic, ICI or Brier Score. Younger age had the second-largest effect size in the mortality prediction model, and there were large standardised group differences in age (eg, 0.32 for non-Hispanic white versus black patients), suggesting age may contribute to systematic differences in the predicted probabilities between groups. CONCLUSIONS An EHR-based mortality risk model was less likely to identify some marginalised patients as potentially benefiting from palliative care, with younger age pinpointed as a possible mechanism. Evaluating predictive performance is a critical preliminary step in addressing algorithmic inequities in healthcare, which must also include evaluating clinical impact, and governance and regulatory structures for oversight, monitoring and accountability.
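The subgroup comparison this study performs (accuracy, FPR and FNR computed separately per patient group) is straightforward to reproduce. The sketch below is illustrative only, not the authors' code; the function and variable names are invented for the example.

```python
def group_error_rates(y_true, y_pred, group):
    """Accuracy, false positive rate and false negative rate per subgroup.

    Illustrative only: `y_true`/`y_pred` are 0/1 labels and predictions,
    `group` holds one group label per patient (e.g. race or age band).
    """
    rates = {}
    for g in set(group):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, group) if gg == g]
        tp = sum(1 for t, p in pairs if t == 1 and p == 1)
        tn = sum(1 for t, p in pairs if t == 0 and p == 0)
        fp = sum(1 for t, p in pairs if t == 0 and p == 1)
        fn = sum(1 for t, p in pairs if t == 1 and p == 0)
        rates[g] = {
            "accuracy": (tp + tn) / len(pairs),
            # FPR/FNR are undefined when a group has no negatives/positives
            "fpr": fp / (fp + tn) if (fp + tn) else None,
            "fnr": fn / (fn + tp) if (fn + tp) else None,
        }
    return rates
```

Equity-relevant gaps are then simple differences between entries, e.g. the FNR of one group minus another; a higher FNR in a group means that group's patients are more often missed by the consultation trigger, which is the pattern the study reports.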
Affiliation(s)
- Stephanie Teeple: Department of Biostatistics, Epidemiology & Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA; Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Kristin A Linn: Department of Biostatistics, Epidemiology & Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Scott D Halpern: Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA; Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Nwamaka Eneanya: Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA; Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Katherine Courtright: Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA; Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
14
Cho MK, Martinez-Martin N. Epistemic Rights and Responsibilities of Digital Simulacra for Biomedicine. Am J Bioeth 2023; 23:43-54. [PMID: 36507873] [PMCID: PMC10258225] [DOI: 10.1080/15265161.2022.2146785]
Abstract
Big data and AI have enabled digital simulation for prediction of future health states or behaviors of specific individuals, populations or humans in general. "Digital simulacra" use multimodal datasets to develop computational models that are virtual representations of people or groups, generating predictions of how systems evolve and react to interventions over time. These include digital twins and virtual patients for in silico clinical trials, both of which seek to transform research and health care by speeding innovation and bridging the epistemic gap between population-based research findings and their application to the individual. Nevertheless, digital simulacra mark a major milestone on a trajectory to embrace the epistemic culture of data science and a potential abandonment of medical epistemological concepts of causality and representation. In doing so, "data first" approaches potentially shift moral attention from actual patients and principles, such as equity, to simulated patients and patient data.
15
From patterns to patients: Advances in clinical machine learning for cancer diagnosis, prognosis, and treatment. Cell 2023; 186:1772-1791. [PMID: 36905928] [DOI: 10.1016/j.cell.2023.01.035]
Abstract
Machine learning (ML) is increasingly used in clinical oncology to diagnose cancers, predict patient outcomes, and inform treatment planning. Here, we review recent applications of ML across the clinical oncology workflow. We review how these techniques are applied to medical imaging and to molecular data obtained from liquid and solid tumor biopsies for cancer diagnosis, prognosis, and treatment design. We discuss key considerations in developing ML for the distinct challenges posed by imaging and molecular data. Finally, we examine ML models approved for cancer-related patient usage by regulatory agencies and discuss approaches to improve the clinical usefulness of ML.
16
Leveraging Clinical Informatics and Data Science to Improve Care and Facilitate Research in Pediatric Acute Respiratory Distress Syndrome: From the Second Pediatric Acute Lung Injury Consensus Conference. Pediatr Crit Care Med 2023; 24:S1-S11. [PMID: 36661432] [DOI: 10.1097/pcc.0000000000003155]
Abstract
OBJECTIVES The use of electronic algorithms, clinical decision support systems, and other clinical informatics interventions is increasing in critical care. Pediatric acute respiratory distress syndrome (PARDS) is a complex, dynamic condition associated with large amounts of clinical data and frequent decisions at the bedside. Novel data-driven technologies that can help screen, prompt, and support clinician decision-making could have a significant impact on patient outcomes. We sought to identify and summarize relevant evidence related to clinical informatics interventions in both PARDS and adult respiratory distress syndrome (ARDS), for the second Pediatric Acute Lung Injury Consensus Conference. DATA SOURCES MEDLINE (Ovid), Embase (Elsevier), and CINAHL Complete (EBSCOhost). STUDY SELECTION We included studies of pediatric or adult critically ill patients with or at risk of ARDS that examined automated screening tools, electronic algorithms, or clinical decision support systems. DATA EXTRACTION Title/abstract review, full text review, and data extraction using a standardized data extraction form. DATA SYNTHESIS The Grading of Recommendations Assessment, Development and Evaluation approach was used to identify and summarize evidence and develop recommendations. Twenty-six studies were identified for full text extraction to address the Patient/Intervention/Comparator/Outcome questions, and 14 were used for the recommendations/statements. Two clinical recommendations were generated, related to the use of electronic screening tools and automated monitoring of compliance with best practice guidelines. Two research statements were generated, related to the development of multicenter data collaborations and the design of generalizable algorithms and electronic tools. One policy statement was generated, related to the provision of material and human resources by healthcare organizations to empower clinicians to develop clinical informatics interventions to improve the care of patients with PARDS. CONCLUSIONS We present two clinical recommendations and three statements (two research, one policy) for the use of electronic algorithms and clinical informatics tools for patients with PARDS, based on a systematic review of the literature and expert consensus.
17
d'Elia A, Gabbay M, Rodgers S, Kierans C, Jones E, Durrani I, Thomas A, Frith L. Artificial intelligence and health inequities in primary care: a systematic scoping review and framework. Fam Med Community Health 2022; 10:fmch-2022-001670. [PMID: 36450391] [PMCID: PMC9716837] [DOI: 10.1136/fmch-2022-001670]
Abstract
OBJECTIVE Artificial intelligence (AI) will have a significant impact on healthcare over the coming decade. At the same time, health inequity remains one of the biggest challenges. Primary care is both a driver and a mitigator of health inequities, and with AI gaining traction in primary care, there is a need for a holistic understanding of how AI affects health inequities, both through the act of providing care and through potential system effects. This paper presents a systematic scoping review of the ways AI implementation in primary care may impact health inequity. DESIGN Following a systematic scoping review approach, we searched for literature related to AI, health inequity, and implementation challenges of AI in primary care. In addition, articles were added from primary exploratory searches and through reference screening. The results were thematically summarised and used to produce both a narrative and a conceptual model for the mechanisms by which social determinants of health and AI in primary care could interact to either improve or worsen health inequities. Two public advisors were involved in the review process. ELIGIBILITY CRITERIA Peer-reviewed publications and grey literature in English and Scandinavian languages. INFORMATION SOURCES PubMed, SCOPUS and JSTOR. RESULTS A total of 1529 publications were identified, of which 86 met the inclusion criteria. The findings were summarised under six different domains, covering both positive and negative effects: (1) access, (2) trust, (3) dehumanisation, (4) agency for self-care, (5) algorithmic bias and (6) external effects. The first five domains cover aspects of the interface between the patient and the primary care system, while the last domain covers care system-wide and societal effects of AI in primary care. A graphical model has been produced to illustrate this. Community involvement throughout the whole process of designing and implementing AI in primary care was a common suggestion to mitigate the potential negative effects of AI. CONCLUSION AI has the potential to affect health inequities in a multitude of ways, both directly in the patient consultation and through transformative system effects. This review summarises these effects from a system perspective and provides a base for future research into responsible implementation.
Affiliation(s)
- Alexander d'Elia: Department of Public Health, Policy and Systems, University of Liverpool, Liverpool, UK
- Mark Gabbay: Primary Care and Mental Health, University of Liverpool, Liverpool, UK
- Sarah Rodgers: Department of Public Health, Policy and Systems, University of Liverpool, Liverpool, UK
- Ciara Kierans: Department of Public Health, Policy and Systems, University of Liverpool, Liverpool, UK
- Elisa Jones: Department of Public Health, Policy and Systems, University of Liverpool, Liverpool, UK
- Lucy Frith: Centre for Social Ethics & Policy, The University of Manchester, Manchester, UK
18
Lennerz JK, Green U, Williamson DFK, Mahmood F. A unifying force for the realization of medical AI. NPJ Digit Med 2022; 5:172. [PMID: 36380011] [PMCID: PMC9666657] [DOI: 10.1038/s41746-022-00721-7]
19
Lohmann P, Franceschi E, Vollmuth P, Dhermain F, Weller M, Preusser M, Smits M, Galldiks N. Radiomics in neuro-oncological clinical trials. Lancet Digit Health 2022; 4:e841-e849. [PMID: 36182633] [DOI: 10.1016/s2589-7500(22)00144-3]
Abstract
The development of clinical trials has led to substantial improvements in the prevention and treatment of many diseases, including brain cancer. Advances in medicine, such as improved surgical techniques, the development of new drugs and devices, the use of statistical methods in research, and the development of codes of ethics, have considerably influenced the way clinical trials are conducted today. In addition, methods from the broad field of artificial intelligence, such as radiomics, have the potential to considerably affect clinical trials and clinical practice in the future. Radiomics is a method to extract undiscovered features from routinely acquired imaging data that can be captured neither by human perception nor by conventional image analysis. In patients with brain cancer, radiomics has shown its potential for the non-invasive identification of prognostic biomarkers, automated response assessment, and differentiation of treatment-related changes from tumour progression. Despite promising results, radiomics is not yet established in routine clinical practice nor in clinical trials. In this Viewpoint, the European Organization for Research and Treatment of Cancer Brain Tumour Group summarises the current status of radiomics, discusses its potential and limitations, envisions its future role in clinical trials in neuro-oncology, and provides guidance on how to address the challenges in radiomics.
Affiliation(s)
- Philipp Lohmann: Institute of Neuroscience and Medicine (INM-3, INM-4), Research Center Juelich (FZJ), Juelich, Germany; Department of Stereotactic and Functional Neurosurgery, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium
- Enrico Franceschi: Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium; IRCCS Istituto Scienze Neurologiche di Bologna, Nervous System Medical Oncology Department, Bologna, Italy
- Philipp Vollmuth: Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium; Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Frédéric Dhermain: Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium; Radiation Oncology Department, Gustave Roussy University Hospital, Cancer Campus Grand Paris, Villejuif, France
- Michael Weller: Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium; Department of Neurology, University Hospital and University of Zurich, Zurich, Switzerland
- Matthias Preusser: Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium; Division of Oncology, Department of Internal Medicine I, Medical University of Vienna, Vienna, Austria
- Marion Smits: Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium; Department of Radiology and Nuclear Medicine and Brain Tumour Center, Erasmus Medical Center, Rotterdam, Netherlands
- Norbert Galldiks: Institute of Neuroscience and Medicine (INM-3, INM-4), Research Center Juelich (FZJ), Juelich, Germany; Department of Neurology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Brain Tumour Group, European Organization for Research and Treatment of Cancer, Brussels, Belgium; Center for Integrated Oncology, Universities of Aachen, Bonn, Cologne, and Duesseldorf, Cologne, Germany
20
Systematic analysis of the test design and performance of AI/ML-based medical devices approved for triage/detection/diagnosis in the USA and Japan. Sci Rep 2022; 12:16874. [PMID: 36207474] [PMCID: PMC9542463] [DOI: 10.1038/s41598-022-21426-7]
Abstract
The development of computer-aided detection (CAD) using artificial intelligence (AI) and machine learning (ML) is rapidly evolving. Submission of AI/ML-based CAD devices for regulatory approval requires information about clinical trial design and performance criteria, but the requirements vary between countries. This study compares the requirements for AI/ML-based CAD devices approved by the US Food and Drug Administration (FDA) and the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan. A list of 45 FDA-approved and 12 PMDA-approved AI/ML-based CAD devices was compiled. In the USA, devices classified as computer-aided simple triage were approved based on standalone software testing, whereas devices classified as computer-aided detection/diagnosis were approved based on reader study testing. In Japan, however, there was no clear distinction between evaluation methods according to the category. In the USA, a prospective randomized controlled trial was conducted for AI/ML-based CAD devices used for the detection of colorectal polyps, whereas in Japan, such devices were approved based on standalone software testing. This study indicated that the different viewpoints of AI/ML-based CAD in the two countries influenced the selection of different evaluation methods. This study’s findings may be useful for defining a unified global development and approval standard for AI/ML-based CAD.
21
Couckuyt A, Seurinck R, Emmaneel A, Quintelier K, Novak D, Van Gassen S, Saeys Y. Challenges in translational machine learning. Hum Genet 2022; 141:1451-1466. [PMID: 35246744] [PMCID: PMC8896412] [DOI: 10.1007/s00439-022-02439-8]
Abstract
Machine learning (ML) algorithms are increasingly being used to help implement clinical decision support systems. In this new field, which we define as "translational machine learning", joint efforts and strong communication between data scientists and clinicians help to span the gap between ML and its adoption in the clinic. These collaborations also improve interpretability and trust in translational ML methods and ultimately aim to result in generalizable and reproducible models. To help clinicians and bioinformaticians refine their translational ML pipelines, we review the steps from model building to the use of ML in the clinic. We discuss experimental setup, computational analysis, interpretability and reproducibility, and emphasize the challenges involved. We strongly advise collaboration and data sharing between consortia and institutes to build multi-centric cohorts that facilitate ML methodologies that generalize across centers. In the end, we hope that this review provides a way to streamline translational ML and helps to tackle the challenges that come with it.
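The multi-centric generalization this review advocates is often checked with leave-one-center-out evaluation: train on all centers but one, then test on the held-out center. A minimal sketch of that loop, not taken from the paper (the `train_fn`/`eval_fn` callback names are invented placeholders for any model-fitting and scoring routine):

```python
def leave_one_center_out(records, train_fn, eval_fn):
    """Assess cross-center generalization.

    `records` maps a center name to its list of samples. For each center,
    a model is trained on the pooled data of all other centers (via the
    caller-supplied `train_fn`) and scored on the held-out center (via
    `eval_fn`). Returns a dict of per-center held-out scores.
    """
    results = {}
    for held_out in records:
        # Pool every sample that does NOT belong to the held-out center
        train = [s for c, rows in records.items() if c != held_out for s in rows]
        model = train_fn(train)
        results[held_out] = eval_fn(model, records[held_out])
    return results
```

A large spread between per-center scores is the signature of a model that fits one site's practice patterns rather than the underlying biology, which is exactly the failure mode the review warns against.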
Affiliation(s)
- Artuur Couckuyt: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium; Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Ruth Seurinck: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium; Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Annelies Emmaneel: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium; Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Katrien Quintelier: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium; Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium; Department of Pulmonary Diseases, Erasmus MC, Rotterdam, The Netherlands
- David Novak: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium; Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Sofie Van Gassen: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium; Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Yvan Saeys: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium; Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
22
Pandey A, Adedinsewo D. The Future of AI-Enhanced ECG Interpretation for Valvular Heart Disease Screening. J Am Coll Cardiol 2022; 80:627-630. [PMID: 35926936] [DOI: 10.1016/j.jacc.2022.05.034]
Affiliation(s)
- Ambarish Pandey: Division of Cardiology, Department of Internal Medicine, UT Southwestern Medical Center, Dallas, Texas, USA
- Demilade Adedinsewo: Department of Cardiovascular Medicine, Mayo Clinic, Jacksonville, Florida, USA
23
Juhn YJ, Ryu E, Wi CI, King KS, Malik M, Romero-Brufau S, Weng C, Sohn S, Sharp RR, Halamka JD. Assessing socioeconomic bias in machine learning algorithms in health care: a case study of the HOUSES index. J Am Med Inform Assoc 2022; 29:1142-1151. [PMID: 35396996] [PMCID: PMC9196683] [DOI: 10.1093/jamia/ocac052]
Abstract
OBJECTIVE Artificial intelligence (AI) models may propagate harmful biases in performance and hence negatively affect the underserved. We aimed to assess the degree to which electronic health record (EHR) data quality, affected by inequities related to low socioeconomic status (SES), results in differential performance of AI models across SES levels. MATERIALS AND METHODS This study utilized existing machine learning models for predicting asthma exacerbation in children with asthma. We compared balanced error rate (BER) across different SES levels measured by the HOUsing-based SocioEconomic Status measure (HOUSES) index. As a possible mechanism for differential performance, we also compared incompleteness of EHR information relevant to asthma care by SES. RESULTS Asthmatic children with lower SES had larger BER than those with higher SES (eg, ratio = 1.35 for HOUSES Q1 vs Q2-Q4) and had a higher proportion of missing information relevant to asthma care (eg, 41% vs 24% for missing asthma severity and 12% vs 9.8% for undiagnosed asthma despite meeting asthma criteria). DISCUSSION Our study suggests that lower SES is associated with worse predictive model performance. It also highlights the potential role of incomplete EHR data in this differential performance and suggests a way to mitigate this bias. CONCLUSION The HOUSES index allows AI researchers to assess bias in predictive model performance by SES. Although our case study was based on a small sample size and a single-site study, the study results highlight a potential strategy for identifying bias by using an innovative SES measure.
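The balanced error rate compared across SES strata here is the mean of the two class-conditional error rates, which keeps the comparison meaningful even when exacerbation base rates differ between strata. A hedged sketch of the metric and the Q1-vs-rest ratio the abstract quotes (illustrative names, not the study's code):

```python
def balanced_error_rate(y_true, y_pred):
    """BER = 0.5 * (FPR + FNR); assumes both classes are present."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return 0.5 * (fp / (fp + tn) + fn / (fn + tp))

def ber_ratio(y_true, y_pred, low_ses):
    """Ratio of BER in the low-SES stratum vs everyone else
    (e.g. HOUSES Q1 vs Q2-Q4); `low_ses` is one boolean per patient."""
    lo = [(t, p) for t, p, s in zip(y_true, y_pred, low_ses) if s]
    hi = [(t, p) for t, p, s in zip(y_true, y_pred, low_ses) if not s]
    return balanced_error_rate(*zip(*lo)) / balanced_error_rate(*zip(*hi))
```

A ratio above 1 (such as the 1.35 reported) means the model makes balanced errors more often for the low-SES stratum.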
Affiliation(s)
- Young J Juhn: Precision Population Science Lab, Mayo Clinic, Rochester, Minnesota, USA; Artificial Intelligence Program of Department of Pediatric and Adolescent Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Euijung Ryu: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota, USA
- Chung-Il Wi: Precision Population Science Lab, Mayo Clinic, Rochester, Minnesota, USA; Artificial Intelligence Program of Department of Pediatric and Adolescent Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Katherine S King: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota, USA
- Momin Malik: Center for Digital Health, Mayo Clinic, Rochester, Minnesota, USA
- Chunhua Weng: Department of Biomedical Informatics, Columbia University, New York, New York, USA
- Sunghwan Sohn: Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota, USA
- Richard R Sharp: Biomedical Ethics Program, Mayo Clinic, Rochester, Minnesota, USA
- John D Halamka: Center for Digital Health, Mayo Clinic, Rochester, Minnesota, USA; Mayo Clinic Platform, Rochester, Minnesota, USA
24
Nong P, Williamson A, Anthony D, Platt J, Kardia S. Discrimination, trust, and withholding information from providers: Implications for missing data and inequity. SSM Popul Health 2022; 18:101092. [PMID: 35479582] [PMCID: PMC9035429] [DOI: 10.1016/j.ssmph.2022.101092]
Abstract
Quality care requires collaborative communication, information exchange, and decision-making between patients and providers. Complete and accurate data about patients and from patients are especially important as high volumes of data are used to build clinical decision support tools and inform precision medicine initiatives. However, systematically missing data can bias these tools and threaten their effectiveness. Data completeness relies in many ways on patients being comfortable disclosing information to their providers without prohibitive concerns about security or privacy. Patients are likely to withhold information in the context of low trust relationships with providers, but it is unknown how experiences of discrimination in the healthcare system also relate to non-disclosure. In this study, we assess the relationship between withholding information from providers, experiences of discrimination, and multiple types of patient trust. Using a nationally representative sample of US adults (n = 2,029), weighted logistic regression modeling indicated a statistically significant relationship between experiences of discrimination and withholding information from providers (OR 3.7; CI [2.6-5.2], p < .001). Low trust in provider disclosure of conflicts of interest and low trust in providers' responsible use of health information were also positively associated with non-disclosure. We further analyzed the relationship between non-disclosure and the five most common types of discrimination (e.g., discrimination based on race, education/income, weight, gender, and age). We observed that all five types were statistically significantly associated with non-disclosure (p < .05). These results suggest that experiences of discrimination and specific types of low trust have a meaningful association with a patient's willingness to share information with their provider, with important implications for the quality of data available for medical decision-making and care. Because incomplete information can contribute to lower quality care, especially in the context of data-driven decision-making, patients experiencing discrimination may be further disadvantaged and harmed by systematic data missingness in their records.
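The study's headline figure (OR 3.7, 95% CI 2.6 to 5.2) comes from weighted logistic regression with covariates. The simpler unadjusted analogue, an odds ratio from a 2x2 table with a Woolf (log-scale) confidence interval, can be sketched as follows; this is illustrative, not the study's analysis, and it omits the survey weights and adjustment the paper applies.

```python
import math

def odds_ratio_2x2(a, b, c, d):
    """Unadjusted odds ratio with 95% CI (Woolf/log method) for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    if min(a, b, c, d) == 0:
        raise ValueError("zero cell: apply a continuity correction first")
    odds_ratio = (a * d) / (b * c)
    # Standard error of log(OR) under the Woolf method
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)
    return odds_ratio, lo, hi
```

In this setting "exposed" would mean reporting discrimination and the outcome would be withholding information from providers; the counts themselves would come from the survey data, which this sketch does not reproduce.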
Affiliation(s)
- Paige Nong: University of Michigan School of Public Health, 1415 Washington Heights, Ann Arbor, MI 48109, USA
- Alicia Williamson: University of Michigan School of Information, 105 S. State St, Ann Arbor, MI 48109, USA
- Denise Anthony: University of Michigan School of Public Health, 1415 Washington Heights, Ann Arbor, MI 48109, USA
- Jodyn Platt: University of Michigan Department of Learning Health Sciences, 300 N Ingalls St, Ann Arbor, MI 48109, USA
- Sharon Kardia: University of Michigan School of Public Health, 1415 Washington Heights, Ann Arbor, MI 48109, USA
25
McCradden MD, Anderson JA, Stephenson EA, Drysdale E, Erdman L, Goldenberg A, Zlotnik Shaul R. A Research Ethics Framework for the Clinical Translation of Healthcare Machine Learning. Am J Bioeth 2022; 22:8-22. [PMID: 35048782] [DOI: 10.1080/15265161.2021.2013977]
Abstract
The application of artificial intelligence and machine learning (ML) technologies in healthcare has immense potential to improve the care of patients. While there are some emerging practices surrounding responsible ML as well as regulatory frameworks, the traditional role of research ethics oversight has been relatively unexplored regarding its relevance for clinical ML. In this paper, we provide a comprehensive research ethics framework that can apply to the systematic inquiry of ML research across its development cycle. The pathway consists of three stages: (1) exploratory, hypothesis-generating data access; (2) silent period evaluation; (3) prospective clinical evaluation. We connect each stage to its literature and ethical justification and suggest adaptations to traditional paradigms to suit ML while maintaining ethical rigor and the protection of individuals. This pathway can accommodate a multitude of research designs from observational to controlled trials, and the stages can apply individually to a variety of ML applications.
Affiliation(s)
- Melissa D McCradden: Department of Bioethics, The Hospital for Sick Children; Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning; Division of Clinical & Public Health, Dalla Lana School of Public Health
- James A Anderson: Department of Bioethics, The Hospital for Sick Children; Institute for Health Management Policy & Evaluation, University of Toronto
- Elizabeth A Stephenson: Labatt Family Heart Centre, The Hospital for Sick Children; Department of Pediatrics, The Hospital for Sick Children
- Erik Drysdale: Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Lauren Erdman: Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning; Vector Institute; Department of Computer Science, University of Toronto
- Anna Goldenberg: Department of Bioethics, The Hospital for Sick Children; Vector Institute; Department of Computer Science, University of Toronto; CIFAR
- Randi Zlotnik Shaul: Department of Bioethics, The Hospital for Sick Children; Department of Pediatrics, The Hospital for Sick Children; Child Health Evaluative Sciences, The Hospital for Sick Children
26
Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int J Med Inform 2022; 161:104738. [PMID: 35299098 DOI: 10.1016/j.ijmedinf.2022.104738] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Revised: 02/11/2022] [Accepted: 03/10/2022] [Indexed: 10/18/2022]
Abstract
INTRODUCTION Recent developments in the field of Artificial Intelligence (AI) applied to healthcare promise to solve many of the existing global issues in advancing human health and managing global health challenges. This comprehensive review aims to surface not only the underlying ethical and legal but also the social implications (ELSI) that have been overlooked in recent reviews while deserving equal attention in the development stage, and certainly ahead of implementation in healthcare. It is intended to guide various stakeholders (e.g., designers, engineers, clinicians) in addressing the ELSI of AI at the design stage using the Ethics by Design (EbD) approach. METHODS The authors followed a systematised scoping methodology and searched the following databases: PubMed, Web of Science, Ovid, Scopus, IEEE Xplore, EBSCO Search (Academic Search Premier, CINAHL, PsycINFO, APA PsycArticles, ERIC) for the ELSI of AI in healthcare through January 2021. Data were charted and synthesised, and the authors conducted a descriptive and thematic analysis of the collected data. RESULTS After reviewing 1108 papers, 94 were included in the final analysis. Our results show a growing interest in the academic community in ELSI in the field of AI. The main issues of concern identified in our analysis fall into four main clusters of impact: AI algorithms, physicians, patients, and healthcare in general. The most prevalent issues are patient safety, algorithmic transparency, lack of proper regulation, liability and accountability, impact on the patient-physician relationship, and governance of AI-empowered healthcare. CONCLUSIONS The results of our review confirm the potential of AI to significantly improve patient care, but the drawbacks to its implementation relate to complex ELSI that have yet to be addressed. Most ELSI refer to the impact on and extension of the reciprocal and fiduciary patient-physician relationship. With the integration of AI-based decision-making tools, a bilateral patient-physician relationship may shift into a trilateral one.
Collapse
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
| | - Ana Tomičić
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
| | - Elvira Lazić Mosler
- School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; General Hospital Dr. Ivo Pedišić, Sisak, Croatia.
| |
Collapse
|
27
|
Abstract
The use of machine learning (ML) in healthcare raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of healthcare. Specifically, we frame ethics of ML in healthcare through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to postdeployment considerations. We close by summarizing recommendations to address these challenges.
Collapse
Affiliation(s)
- Irene Y Chen
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
| | - Emma Pierson
- Microsoft Research, Cambridge, Massachusetts 02143, USA
| | - Sherri Rose
- Center for Health Policy and Center for Primary Care and Outcomes Research, Stanford University, Stanford, California 94305, USA
| | | | - Kadija Ferryman
- Department of Technology, Culture, and Society, Tandon School of Engineering, New York University, Brooklyn, New York 11201, USA
| | - Marzyeh Ghassemi
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
| |
Collapse
|
28
|
How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nat Med 2021; 27:582-584. [PMID: 33820998 DOI: 10.1038/s41591-021-01312-x] [Citation(s) in RCA: 167] [Impact Index Per Article: 55.7] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
29
|
Zou J, Schiebinger L. Ensuring that biomedical AI benefits diverse populations. EBioMedicine 2021; 67:103358. [PMID: 33962897 PMCID: PMC8176083 DOI: 10.1016/j.ebiom.2021.103358] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 04/12/2021] [Accepted: 04/12/2021] [Indexed: 12/21/2022] Open
Abstract
Artificial Intelligence (AI) can potentially impact many aspects of human health, from basic research discovery to individual health assessment. It is critical that these advances in technology broadly benefit diverse populations from around the world. This can be challenging because AI algorithms are often developed on non-representative samples and evaluated based on narrow metrics. Here we outline key challenges to biomedical AI in outcome design, data collection and technology evaluation, and use examples from precision health to illustrate how bias and health disparity may arise in each stage. We then suggest both short-term approaches, including more diverse data collection and AI monitoring, and longer-term structural changes in funding, publications, and education to address these challenges.
Collapse
Affiliation(s)
- James Zou
- Department of Biomedical Data Science, Stanford University, United States
| | | |
Collapse
|
30
|
Wawira Gichoya J, McCoy LG, Celi LA, Ghassemi M. Equity in essence: a call for operationalising fairness in machine learning for healthcare. BMJ Health Care Inform 2021; 28:e100289. [PMID: 33910923 PMCID: PMC8733939 DOI: 10.1136/bmjhci-2020-100289] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2020] [Revised: 02/07/2021] [Accepted: 02/09/2021] [Indexed: 12/22/2022] Open
Affiliation(s)
- Judy Wawira Gichoya
- Department of Radiology & Imaging Sciences, Emory University, Atlanta, Georgia, USA
- Fogarty International Center, National Institutes of Health (NIH), Bethesda, Maryland, USA
| | - Liam G McCoy
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Leo Anthony Celi
- Laboratory for Computational Physiology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts, USA
- Division of Pulmonary Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
| | - Marzyeh Ghassemi
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
| |
Collapse
|
31
|
Petersen C, Smith J, Freimuth RR, Goodman KW, Jackson GP, Kannry J, Liu H, Madhavan S, Sittig DF, Wright A. Recommendations for the safe, effective use of adaptive CDS in the US healthcare system: an AMIA position paper. J Am Med Inform Assoc 2021; 28:677-684. [PMID: 33447854 DOI: 10.1093/jamia/ocaa319] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 12/01/2020] [Indexed: 02/07/2023] Open
Abstract
The development and implementation of clinical decision support (CDS) that trains itself and adapts its algorithms based on new data, here referred to as Adaptive CDS, present unique challenges and considerations. Although Adaptive CDS represents an expected progression from earlier work, the activities needed to appropriately manage and support the establishment and evolution of Adaptive CDS require new, coordinated initiatives and oversight that do not currently exist. In this AMIA position paper, the authors describe current and emerging challenges to the safe use of Adaptive CDS and lay out recommendations for the effective management and monitoring of Adaptive CDS.
Collapse
Affiliation(s)
- Carolyn Petersen
- Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota, USA
| | - Jeffery Smith
- The Office of the National Coordinator for Health Information Technology, Washington, DC, USA
| | - Robert R Freimuth
- Division of Digital Health Sciences, Center for Individualized Medicine, Mayo Clinic, Rochester, Minnesota, USA
| | - Kenneth W Goodman
- Institute for Bioethics and Health Policy, University of Miami Miller School of Medicine, Miami, Florida, USA
| | - Gretchen Purcell Jackson
- IBM Watson Health, Cambridge, Massachusetts, USA
- Department of Pediatric Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Joseph Kannry
- Mount Sinai Health System, Icahn School of Medicine at Mount Sinai, New York, New York, USA
| | - Hongfang Liu
- Division of Digital Health Sciences, Mayo Clinic, Rochester, Minnesota, USA
| | - Subha Madhavan
- Department of Oncology, Georgetown Lombardi Comprehensive Cancer Center, Innovation Center for Biomedical Informatics, Georgetown University Medical Center, Washington, DC, USA
| | - Dean F Sittig
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, UT-Memorial Hermann Center for Healthcare Quality & Safety, Houston, Texas, USA
| | - Adam Wright
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| |
Collapse
|
32
|
Smith J. Setting the agenda: an informatics-led policy framework for adaptive CDS. J Am Med Inform Assoc 2020; 27:1831-1833. [PMID: 33301025 PMCID: PMC7727380 DOI: 10.1093/jamia/ocaa239] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Indexed: 03/31/2024] Open
Affiliation(s)
- Jeffery Smith
- American Medical Informatics Association, Bethesda, Maryland, USA
| |
Collapse
|