1. Arriagada-Bruneau G, López C, Davidoff A. A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers. Science and Engineering Ethics 2024; 31:1. PMID: 39688772; PMCID: PMC11652403; DOI: 10.1007/s11948-024-00526-9.
Abstract
We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in the AI literature where biases are treated as separate occurrences linked to specific stages of an AI pipeline. Dealing with these multiple biases can overwhelm developers who try to manage each potential bias individually, or promote an uncritical view of how biases influence developers' decision-making. The BNA fosters dialogue and a critical stance among developers, guided by external experts, using graphical representations to depict connections between biases. To test the BNA, we conducted a pilot case study on the "waiting list" project, in which a small AI developer team built a healthcare waiting-list NLP model in Chile. The analysis showed promising findings: (i) the BNA aids in visualizing interconnected biases and their impacts, facilitating ethical reflection in a more accessible way; (ii) it promotes transparency in decision-making throughout AI development; and (iii) more focus is needed on professional biases and material limitations as sources of bias in AI development.
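The BNA's central artifact is a graph in which biases are nodes and influence relations are edges. As a rough, hypothetical sketch of that representation (the bias names, pipeline stages, and edges below are invented for illustration and do not reproduce the authors' tooling or notation), one might build such a network in Python with networkx:

```python
# Hypothetical bias network for an AI project, sketched with networkx.
# Node names, stages, and edges are invented for illustration.
import networkx as nx

G = nx.DiGraph()

# Biases (nodes), tagged with the pipeline stage where they arise.
G.add_node("sampling bias", stage="data collection")
G.add_node("label bias", stage="annotation")
G.add_node("professional bias", stage="team decision-making")
G.add_node("deployment bias", stage="use in production")

# Directed edges read "bias A influences/amplifies bias B".
G.add_edge("professional bias", "label bias")
G.add_edge("sampling bias", "label bias")
G.add_edge("label bias", "deployment bias")
G.add_edge("professional bias", "sampling bias")

# Biases with many incoming influences are candidates for joint rather
# than isolated ethical review, the interconnection the BNA highlights.
for bias, n_influences in sorted(G.in_degree, key=lambda x: -x[1]):
    print(f"{bias}: influenced by {n_influences} other bias(es)")
```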
Affiliation(s)
- Gabriela Arriagada-Bruneau
  - Instituto de Éticas Aplicadas, Instituto de Ingeniería Matemática y Computacional, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Santiago, Chile
  - Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile
- Claudia López
  - Departamento de Informática, Universidad Técnica Federico Santa María, Avenida España 1680, Valparaíso, Chile
- Alexandra Davidoff
  - Sociology of Childhood and Children's Rights, Social Research Institute, UCL, 20 Bedford Way, London, UK
  - Nucleo Futures of Artificial Intelligence Research (FAIR), Santiago, Chile
2. Trentz C, Engelbart J, Semprini J, Kahl A, Anyimadu E, Buatti J, Casavant T, Charlton M, Canahuate G. Evaluating machine learning model bias and racial disparities in non-small cell lung cancer using SEER registry data. Health Care Management Science 2024; 27:631-649. PMID: 39495385; DOI: 10.1007/s10729-024-09691-6.
Abstract
BACKGROUND: Despite decades of pursuing health equity, racial and ethnic disparities persist in American healthcare. For cancer specifically, one of the leading observed disparities is worse mortality among non-Hispanic Black patients compared to non-Hispanic White patients across the cancer care continuum. These real-world disparities are reflected in the data used to inform the decisions made to alleviate such inequities. Failing to account for inherently biased data underlying these observations could intensify racial cancer disparities and lead to misguided efforts that fail to address the real causes of health inequity.
OBJECTIVE: To estimate the racial/ethnic bias of machine learning models in predicting two-year survival and surgery treatment recommendation for non-small cell lung cancer (NSCLC) patients.
METHODS: A Cox survival model and a logit model, as well as three other machine learning models for predicting surgery recommendation, were trained using SEER data from NSCLC patients diagnosed between 2000 and 2018. Models were trained with a 70/30 train/test split (both including and excluding race/ethnicity) and evaluated using performance and fairness metrics. The effects of oversampling the training data were also evaluated.
RESULTS: The survival models show disparate impact towards non-Hispanic Black patients regardless of whether race/ethnicity is used as a predictor. The models including race/ethnicity amplified the disparities observed in the data. The exclusion of race/ethnicity as a predictor in the survival and surgery recommendation models improved fairness metrics without degrading model performance. Stratified oversampling strategies reduced disparate impact but also reduced model accuracy.
CONCLUSION: NSCLC disparities are complex and multifaceted. Yet, even when accounting for age and stage at diagnosis, non-Hispanic Black patients with NSCLC are recommended surgery less often than non-Hispanic White patients. Machine learning models amplified the racial/ethnic disparities across the cancer care continuum that are reflected in the data used to make model decisions. Excluding race/ethnicity lowered the bias of the models but did not affect disparate impact. Developing analytical strategies to improve fairness would in turn improve the utility of machine learning approaches for analyzing population-based cancer data.
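For readers who want to reproduce the kind of evaluation described, a minimal sketch of two of the fairness metrics named here, disparate impact and an equalized-odds gap, is shown below; the toy arrays and the binary group encoding are assumptions for illustration, not the authors' code or data:

```python
# Sketch of two fairness metrics from the abstract, computed over binary
# predictions with numpy. Toy data; the 0/1 group encoding is assumed.
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-prediction rates, unprivileged over privileged.
    Values near 1.0 indicate parity; below 0.8 is commonly flagged."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in TPR (y_true == 1) or FPR
    (y_true == 0); 0.0 means the model satisfies equalized odds."""
    gaps = []
    for label in (1, 0):
        mask = y_true == label
        rate0 = y_pred[mask & (group == 0)].mean()
        rate1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate0 - rate1))
    return max(gaps)

# Toy example: 8 patients, predictions from some trained model.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # sensitive attribute (toy)

print(f"disparate impact:   {disparate_impact(y_pred, group):.2f}")
print(f"equalized-odds gap: {equalized_odds_gap(y_true, y_pred, group):.2f}")
```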
Affiliation(s)
- Cameron Trentz
  - Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
- Jacklyn Engelbart
  - Epidemiology Department, University of Iowa, Iowa City, Iowa, USA
  - General Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Jason Semprini
  - Health Management & Policy Department, University of Iowa, Iowa City, Iowa, USA
- Amanda Kahl
  - Epidemiology Department, University of Iowa, Iowa City, Iowa, USA
- Eric Anyimadu
  - Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
- John Buatti
  - Radiation Oncology Department, University of Iowa, Iowa City, Iowa, USA
- Thomas Casavant
  - Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
- Mary Charlton
  - Epidemiology Department, University of Iowa, Iowa City, Iowa, USA
- Guadalupe Canahuate
  - Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
3. Färber A, Schwabe C, Stalder PH, Dolata M, Schwabe G. Physicians' and Patients' Expectations From Digital Agents for Consultations: Interview Study Among Physicians and Patients. JMIR Human Factors 2024; 11:e49647. PMID: 38498022; PMCID: PMC10985611; DOI: 10.2196/49647.
Abstract
BACKGROUND: Physicians are currently overwhelmed by administrative tasks and spend very little time in consultations with patients, which hampers health literacy, shared decision-making, and treatment adherence.
OBJECTIVE: This study aims to examine whether digital agents built on fast-evolving generative artificial intelligence, such as ChatGPT, have the potential to improve consultations, treatment adherence, and health literacy. We interviewed patients and physicians to obtain their opinions about 3 digital agents: a silent digital expert, a communicative digital expert, and a digital companion (DC).
METHODS: We conducted in-depth interviews with 25 patients and 22 physicians from a purposeful sample, with the patients having a wide age range and coming from different educational backgrounds and the physicians having different medical specialties. Transcripts of the interviews were deductively coded using MAXQDA (VERBI Software GmbH) and then summarized by code and interview before being clustered for interpretation.
RESULTS: Statements from patients and physicians were categorized according to three consultation phases: (1) silent and communicative digital experts that are part of the consultation, (2) digital experts that hand over to a DC, and (3) DCs that support patients in the period between consultations. Overall, patients and physicians were open to these forms of digital support but had reservations about all 3 agents.
CONCLUSIONS: Ultimately, we derived 9 requirements for designing digital agents to support consultations, treatment adherence, and health literacy based on the literature and our qualitative findings.
Affiliation(s)
- Andri Färber
  - ZHAW School of Management and Law, Zurich University of Applied Sciences, Winterthur, Switzerland
  - Department of Informatics, University of Zurich, Zurich, Switzerland
- Philipp H Stalder
  - ZHAW School of Management and Law, Zurich University of Applied Sciences, Winterthur, Switzerland
- Mateusz Dolata
  - Department of Informatics, University of Zurich, Zurich, Switzerland
- Gerhard Schwabe
  - Department of Informatics, University of Zurich, Zurich, Switzerland
4. Iqbal J, Cortés Jaimes DC, Makineni P, Subramani S, Hemaida S, Thugu TR, Butt AN, Sikto JT, Kaur P, Lak MA, Augustine M, Shahzad R, Arain M. Reimagining Healthcare: Unleashing the Power of Artificial Intelligence in Medicine. Cureus 2023; 15:e44658. PMID: 37799217; PMCID: PMC10549955; DOI: 10.7759/cureus.44658.
Abstract
Artificial intelligence (AI) has opened new medical avenues and revolutionized diagnostic and therapeutic practices, allowing healthcare providers to overcome significant challenges associated with cost, disease management, accessibility, and treatment optimization. Prominent AI technologies such as machine learning (ML) and deep learning (DL) have immensely influenced diagnostics, patient monitoring, novel pharmaceutical discoveries, drug development, and telemedicine. Significant innovations and improvements in disease identification and early intervention have been made using AI-generated algorithms for clinical decision support systems and disease prediction models. AI has remarkably impacted clinical drug trials by amplifying research into drug efficacy, adverse events, and candidate molecular design. AI's precision in analyzing patients' genetic, environmental, and lifestyle factors has led to individualized treatment strategies. During the COVID-19 pandemic, AI-assisted telemedicine set a precedent for remote healthcare delivery and patient follow-up. Moreover, AI-generated applications and wearable devices have allowed ambulatory monitoring of vital signs. However, apart from being immensely transformative, AI's contribution to healthcare is subject to ethical and regulatory concerns. Data protection and algorithm transparency in AI systems should adhere strictly to ethical principles. Rigorous governance frameworks should be in place before incorporating AI in mental health interventions through AI-operated chatbots, medical education enhancements, and virtual reality-based training. The role of AI in medical decision-making has certain limitations, underscoring the continued importance of hands-on clinical experience. Therefore, reaching an optimal balance between AI's capabilities and ethical considerations is crucial to ensure impartial and neutral performance in healthcare applications. This narrative review focuses on AI's impact on healthcare and the importance of ethical, balanced incorporation to realize its full potential.
Affiliation(s)
- Diana Carolina Cortés Jaimes
  - Epidemiology, Universidad Autónoma de Bucaramanga, Bucaramanga, COL
  - Medicine, Pontificia Universidad Javeriana, Bogotá, COL
- Pallavi Makineni
  - Medicine, All India Institute of Medical Sciences, Bhubaneswar, IND
- Sachin Subramani
  - Medicine and Surgery, Employees' State Insurance Corporation (ESIC) Medical College, Gulbarga, IND
- Sarah Hemaida
  - Internal Medicine, Istanbul Okan University, Istanbul, TUR
- Thanmai Reddy Thugu
  - Internal Medicine, Sri Padmavathi Medical College for Women, Sri Venkateswara Institute of Medical Sciences (SVIMS), Tirupati, IND
- Amna Naveed Butt
  - Medicine/Internal Medicine, Allama Iqbal Medical College, Lahore, PAK
- Pareena Kaur
  - Medicine, Punjab Institute of Medical Sciences, Jalandhar, IND
- Roheen Shahzad
  - Medicine, Combined Military Hospital (CMH) Lahore Medical College and Institute of Dentistry, Lahore, PAK
- Mustafa Arain
  - Internal Medicine, Civil Hospital Karachi, Karachi, PAK
5. Timmons AC, Duong JB, Fiallo NS, Lee T, Vo HPQ, Ahle MW, Comer JS, Brewer LC, Frazier SL, Chaspari T. A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health. Perspectives on Psychological Science 2023; 18:1062-1096. PMID: 36490369; PMCID: PMC10250563; DOI: 10.1177/17456916221134490.
Abstract
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fairness-aware AI in psychological science.
Affiliation(s)
- Adela C. Timmons
  - University of Texas at Austin Institute for Mental Health Research
  - Colliga Apps Corporation
- LaPrincess C. Brewer
  - Department of Cardiovascular Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota, United States
  - Center for Health Equity and Community Engagement Research, Mayo Clinic, Rochester, Minnesota, United States
6. Almeida A, Brás S, Sargento S, Pinto FC. Time series big data: a survey on data stream frameworks, analysis and algorithms. Journal of Big Data 2023; 10:83. PMID: 37274443; PMCID: PMC10225118; DOI: 10.1186/s40537-023-00760-1.
Abstract
Big data has a substantial role nowadays, and its importance has significantly increased over the last decade. Big data's biggest advantages are providing knowledge, supporting the decision-making process, and improving the use of resources, services, and infrastructures. The potential of big data increases when we apply it in real time, enabling real-time analysis, predictions, and forecasts, among many other applications. Our goal with this article is to provide a viewpoint on how to build a system capable of processing big data in real time, performing analysis, and applying algorithms. Such a system should be designed to handle vast amounts of data and to provide valuable knowledge through analysis and algorithms. This article explores current approaches and how they can be used for real-time operations and predictions.
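The streaming pattern at the heart of such systems can be made concrete with a small sketch: maintain a windowed statistic incrementally so each arriving event costs O(1), rather than recomputing over stored history. The SlidingWindowMean class and the simulated stream below are illustrative assumptions; a production deployment would consume events from one of the frameworks the survey covers, such as Kafka, Flink, or Spark Streaming:

```python
# Incremental sliding-window mean over a data stream: each event is
# processed in O(1), with no recomputation over stored history.
from collections import deque

class SlidingWindowMean:
    def __init__(self, size: int):
        self.window = deque(maxlen=size)
        self.total = 0.0

    def update(self, value: float) -> float:
        if len(self.window) == self.window.maxlen:
            self.total -= self.window[0]   # evict oldest value's contribution
        self.window.append(value)          # deque drops the oldest when full
        self.total += value
        return self.total / len(self.window)

# Simulated sensor stream; a real deployment would read from a stream
# source (e.g., a Kafka consumer or a Flink/Spark Streaming job).
stream = [12.1, 12.3, 15.0, 14.2, 13.8, 30.5]
mean = SlidingWindowMean(size=3)
for reading in stream:
    print(f"reading={reading:5.1f}  window mean={mean.update(reading):.2f}")
```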
Affiliation(s)
- Ana Almeida
  - Instituto de Telecomunicações, Aveiro, Portugal
  - Departamento de Eletrónica, Telecomunicações e Informática, Universidade de Aveiro, Aveiro, Portugal
- Susana Brás
  - Departamento de Eletrónica, Telecomunicações e Informática, Universidade de Aveiro, Aveiro, Portugal
  - IEETA, DETI, LASI, Universidade de Aveiro, Aveiro, Portugal
- Susana Sargento
  - Instituto de Telecomunicações, Aveiro, Portugal
  - Departamento de Eletrónica, Telecomunicações e Informática, Universidade de Aveiro, Aveiro, Portugal
- Filipe Cabral Pinto
  - Instituto de Telecomunicações, Aveiro, Portugal
  - Altice Labs, Aveiro, Portugal
7. Integrating a Blockchain-Based Governance Framework for Responsible AI. Future Internet 2023. DOI: 10.3390/fi15030097.
Abstract
This research paper reviews the potential of smart contracts for responsible AI, with a focus on frameworks, hardware, energy efficiency, and cyberattacks. Smart contracts are digital agreements executed on a blockchain, and they have the potential to revolutionize the way we conduct business by increasing transparency and trust. For responsible AI systems, smart contracts can play a crucial role in ensuring that the terms and conditions of the contract are fair and transparent, and that any automated decision-making is explainable and auditable. Furthermore, because the energy consumption of blockchain networks has been a matter of concern, this article explores the energy-efficiency element of smart contracts; energy efficiency may be enhanced by techniques such as off-chain processing and sharding. The study emphasises the need for careful auditing and testing of smart contract code to protect against cyberattacks, along with the use of secure libraries and frameworks to lessen the likelihood of smart contract vulnerabilities.
8. Algorithmic Fairness in AI. Business & Information Systems Engineering 2023. DOI: 10.1007/s12599-023-00787-x.
9. Dolata M, Katsiuba D, Wellnhammer N, Schwabe G. Learning with Digital Agents: An Analysis based on the Activity Theory. Journal of Management Information Systems 2023. DOI: 10.1080/07421222.2023.2172775.
10. Yogarajan V, Dobbie G, Leitch S, Keegan TT, Bensemann J, Witbrock M, Asrani V, Reith D. Data and model bias in artificial intelligence for healthcare applications in New Zealand. Frontiers in Computer Science 2022. DOI: 10.3389/fcomp.2022.1070493.
Abstract
Introduction: Developments in Artificial Intelligence (AI) are being adopted widely in healthcare. However, the introduction and use of AI may come with biases and disparities, resulting in concerns about healthcare access and outcomes for underrepresented indigenous populations. In New Zealand, Māori experience significant health inequities compared to the non-Indigenous population. This research explores equity concepts and fairness measures concerning AI for healthcare in New Zealand.
Methods: This research considers data and model bias in NZ-based electronic health records (EHRs). Two very distinct NZ datasets are used, one obtained from a hospital and another from multiple GP practices, both collected by clinicians. To ensure research equality and fair inclusion of Māori, we combine expertise in AI, the New Zealand clinical context, and te ao Māori. The mitigation of inequity needs to be addressed in data collection, model development, and model deployment. In this paper, we analyze data and algorithmic bias in data collection and in model development, training, and testing using health data collected by experts. We use fairness measures such as disparate impact scores, equal opportunity, and equalized odds to analyze tabular data. Furthermore, token frequencies, statistical significance testing, and fairness measures for word embeddings, such as the WEAT and WEFE frameworks, are used to analyze bias in free-form medical text. The AI model predictions are also explained using SHAP and LIME.
Results: This research analyzed fairness metrics for NZ EHRs while considering data and algorithmic bias. We show evidence of bias due to changes made in algorithmic design. Furthermore, we observe unintentional bias due to the underlying pre-trained models used to represent text data. This research addresses some vital issues while opening up the need and opportunity for future research.
Discussion: This research takes early steps toward developing a model of socially responsible and fair AI for New Zealand's population. We provide an overview of reproducible concepts that can be adopted for any NZ population data. Furthermore, we discuss the gaps and future research avenues that will enable more focused development of fairness measures suited to the needs and social structure of the New Zealand population. One of the primary focuses of this research was ensuring fair inclusion. As such, we combine expertise in AI, clinical knowledge, and the representation of indigenous populations. This inclusion of experts will be vital moving forward, providing a stepping stone toward the integration of AI for better outcomes in healthcare.
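Of the measures listed, WEAT is compact enough to sketch: it scores how differently two sets of target words associate with two sets of attribute words in an embedding space. The toy 3-dimensional vectors below are placeholders, and this is a plain reimplementation of the published effect-size formula rather than the WEFE library's API:

```python
# Plain reimplementation of the WEAT effect size (Caliskan et al., 2017);
# not the WEFE library's API. Vectors below are toy placeholders.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): mean cosine similarity of w to A minus mean to B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Standardized difference in association between target sets X and Y."""
    s_x = [association(x, A, B) for x in X]
    s_y = [association(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)

# Toy 3-d embeddings; in practice these come from the model under audit.
X = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]  # target set 1
Y = [np.array([0.0, 1.0, 0.1]), np.array([0.1, 0.9, 0.2])]  # target set 2
A = [np.array([0.8, 0.1, 0.2])]   # attribute set, e.g., "positive" terms
B = [np.array([0.1, 0.8, 0.1])]   # attribute set, e.g., "negative" terms

print(f"WEAT effect size: {weat_effect_size(X, Y, A, B):.2f}")
```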
11. The Cost of Fairness in AI: Evidence from E-Commerce. Business & Information Systems Engineering 2022. DOI: 10.1007/s12599-021-00716-w.
Abstract
Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these, technologies for so-called "AI fairness" have been developed, by which AI is adapted such that mathematical constraints for fairness are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce, where coupons are allocated according to clickstream sessions. In this setting, the authors find that AI fairness successfully adheres to fairness requirements while reducing overall prediction performance only slightly. However, they also find that AI fairness results in an increase in financial cost. In this way, the paper's findings contribute to designing information systems on the basis of AI fairness.
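The abstract does not say which mathematical constraint the authors impose, so the sketch below shows one common construction consistent with their description: post-processing per-group score thresholds so that both groups receive coupons at the same target rate (demographic parity). The scores, groups, and target rate are synthetic:

```python
# One way to enforce a demographic-parity constraint by post-processing:
# pick a score threshold per group so both groups receive coupons at the
# same target rate. Synthetic data; not the authors' method.
import numpy as np

def parity_thresholds(scores, group, target_rate):
    """Per-group threshold at the (1 - target_rate) quantile of scores."""
    return {g: np.quantile(scores[group == g], 1 - target_rate)
            for g in np.unique(group)}

rng = np.random.default_rng(seed=0)
scores = rng.random(1000)               # model's coupon-response scores
group = rng.integers(0, 2, size=1000)   # binary sensitive attribute (toy)

thr = parity_thresholds(scores, group, target_rate=0.2)
coupon = scores >= np.array([thr[g] for g in group])

for g in np.unique(group):
    print(f"group {g}: threshold={thr[g]:.3f}, "
          f"coupon rate={coupon[group == g].mean():.3f}")
```

The financial cost the paper quantifies would then correspond to the difference in expected profit between such a constrained allocation and an unconstrained one.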
12. Tarafdar M, Page X, Marabelli M. Algorithms as co-workers: Human algorithm role interactions in algorithmic work. Information Systems Journal 2022. DOI: 10.1111/isj.12389.
Affiliation(s)
- Monideepa Tarafdar
  - Isenberg School of Management, University of Massachusetts Amherst, Amherst, Massachusetts, USA
- Xinru Page
  - Computer Science Department, Brigham Young University, Provo, Utah, USA
- Marco Marabelli
  - Information and Process Management Department, Bentley University, Waltham, Massachusetts, USA
13. Caldwell S, Sweetser P, O’Donnell N, Knight MJ, Aitchison M, Gedeon T, Johnson D, Brereton M, Gallagher M, Conroy D. An Agile New Research Framework for Hybrid Human-AI Teaming: Trust, Transparency, and Transferability. ACM Transactions on Interactive Intelligent Systems 2022. DOI: 10.1145/3514257.
Abstract
We propose a new research framework by which the nascent discipline of human-AI teaming can be explored within experimental environments in preparation for transferal to real-world contexts. We examine the existing literature and unanswered research questions through the lens of an Agile approach to construct our proposed framework. Our framework aims to provide a structure for understanding the macro features of this research landscape, supporting holistic research into the acceptability of human-AI teaming to human team members and the affordances of AI team members. The framework has the potential to enhance decision-making and performance of hybrid human-AI teams. Further, our framework proposes the application of Agile methodology for research management and knowledge discovery. We propose a transferability pathway for hybrid teaming to be initially tested in a safe environment, such as a real-time strategy video game, with elements of lessons learned that can be transferred to real-world situations.
Affiliation(s)
- Tom Gedeon
  - Australian National University, Australia
14. Mihale-Wilson C, Hinz O, van der Aalst W, Weinhardt C. Corporate Digital Responsibility. Business & Information Systems Engineering 2022. DOI: 10.1007/s12599-022-00746-y.
15. Dolata M, Feuerriegel S, Schwabe G. A sociotechnical view of algorithmic fairness. Information Systems Journal 2021. DOI: 10.1111/isj.12370.
Affiliation(s)
- Mateusz Dolata
  - Department of Informatics, University of Zurich, Zurich, Switzerland
- Stefan Feuerriegel
  - Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
  - LMU Munich School of Management, LMU Munich, Munich, Germany
- Gerhard Schwabe
  - Department of Informatics, University of Zurich, Zurich, Switzerland